Is this something Audacity-ish or going in another direction? (Just curious, really.)
All I really have is "food for thought" here.
There are native mechanisms available, namely in CoronaGraphics.h, that let you provide your own pixel data (one-channel masks, RGB, or RGBA). At key points that data gets uploaded to the GPU, but in the meantime what you see in Lua is a fairly ordinary texture object, e.g. look into canvas textures. Plugins are able to build on this too, of course.
A "CoronaAudio.h" in a similar vein might be worthwhile, letting you provide, say, raw PCM samples. This would let you do any heavy-duty or just out-of-the-ordinary processing on your own end and then hand over the final results. (In particular, you could unravel the superposition of cosines and sines that make up your samples--a Fourier transform--putting you in the frequency domain, apply your changes there, then convert back.) Helper functions for getting the data out of known file formats would also be handy.
At the moment I believe most of Corona's audio logic happens in ALmixer, though I haven't explored it much. The alBufferData() calls are what actually submit samples for playback--though, as the name implies, that isn't necessarily immediate--so an "external sound source" would probably try to dovetail with one or more of those calls.
(Thinking about it, since ALmixer is modeled after an SDL library, it uses an ALmixer_RWops facility for generalizing I/O: read from file, memory, network, etc. That's probably a good way in.)