During a discussion a week or so ago I mentioned that I had a post-processing setup in one of my projects, but that it was somewhat tangled up in that project's code. Since last night I've managed to put together a couple of examples, so I figure I'll post them here too.
Basically, you assign whatever groups / objects you want and they get captured to a screen-size canvas, which you then apply to a full-screen rect. So far that's just a snapshot. But above that, you also have objects whose shaders read from that same canvas, using their on-screen position to sample it. This lets you incorporate the screen contents into generic effects.
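For reference, here is a minimal sketch of that setup using Solar2D / Corona's canvas texture API. The variable names, sizes, and the frame-by-frame invalidation are my own assumptions, not details from the examples:

```lua
-- Screen-size canvas texture that will receive the captured objects
local texture = graphics.newTexture{
    type = "canvas",
    width = display.contentWidth,
    height = display.contentHeight
}

-- Objects you want post-processed get drawn into the canvas instead of
-- directly to the stage (here via a group, for convenience)
local captureGroup = display.newGroup()
texture:draw(captureGroup)

-- Full-screen rect that displays the captured result
local rect = display.newRect(display.contentCenterX, display.contentCenterY,
    display.contentWidth, display.contentHeight)
rect.fill = { type = "image", filename = texture.filename, baseDir = texture.baseDir }

-- Re-render the canvas each frame so moving objects stay current
Runtime:addEventListener("enterFrame", function()
    texture:invalidate()
end)
```

Objects layered above the rect can then use the same `texture.filename` / `texture.baseDir` pair as their fill and apply a shader effect to it, which is where the "read back the screen" part comes in.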
In the first example I move a couple of objects around, hoping to make it more obvious that the shader isn't just using the angel image directly. I didn't do this in the second example, but the same is true there.
The first example creates a bunch of little dummy objects and rotates them to add some extra distortion (the IQ noise algorithm incorporates the current texture coordinate, so this shakes it up a bit).
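A rough sketch of that wiring, under my own assumptions (the effect name `filter.custom.iqNoise` is hypothetical, and the actual noise shader body is elided):

```lua
-- Canvas texture as described above (abbreviated)
local texture = graphics.newTexture{ type = "canvas",
    width = display.contentWidth, height = display.contentHeight }

-- Small dummy rects that sample the canvas through a noise-based effect
local dummies = {}

for i = 1, 10 do
    local r = display.newRect(math.random(display.contentWidth),
        math.random(display.contentHeight), 16, 16)
    r.fill = { type = "image", filename = texture.filename, baseDir = texture.baseDir }
    r.fill.effect = "filter.custom.iqNoise" -- hypothetical custom effect
    dummies[#dummies + 1] = r
end

Runtime:addEventListener("enterFrame", function()
    for _, r in ipairs(dummies) do
        -- Rotating changes the texture coordinates each fragment receives,
        -- which perturbs the coordinate-dependent noise
        r.rotation = r.rotation + 2
    end
end)
```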
The second example uses Corona's built-in filters. Unfortunately these are a bit limited here, since you have to compute the texture coordinates manually (a leaky implementation detail from multi-pass shaders), so without a bit of extra work the effect needs to be axis-aligned. But under that constraint it works.
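The manual texture-coordinate step can be sketched with Solar2D's fill transform properties, assuming an axis-aligned rect that should only "see" the screen region directly beneath it. This is my own reconstruction, and the offset signs / normalization may well need adjusting:

```lua
-- Screen-size canvas as before (abbreviated)
local tex = graphics.newTexture{ type = "canvas",
    width = display.contentWidth, height = display.contentHeight }

local w, h = 120, 80
local r = display.newRect(200, 150, w, h)
r.fill = { type = "image", filename = tex.filename, baseDir = tex.baseDir }

-- Scale the fill so only a rect-sized window of the canvas is visible...
r.fill.scaleX = display.contentWidth / w
r.fill.scaleY = display.contentHeight / h

-- ...then offset it so that window lines up with the rect's screen position
-- (normalized offsets; exact convention may differ)
r.fill.x = (display.contentCenterX - r.x) / display.contentWidth
r.fill.y = (display.contentCenterY - r.y) / display.contentHeight

-- Any built-in filter can then run over the sampled region
r.fill.effect = "filter.blurGaussian"
```

The axis-aligned restriction falls out of this: the fill transform lines up a rectangular window of the canvas with the rect, and once the rect rotates that correspondence breaks without extra shader work.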
I have a (currently vague) mesh-based idea in mind too, so that might follow.
There are some details involved in removing the canvas and rect, e.g. if you need to unload a scene. Maybe I'll add an example covering that as well.
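The teardown itself might look something like the following, e.g. in a composer scene's destroy phase; the function and its arguments are hypothetical, but `releaseSelf()` is the canvas texture's own cleanup call:

```lua
-- Assumes texture / rect / listener were created as in the setup sketch
local function teardown(texture, rect, listener)
    -- Stop invalidating the canvas before tearing anything down
    Runtime:removeEventListener("enterFrame", listener)

    rect:removeSelf()      -- remove the on-screen rect first...
    texture:releaseSelf()  -- ...then release the canvas texture itself
end
```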