OpenGL depth testing
Aug 08 2019 03:04 PM
To the point: my digging has determined that the OpenGL term for this is "polygon offset", and I've just stumbled onto this file within the Corona source, which appears to hint at an implementation of some sort?
Am I barking up the wrong tree, or is depth testing already actually incorporated, at least to some extent? Perhaps the renderer is capable of this but the newPolygon call doesn't currently use it, or does but doesn't expose the capability to Lua?
If somebody more competent than me doesn't mind doing a little digging themselves, I'd appreciate the feedback. Or could somebody at Coronalabs explain how this works?
Yes. Or at least, polygon offsets are rather unimportant as far as this stuff goes. They come into play with stuff like "decals": stickers, graffiti, etc. that might coincide with polygons but exist apart from their texturing and so on.
As I also mentioned in some correspondence to you, this PR (Windows and Mac so far) lets you manage some stencil state. It's a bit unusual in that you make "display" objects with group "scope" to flip these states, but it was intended not to be too intrusive. The PR is a little messy since I apparently don't know how to properly do Git topic branches; anyhow, the commits starting from "Fumbling beginnings of custom commands..." make up the feature.
(Vlad suggested we probably need to figure out whether new changes like this will complicate adding certain platforms. I've been investigating this. It looks like it might indeed need a bit of tweaking, but I haven't yet done the deep dive and made a proof of concept for the particular target in question, despite some prep work.)
A lot of the depth state looks like the stencil state. If you look through some of the "custom command" files, I've even got those states stubbed out.
There is a z-coordinate in the Corona geometry, with a few notes suggesting some 3D experiments. (I mentioned in Slack earlier how this could be exploited in another way; furthermore I scribbled out some notes with assorted analysis on all this here.)
Would you be willing to do local (engine) builds to try stuff out?
Unfortunately, although from our previous conversation I loosely follow what you're doing, I'm just not OpenGL-savvy enough to understand very much of it. Would your approach allow for individual pixel depths to be calculated and masked out based on vertex-defined depths? Or would the entire polygon be given one depth value as a display object?
Per-vertex is the only way that really makes sense.
Typically in an OpenGL application you have "object space" coordinates: where all the vertices sit in relation to the origin, oriented along the standard x, y, z directions. It's then common to map these into world space (rotating onto different x, y, z axes; scaling; translating away from the origin) and then un-map the result into the eye's, or camera's, own object space. (That's the model-view matrix.) The depth is found here and will be camera-relative, with some cutoff distance. (The projection matrix comes in here.)
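To make that last step concrete, here's a small self-contained C++ sketch (not Corona code; the matrix layout and helper names are my own) of how an eye-space point picks up a window depth via a gluPerspective-style projection matrix and the perspective divide:

```cpp
#include <array>
#include <cmath>

// Row-major 4x4 matrix times a homogeneous point.
using Vec4 = std::array<float, 4>;
using Mat4 = std::array<float, 16>;

Vec4 transform(const Mat4& m, const Vec4& v) {
    Vec4 out{};
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[r] += m[r * 4 + c] * v[c];
    return out;
}

// Classic OpenGL perspective projection (same shape as gluPerspective):
// eye-space z in [-zNear, -zFar] ends up as NDC z in [-1, +1].
Mat4 perspective(float fovyRadians, float aspect, float zNear, float zFar) {
    float f = 1.0f / std::tan(fovyRadians / 2.0f);
    Mat4 m{};
    m[0]  = f / aspect;
    m[5]  = f;
    m[10] = (zFar + zNear) / (zNear - zFar);
    m[11] = (2.0f * zFar * zNear) / (zNear - zFar);
    m[14] = -1.0f;
    return m;
}

// Depth as the rasterizer would see it: clip space -> NDC via the
// perspective divide, then into [0, 1] per the default glDepthRange.
float windowDepth(const Mat4& proj, const Vec4& eyePoint) {
    Vec4 clip = transform(proj, eyePoint);
    float ndcZ = clip[2] / clip[3];
    return ndcZ * 0.5f + 0.5f;
}
```

A point at eye-space z = -1 gets a smaller `windowDepth` than one at z = -50, which is exactly what the depth test compares; note also how non-linear the result is, with most of the [0, 1] range spent near the camera.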
With polygons and meshes it might make sense to just pass in (x, y, z) triples rather than (x, y) pairs, since you wouldn't need to torture the interface, just set some z flag somewhere. Not sure about other objects.
I've been considering implementing my own depth buffer to then handle intersecting polygons. I already have z's for each vertex relative to the screen, so it'd technically be possible to render the polygon to a texture, iterate over the rendered pixels and calculate depths from the vertex depths, then compare those against an array of previously calculated depths: either update the array if the new pixel is closer, or render a dot to a new mask texture if the array value is closer. Finally the polygon and its mask could be copied over to screen space.
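The core of that scheme can be sketched without any rendering at all. Here's a hypothetical C++ version (the names `DepthBuffer`, `testAndSet`, and `interpolateDepth` are mine, not Corona's): a depth array initialized to "infinitely far", a closer-wins test per pixel, and per-pixel depths interpolated from the three vertex depths via barycentric weights, which is essentially what the hardware rasterizer does (ignoring perspective correction for this 2D-ish case):

```cpp
#include <limits>
#include <vector>

// One screen-space vertex: x, y in pixels, z = depth (smaller = closer).
struct Vertex { float x, y, z; };

// A minimal software depth buffer.
struct DepthBuffer {
    int width, height;
    std::vector<float> z;
    DepthBuffer(int w, int h)
        : width(w), height(h),
          z(w * h, std::numeric_limits<float>::infinity()) {}

    // The depth test itself: keep the pixel (and record its depth)
    // only if it's closer than whatever was drawn there before.
    bool testAndSet(int x, int y, float depth) {
        float& stored = z[y * width + x];
        if (depth < stored) { stored = depth; return true; }
        return false;
    }
};

// Per-pixel depth from the three vertex depths, by barycentric weights:
// each weight is the sub-triangle area opposite that vertex over the
// whole triangle's area.
float interpolateDepth(const Vertex& a, const Vertex& b, const Vertex& c,
                       float px, float py) {
    float area = (b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y);
    float w0 = ((b.x - px) * (c.y - py) - (c.x - px) * (b.y - py)) / area;
    float w1 = ((c.x - px) * (a.y - py) - (a.x - px) * (c.y - py)) / area;
    float w2 = 1.0f - w0 - w1;
    return w0 * a.z + w1 * b.z + w2 * c.z;
}
```

At a vertex the interpolated depth equals that vertex's z, and at the centroid it's the average of all three; a second `testAndSet` at the same pixel only succeeds if the new depth is smaller.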
I'm sure this approach would be slow, though, compared to letting OpenGL do the same thing. For a start, I'm guessing OpenGL does this at the hardware level and in a single render pass, rather than drawing everything and then poking holes. That's why I'm now thinking that if the polygon drawing code in Corona itself could just be extended to allow passing those vertex z's, it could in turn hand over to OpenGL for depth testing at that level instead.