Yes. Or at least, polygon offsets are rather unimportant as far as this stuff goes. They come into play with stuff like "decals": stickers, graffiti, etc. that might coincide with polygons but exist apart from their texturing and so on.
As I also mentioned in some correspondence to you, this PR (Windows and Mac so far) lets you manage some stencil state. It's a bit unusual in that you make "display" objects with group "scope" to flip these states, but the intent was to avoid being too intrusive. The PR is a little messy since I apparently don't know how to properly do Git topic branches; anyhow, the commits starting from "Fumbling beginnings of custom commands..." make up the feature.
(Vlad suggested we probably need to figure out whether new changes like this will complicate adding certain platforms. I've been investigating this. It looks like it might indeed need a bit of tweaking, but I haven't yet done the deep dive and made a proof of concept for the particular target in question, despite some prep work.)
A lot of the depth state looks like the stencil state. If you look through some of the "custom command" files, I've even got those states stubbed out.
There is a z-coordinate in the Corona geometry, with a few notes suggesting some 3D experiments. (I mentioned in Slack earlier how this could be exploited in another way; furthermore I scribbled out some notes with assorted analysis on all this here.)
Would you be willing to do local (engine) builds to try stuff out?
Per-vertex is the only way that really makes sense.
Typically in an OpenGL application you have "object space" coordinates: where all the vertices sit relative to the origin, oriented along the standard x, y, z directions. It's then common to map these into world space (rotating into different x, y, z axes; scaling; translating away from the origin) and then transform the result into the eye's, i.e. the camera's, own object space. (This is the model-view matrix.) Depth is found here and is camera-relative, clamped between near and far cutoff distances. (This is where the projection matrix comes in.)
With polygons and meshes it might make sense to just pass in (x, y, z) triples rather than (x, y) pairs, since you wouldn't need to torture the interface, just set some z flag somewhere. Not sure about other objects.
@sporkfin If the book is at all recent, you ought to be able to carry a lot of ideas over, if not the particular API calls. Depth buffers certainly aren't going away, for instance. (OpenGLES2 versus Metal will be a much wider gulf, on the other hand.)
@richard11 I made another local copy of that PR branch, and might attempt some baby steps with depth in the coming days. The GL state itself shouldn't be bad, but integrating with display objects and shaders will take some doing.
On that note, "the particular target in question" I alluded to above is Vulkan. I read through this a couple months ago plus a few other sources and have a rough idea of how to tackle it, but it's sort of all-or-nothing at first. (It's been a pretty punishing, motivation-sapping summer here, as well.) We'll see if I get anywhere... Anyhow, as Vlad has mentioned elsewhere, that would seem to pave the way for Metal as well, via MoltenVK.
With respect to OpenGL, a few things look like they'll remain similar; many definitely do not, though much of that will be internal. By the sounds of it, you get performance hitches the first time a novel render state configuration (blending / shader / texture settings / depth / stencil / etc.) is used, so you want to know up front everything you'll end up needing. Obviously I've yet to see how apparent this is in practice, but it seems the pipeline cache could end up as a binary asset you either ship or download, in games that want to really push and eke out Vulkan's benefits. This might dovetail with some workflows anyway, e.g. consoles wanting pre-built shaders.