OpenGL depth testing
Started by richard11 Aug 08 2019 03:04 PM

7 replies to this topic
#1

richard11
  • Contributor
  • 473 posts
  • Corona SDK

I don't particularly speak C++ or OpenGL, but I've been curiously looking over some of the Corona source to see if it would be feasible to extend the newPolygon() implementation to do pixel depth testing. My thinking is that if, when defining the vertices of a polygon, you could also (optionally, of course) pass a z value for each vertex, then the actual OpenGL render of that polygon should be able to depth test each pixel against previously drawn polygons and skip rendering the individual pixels that would sit behind those already drawn. Long story short, this would allow my 3D engine to leave the handling of intersecting faces up to OpenGL, which would be hugely advantageous for performance.
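To illustrate the kind of per-pixel comparison I mean, here's a toy Python sketch (not Corona code — the buffer size, coordinates, and depth values are all made up):

```python
# Toy sketch of the per-pixel depth test described above: a depth
# value is stored per pixel, and a new fragment is only drawn when
# it is closer than whatever is already there.

def depth_test(depth_buffer, x, y, z):
    """Return True and update the buffer if depth z at (x, y) is
    closer than the stored value; False means the pixel is skipped."""
    if z < depth_buffer[(x, y)]:
        depth_buffer[(x, y)] = z
        return True
    return False

# Start every pixel at "infinitely far away".
buf = {(x, y): float("inf") for x in range(4) for y in range(4)}

print(depth_test(buf, 1, 1, 0.8))  # first polygon's pixel: drawn
print(depth_test(buf, 1, 1, 0.5))  # a closer polygon: drawn over it
print(depth_test(buf, 1, 1, 0.7))  # behind the 0.5 pixel: skipped
```

The real thing would of course happen on the GPU per fragment, but the comparison itself is this simple.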

To the point, my digging has determined that the OpenGL term for this is a "polygon_offset", and I've just stumbled onto this file within the Corona source which appears to hint at an implementation of some sort?

https://github.com/coronalabs/corona/blob/83ef32d39e658af276daaa30bfcac4e04f324341/external/glew/Project/auto/extensions/gl/GL_EXT_polygon_offset

Am I barking up the wrong tree, or is depth testing already actually incorporated, at least to some extent? Perhaps the renderer is capable of this but the newPolygon call doesn't currently use it, or does but doesn't expose the capability to Lua?

If somebody more competent than me doesn't mind doing a little digging themselves I'd appreciate the feedback. Or if somebody at Coronalabs could explain how this works?

Much appreciated.

#2

StarCrunch
  • Contributor
  • 817 posts
  • Corona SDK

Yes.  :D Or at least, polygon offsets are rather unimportant as far as this stuff goes. They come into play with stuff like "decals": stickers, graffiti, etc. that might coincide with polygons but exist apart from their texturing and so on.

As I also mentioned in some correspondence to you, this PR (Windows and Mac so far) lets you manage some stencil state. It's a bit unusual in that you make "display" objects with group "scope" to flip these states, but it was intended not to be too intrusive. The PR is a little messy since I apparently don't know how to properly do Git topic branches; anyhow, the commits starting from "Fumbling beginnings of custom commands..." make up the feature.

(Vlad suggested we probably need to figure out whether new changes like this will complicate adding certain platforms. I've been investigating this. It looks like it might indeed need a bit of tweaking, but I haven't yet done the deep dive and made a proof of concept for the particular target in question, despite some prep work.)

A lot of the depth state looks like the stencil state. If you look through some of the "custom command" files, I've even got those states stubbed out.

There is a z-coordinate in the Corona geometry, with a few notes suggesting some 3D experiments. (I mentioned in Slack earlier how this could be exploited in another way; furthermore I scribbled out some notes with assorted analysis on all this here.)

Would you be willing to do local (engine) builds to try stuff out?



#3

richard11
  • Contributor
  • 473 posts
  • Corona SDK

I was quietly hoping you'd jump onto this thread, ha 😏

Unfortunately, although I loosely follow what you're doing from our previous conversation, I'm just not OpenGL-savvy enough to understand very much of it. Would your approach allow individual pixel depths to be calculated and masked out based on vertex-defined depths? Or would the entire polygon be given one depth value as a display object?

#4

StarCrunch
  • Contributor
  • 817 posts
  • Corona SDK

Per-vertex is the only way that really makes sense.

Typically in an OpenGL application you would have "object space" coordinates: where all the vertices are in relation to the origin and oriented along the standard x, y, z directions. It's then common to map these into world space (rotating into different x, y, z; scaling; translating from the origin) and then un-map the result into the eye or camera's object space. (Model-view matrix.) The depth is found here and will be camera-relative, with some cutoff distance. (Projection matrix comes in here.)
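As a rough numerical sketch of that chain (Python rather than anything Corona-specific; the positions, angles, and near/far values are invented, and the final depth mapping is a simplified linear one rather than a true perspective projection):

```python
import math

# Object space -> world space -> eye space, as described above.
# All numbers here are made up purely for illustration.

def transform(v, angle, scale, translate):
    """Rotate about the y axis, scale uniformly, then translate."""
    x, y, z = v
    c, s = math.cos(angle), math.sin(angle)
    rx, rz = x * c + z * s, -x * s + z * c          # rotate
    rx, y, rz = rx * scale, y * scale, rz * scale   # scale
    tx, ty, tz = translate
    return (rx + tx, y + ty, rz + tz)               # translate

# Model transform: place the object's vertex in world space.
world = transform((1.0, 0.0, 0.0), math.pi / 2, 2.0, (0.0, 0.0, 5.0))

# View transform: the inverse of the camera's own placement
# (here a camera sitting at z = -3, unrotated), giving eye space.
eye = transform(world, 0.0, 1.0, (0.0, 0.0, 3.0))

# The eye-space z is the camera-relative depth the depth test uses;
# the projection then squeezes it into a 0..1 range between the
# near and far cutoff planes (linearised here for simplicity).
near, far = 1.0, 100.0
depth = (eye[2] - near) / (far - near)
print(round(eye[2], 3), round(depth, 3))
```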

With polygons and meshes it might make sense to just pass in (x, y, z) triples rather than (x, y) pairs, since you wouldn't need to torture the interface, just set some z flag somewhere. Not sure about other objects.



#5

richard11
  • Contributor
  • 473 posts
  • Corona SDK

That was exactly my thinking, and in fact that's exactly how my engine works. Objects have a location in world space and their vertices have a location in object space, relative to that world location. Objects also have a scale and a rotation. The camera too has a location in world space and a rotation itself. Then the renderer iterates over the objects and constructs faces from those vertices, calculating their true screen co-ordinates after scale and rotation offsets and relative to the camera location and rotation. Finally, those faces are drawn as individual polygons.

I've been considering implementing my own depth buffer to then handle intersecting polygons. I already have z's for each vertex relative to the screen, so it'd technically be possible to render the polygon to a texture, iterate the rendered pixels and calculate depths based on the vertex depths, then compare those depths against an array of previously calculated depths, and either update the array if the new pixel is closer or render a dot to a new mask texture if the array value is closer. Finally, the polygon and its mask could be copied over to screen space.
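Something like this, roughly — a Python sketch just to pin the idea down (the engine itself is Lua, and the triangle coordinates and depths here are arbitrary): interpolate each covered pixel's depth from the three vertex z's with barycentric weights, then either claim the pixel or punch it into the mask.

```python
# Sketch of the manual depth-buffer idea above: per-pixel depth is
# interpolated from the three vertex z's via barycentric weights,
# then compared against a running depth array; losing pixels are
# recorded in a mask.

def barycentric(p, a, b, c):
    """Barycentric weights of point p in triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    d = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w0 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / d
    w1 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / d
    return w0, w1, 1.0 - w0 - w1

def rasterise(tri, zs, depth, mask):
    """Depth-test every covered pixel; record hidden ones in mask."""
    for p in sorted(depth):
        w = barycentric(p, *tri)
        if min(w) < 0:                 # pixel outside the triangle
            continue
        z = sum(wi * zi for wi, zi in zip(w, zs))  # interpolated depth
        if z < depth[p]:
            depth[p] = z               # closer: this polygon wins
        else:
            mask.add(p)                # hidden: punch a hole here

depth = {(x, y): float("inf") for x in range(8) for y in range(8)}
m1, m2 = set(), set()

# Near triangle drawn first...
rasterise(((0, 0), (7, 0), (0, 7)), (1.0, 1.0, 1.0), depth, m1)
# ...then a farther, overlapping one: overlapped pixels get masked.
rasterise(((0, 0), (7, 0), (7, 7)), (2.0, 2.0, 2.0), depth, m2)

print(len(m1), len(m2) > 0)
```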

This approach would, I'm sure, be slow compared to leaving OpenGL to do the same. For a start, I'm guessing OpenGL does this at hardware level and in a single render pass, rather than drawing everything and then poking holes. Hence I'm now thinking that if the polygon drawing code in Corona itself could just be extended to allow passing those vertex z's, it could in turn hand over to OpenGL for depth testing at that level instead.

#6

sporkfin
  • Contributor
  • 593 posts
  • Corona SDK

Hey Richard,

Apple is deprecating OpenGL so you might want to see what Corona's workaround is (which GL they will use in the future) before you invest too heavily in OpenGL.  I bought a huge OpenGL book the same week Apple announced it was moving to Metal.



#7

StarCrunch
  • Contributor
  • 817 posts
  • Corona SDK

@sporkfin If the book is at all recent, you ought to be able to carry a lot of ideas over, if not the particular API calls. Depth buffers certainly aren't going away, for instance. (OpenGL ES 2 versus Metal will be a much wider gulf, on the other hand.)

@richard11 I made another local copy of that PR branch, and might attempt some baby steps with depth in the coming days. The GL state itself shouldn't be bad, but integrating with display objects and shaders will take some doing.

On that note, "the particular target in question" I alluded to above is Vulkan. I read through this a couple months ago plus a few other sources and have a rough idea of how to tackle it, but it's sort of all-or-nothing at first.  :D (It's been a pretty punishing, motivation-sapping summer here, as well.) We'll see if I get anywhere... Anyhow, as Vlad has mentioned elsewhere, that would seem to pave the way for Metal as well, via MoltenVK.

With respect to OpenGL, a few things look like they'll remain similar; many definitely do not, though much of this will be internal. By the sounds of it, you will, on first use, get performance hitches for novel render state configurations (blending / shader / texture settings / depth / stencil / etc.) and so want to know everything you'll end up needing. Obviously I've yet to see how apparent this is in practice, but it seems the pipeline cache could end up as a binary asset you either ship or download, in games that wanted to really push and eke out Vulkan's benefits. This might dovetail with some workflows anyway, e.g. consoles wanting pre-built shaders.



#8

richard11
  • Contributor
  • 473 posts
  • Corona SDK

I'd actually forgotten I'd started this topic! I'd also forgotten about the OpenGL deprecation, and am now glad I didn't invest too much time into this. Equally, though, I'd be hugely interested in what you come up with, and it sounds like you'd be more than capable of porting your implementation once Corona has replaced the OpenGL wrapping, so presumably your work won't be wasted...

