
OpenGL depth testing
Started by richard11 Aug 08 2019 03:04 PM

4 replies to this topic
#1

richard11
  • Contributor
  • 448 posts
  • Corona SDK

I don't particularly speak C++ or OpenGL, but I've been curiously looking over some of the Corona source to see whether it would be feasible to extend the newPolygon() implementation to do per-pixel depth testing. My thinking is that if, when defining the vertices of a polygon, you could also (optionally, of course) pass a z value for each vertex, then the actual OpenGL render of that polygon should be able to depth-test each pixel against previously drawn polygons and skip rendering the individual pixels that would sit behind those already drawn. Long story short, this would let my 3D engine leave the handling of intersecting faces to OpenGL, which would be hugely advantageous for performance.
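From what I've read, the OpenGL side of what I'm describing boils down to a little state setup, roughly like this (a sketch of the standard depth-test calls, not anything from the Corona source):

    #include <GL/glew.h>

    // Sketch of standard GL depth-test state -- not Corona's actual
    // renderer code. With this enabled, the GPU interpolates z across
    // each polygon and discards occluded pixels automatically.
    void beginDepthTestedFrame()
    {
        glEnable(GL_DEPTH_TEST);   // compare each incoming pixel's z to the depth buffer
        glDepthFunc(GL_LESS);      // keep the pixel only if it is closer than what's stored
        glClearDepth(1.0);         // the "farthest" value the buffer resets to
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    }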

To the point, my digging has determined that the OpenGL term for this is a "polygon_offset", and I've just stumbled onto this file within the Corona source which appears to hint at an implementation of some sort?

https://github.com/coronalabs/corona/blob/83ef32d39e658af276daaa30bfcac4e04f324341/external/glew/Project/auto/extensions/gl/GL_EXT_polygon_offset

Am I barking up the wrong tree, or is depth testing already actually incorporated, at least to some extent? Perhaps the renderer is capable of this but the newPolygon call doesn't currently use it, or does but doesn't expose the capability to Lua?

If somebody more competent than me doesn't mind doing a little digging themselves, I'd appreciate the feedback. Or could somebody at Coronalabs explain how this works?

Much appreciated.

#2

StarCrunch
  • Contributor
  • 814 posts
  • Corona SDK

 

Yes.  :D Or at least, polygon offsets are rather unimportant as far as this stuff goes. They come into play with things like "decals": stickers, graffiti, etc. that might coincide with existing polygons but exist apart from their texturing and so on.
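If it helps to picture the decal case, in raw GL it's roughly this (just a sketch; nothing Corona currently exposes):

    #include <GL/glew.h>

    // Rough sketch of the decal use case: nudge the decal's depth values
    // slightly toward the camera so it wins the depth test against the
    // coplanar surface underneath, instead of z-fighting with it.
    void drawDecalOverSurface()
    {
        glEnable(GL_POLYGON_OFFSET_FILL);
        glPolygonOffset(-1.0f, -1.0f);  // negative factor / units pull the decal closer
        // ... draw the decal geometry here ...
        glDisable(GL_POLYGON_OFFSET_FILL);
    }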

 

As I also mentioned in some correspondence with you, this PR (Windows and Mac so far) lets you manage some stencil state. It's a bit unusual in that you make "display" objects with group "scope" to flip these states, but it was intended not to be too intrusive. The PR is a little messy since I apparently don't know how to properly do Git topic branches; anyhow, the commits starting from "Fumbling beginnings of custom commands..." make up the feature.
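For a rough idea of what the PR is wrapping, the underlying GL stencil state looks something like this (illustrative only; the Lua-facing API in the PR is different):

    #include <GL/glew.h>

    // Illustrative sketch of raw GL stencil masking; the PR wraps state
    // like this behind display objects with group scope.
    void drawMaskedContent()
    {
        glEnable(GL_STENCIL_TEST);

        // Pass 1: write 1s into the stencil buffer wherever the mask shape lands.
        glStencilFunc(GL_ALWAYS, 1, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
        // ... draw mask geometry ...

        // Pass 2: draw normally, but only where the stencil buffer holds a 1.
        glStencilFunc(GL_EQUAL, 1, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        // ... draw the masked content ...
    }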

 

(Vlad suggested we probably need to figure out whether new changes like this will complicate adding certain platforms. I've been investigating this. It looks like it might indeed need a bit of tweaking, but I haven't yet done the deep dive and made a proof of concept for the particular target in question, despite some prep work.)

 

A lot of the depth state looks like the stencil state. If you look through some of the "custom command" files, I've even got those states stubbed out.

 

There is a z-coordinate in the Corona geometry, with a few notes suggesting some 3D experiments. (I mentioned in Slack earlier how this could be exploited in another way; furthermore I scribbled out some notes with assorted analysis on all this here.)

 

Would you be willing to do local (engine) builds to try stuff out?



#3

richard11
  • Contributor
  • 448 posts
  • Corona SDK

I was quietly hoping you'd jump onto this thread, ha 😏

Unfortunately, although I loosely follow what you're doing from our previous conversation, I'm just not OpenGL-savvy enough to understand very much of it. Would your approach allow individual pixel depths to be calculated and masked out based on vertex-defined depths? Or would the entire polygon be given one depth value as a display object?

#4

StarCrunch
  • Contributor
  • 814 posts
  • Corona SDK

Per-vertex is the only way that really makes sense.

 

Typically, in an OpenGL application you have "object space" coordinates: all of a model's vertices positioned relative to its own origin and oriented along the standard x, y, z directions. It's then common to map these into world space (rotating into different x, y, z axes; scaling; translating away from the origin), and then to un-map the result into the eye's, or camera's, own object space. (This is the model-view matrix.) The depth is found here and will be camera-relative, with some cutoff distance. (The projection matrix comes in here.)
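In code, that chain looks roughly like the following (using GLM purely for illustration; all the matrix values here are made up):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Sketch of the usual transform chain: object space -> world space
    // -> eye space -> clip space. The depth the hardware tests falls
    // out of the last two steps.
    glm::vec4 toClipSpace(const glm::vec3& objectSpacePos)
    {
        glm::mat4 model = glm::translate(glm::mat4(1.0f),            // world placement
                                         glm::vec3(0.0f, 0.0f, -5.0f));
        glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f, 3.0f),    // camera position
                                     glm::vec3(0.0f),                // look-at target
                                     glm::vec3(0.0f, 1.0f, 0.0f));   // up direction
        glm::mat4 projection = glm::perspective(glm::radians(60.0f), // vertical field of view
                                                16.0f / 9.0f,        // aspect ratio
                                                0.1f, 100.0f);       // near / far cutoffs
        return projection * view * model * glm::vec4(objectSpacePos, 1.0f);
    }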

 

With polygons and meshes it might make sense to just pass in (x, y, z) triples rather than (x, y) pairs, since you wouldn't need to torture the interface; you'd just set some z flag somewhere. Not sure about other objects.
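On the GL side the change really would be small; something like this (illustrative only; the attribute index and vertex layout are made up, not Corona's actual format):

    #include <GL/glew.h>

    // Illustrative only: the point is that position grows from two
    // floats per vertex to three.
    struct Vertex { float x, y, z; };  // previously just { float x, y; }

    void setPositionAttribute()
    {
        glVertexAttribPointer(0,               // attribute index (assumed)
                              3,               // three components: (x, y, z) instead of (x, y)
                              GL_FLOAT, GL_FALSE,
                              sizeof(Vertex),  // stride between vertices
                              nullptr);        // offset into the bound vertex buffer
        glEnableVertexAttribArray(0);
    }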



#5

richard11
  • Contributor
  • 448 posts
  • Corona SDK

That was exactly my thinking, and in fact that's exactly how my engine works. Objects have a location in world space, and their vertices have a location in object space relative to that world location. Objects also have a scale and a rotation. The camera, too, has a location in world space and its own rotation. The renderer then iterates over the objects and constructs faces from those vertices, calculating their true screen coordinates after scale and rotation offsets, relative to the camera's location and rotation. Finally, those faces are drawn as individual polygons.

I've been considering implementing my own depth buffer to then handle intersecting polygons. I already have z's for each vertex relative to the screen, so it'd technically be possible to render the polygon to a texture, iterate over the rendered pixels and calculate depths from the vertex depths, then compare those against an array of previously calculated depths, either updating the array if the new pixel is closer or rendering a dot to a new mask texture if the array value is closer. Finally, the polygon and its mask could be copied over to screen space.
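Roughly what I have in mind, as a sketch (the names here are hypothetical, and a real version would interpolate each pixel's z from the polygon's vertex depths):

    #include <vector>

    // Sketch of the software depth buffer idea: one float per screen
    // pixel, tested before each rendered pixel is kept.
    struct DepthBuffer
    {
        int width, height;
        std::vector<float> depths;

        DepthBuffer(int w, int h)
            : width(w), height(h), depths(w * h, 1e30f) {}  // start everything "infinitely" far away

        // Returns true if this pixel is closer than anything drawn there
        // so far (and records its depth); false means mask the pixel out.
        bool testAndSet(int x, int y, float z)
        {
            float& stored = depths[y * width + x];
            if (z < stored) { stored = z; return true; }
            return false;
        }
    };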

This approach would, I'm sure, be slow compared to leaving OpenGL to do the same. For a start, I'm guessing OpenGL does this at the hardware level and in a single render pass, rather than drawing everything and then poking holes. Hence I'm now thinking that if the polygon-drawing code in Corona itself could just be extended to allow passing those vertex z's, it could in turn hand over to OpenGL for depth testing at that level instead.

