Thanks for the answers! I see the problem with GPU-land.
Firstly, I am very glad that antialiasing actually made it! I suppose in time it's gonna be supported in the simulator as well.
#3 Yeah, that's why it's not critical. It would just simplify some things on the developer's side.
#4 I think it's wise to use tables, because you can do transitions with them, and it's easy to store and restore the whole table. Since not a whole lot of people are gonna use this feature, my proposal is to have a function that exposes the table to the Lua side, like obj:getVerticesTable(). It might use some metamethods to detect changes and update the object. And I am totally fine with :getVertex() and :setVertex() if nothing better can be made.
Performance-wise it is better to keep such a table of vertices flat (without nesting), and that's totally fine too.
Another option with a table would be not to use metamethods but to tell the object directly that it has to update itself, like obj:updateVertices().
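The metamethod idea can be sketched in plain Lua: a proxy table whose __index/__newindex forward reads and writes to a hidden flat coordinate array and flag the object as dirty on writes. Everything here is hypothetical illustration, not existing Corona API:

```lua
-- Sketch of a change-tracking vertices table (hypothetical, not Corona API).
-- Vertices are stored flat: { x1, y1, x2, y2, ... }.

local function newVerticesTable(flatCoords)
  local store = flatCoords            -- the real data lives here
  local state = { dirty = false }
  local proxy = setmetatable({}, {
    __index = function(_, k) return store[k] end,
    __newindex = function(_, k, v)
      store[k] = v
      state.dirty = true              -- display object would redraw on next frame
    end,
  })
  return proxy, state
end

local verts, state = newVerticesTable({ 0, 0, 100, 0, 100, 100 })
print(verts[3])        -- read goes through __index -> 100
verts[3] = 50          -- write goes through __newindex and sets the dirty flag
print(state.dirty)     -- true
```

Since the proxy itself stays empty, every access hits the metamethods, so the engine would always know when the polygon needs re-tessellating.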
#6 and #9 OK, direct pixel access on the GPU is hard. But I want to be able to prepare/generate images at runtime in Lua (on the CPU) and then load such images into the GPU. I don't believe this would be hard, something like loadImageFromString() or loadImageFromTable(). There would be a new object type: an in-memory bitmap. It's better to have the implementation in C rather than using Lua tables for big images, although that would work too.
With that in mind, I would love to do various manipulations on such images: copy region, paste, overlay, resize, crop, compute a histogram, apply a function to each pixel, transpose, fill, apply some built-in filters like edge detection, contrast increase, and so on. That would lead to convenient digital image processing in Corona (like OpenCV; see https://github.com/marcoscoffier/lua---opencv). That would be really cool!
Such a big library for image manipulation should definitely become a plugin. Maybe even integrating OpenCV is not a bad idea, and the plugin could be released as open source.
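As a rough illustration of what such an in-memory bitmap could feel like on the Lua side (everything here is hypothetical; as noted above, a real implementation would live in C and just expose a similar interface):

```lua
-- Hypothetical in-memory grayscale bitmap as a flat Lua table (values 0..255).
local Bitmap = {}
Bitmap.__index = Bitmap

function Bitmap.new(w, h, fillValue)
  local px = {}
  for i = 1, w * h do px[i] = fillValue or 0 end
  return setmetatable({ w = w, h = h, px = px }, Bitmap)
end

-- Apply a function to each pixel: the "map" building block for filters.
function Bitmap:mapPixels(fn)
  for i = 1, self.w * self.h do
    self.px[i] = fn(self.px[i])
  end
  return self              -- returning self lets filter calls chain
end

-- 256-bin histogram of pixel values.
function Bitmap:histogram()
  local hist = {}
  for v = 0, 255 do hist[v] = 0 end
  for i = 1, self.w * self.h do
    local v = self.px[i]
    hist[v] = hist[v] + 1
  end
  return hist
end

local bmp = Bitmap.new(4, 4, 100)
bmp:mapPixels(function(v) return 255 - v end)              -- invert
   :mapPixels(function(v) return v > 128 and 255 or 0 end) -- then threshold
print(bmp.px[1])                 -- 255 (100 -> 155 -> 255)
print(bmp:histogram()[255])      -- 16
```

Operations like crop, overlay, or edge detection would all reduce to loops over the same flat buffer, which is exactly why a C backend matters for large images.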
#8 I mean that I could get a *snapshot* of a filtered image and use it as the base for the next filter. Maybe load it into an in-memory bitmap in CPU-land. Chaining filters.
Thanks for the feedback! Please give use cases, as this will help us prioritize, not to mention help us verify we're talking about the same thing!
In terms of raster/texture/pixel stuff, the challenge we have is to make everything work in real time on the GPU. This is very different from the old Flash days, where all graphics were done on a general-purpose CPU. A GPU pipeline is designed to optimize rendering to the screen, so in GPU-land, direct pixel accesses kill performance. In fact, this is one of the reasons why Flash performance is so bad on mobile.
With that in mind:
#3: That's on the list, but it falls under convenience functions. Today, you have the ability to create any polygon you want, including ones with smoothed corners.
#4: Still searching for the right API design here...
#5: Yes, this is in essence a generalization of snapshot objects, so we're not there yet.
#6: Render-to-texture is very different from the direct pixel access you're talking about. When you create new textures with RTT, the goal is to avoid memory transfers between CPU-land and GPU-land; everything stays on the GPU.
#7: Custom filters would be done via a filter kernel written in GLSL, not via direct pixel control.
#8: I'm not sure what you mean. All filter effects are rasterized in real time.
#9: If you mean build textures directly from display objects, then yes. And in some ways, this may let you achieve many of the things you might otherwise want with direct pixel control.