I'm prototyping a Point and Click game, using it to learn the capabilities of the Corona SDK and to figure out which pieces I need to build a PnC game.
I can load 'scene' images. I have a 'walking' sprite I borrowed and have coded the ability to walk in all 8 directions depending on where you 'touch' the image.
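For context, the 8-direction movement is roughly along these lines (a minimal sketch, not my exact code — `walker` and `SPEED` are stand-in names, and I'm assuming the sprite already has a physics body so `setLinearVelocity` works):

```lua
-- Sketch: snap the touch direction to one of 8 compass directions
-- and move the sprite that way. 'walker' is the walking sprite,
-- created elsewhere; SPEED is an arbitrary pixels-per-second value.
local SPEED = 80

local function onSceneTouch(event)
    if event.phase == "began" then
        -- angle from the sprite to the touch point
        local angle = math.atan2(event.y - walker.y, event.x - walker.x)
        -- snap to the nearest multiple of 45 degrees (pi/4 radians)
        local step = math.pi / 4
        local snapped = math.floor(angle / step + 0.5) * step
        -- letting the physics engine drive the movement means a
        -- static boundary can stop the sprite for us
        walker:setLinearVelocity(SPEED * math.cos(snapped),
                                 SPEED * math.sin(snapped))
    end
    return true
end

Runtime:addEventListener("touch", onSceneTouch)
```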
Now I'm struggling a bit on setting/detecting boundaries to restrict where the sprite can go.
In PnC games you want the character to move to a 'touch' point unless something is in the way, like a tree, a house, or a 'portal' to somewhere else. You also want the character to be able to walk 'behind' objects like trees and keep walking.
I thought I could add invisible polygons to restrict movement using the physics engine. I made the sprite 'dynamic' with 'isFixedRotation' = true, and made the red polygon in the picture 'static'. I will adjust the right side of the polygon so it sits just to the left of the stairs in the image. The walker would then be able to walk 'up/down' the stairs, but when it reaches the polygon it stops automagically because of the engine. I would do the same on the right side of the stairs, but I've left that out for clarity.
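The setup looks roughly like this (again a sketch, not my exact code — `walker` and the vertex values are placeholders):

```lua
local physics = require("physics")
physics.start()

-- the walking sprite: dynamic so the engine moves and stops it,
-- with rotation locked so collisions don't tip it over
physics.addBody(walker, "dynamic", { bounce = 0 })
walker.isFixedRotation = true

-- the invisible boundary: a static body built from a shape table of
-- vertex coordinates relative to the display object's center
local boundary = display.newRect(160, 240, 100, 150)
boundary.isVisible = false
physics.addBody(boundary, "static", {
    bounce = 0,
    shape = { -50, -75,  50, -75,  50, 75,  -50, 75 },
})
```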
The problem I'm finding is that the effective boundary is actually the rightmost vertical edge of the polygon, regardless of the shape of the rest of it.
What the picture shows is that the sprite has 'hit' that edge and will go no further.
1. Is this approach valid/appropriate for boundary detection as I wish to use it?
2. What am I missing on the actual boundary exhibited by the polygon?
Thanks for any help!