Obstacle detection for environmental movement actions (climbing, wall jumping, etc.) #63
Option: fully manual

The user control system will have to query the physics backend directly, without Tnua's assistance, and find the opportunities by itself. Since the data that needs to be passed to the action is relatively small and easy to get when you query the physics backend yourself, users could do it without assistance from Tnua. With the general design I have in mind, this will remain an option even if I implement one of the other options - and I suspect there will always be users who prefer to do the detection themselves, because they need that control. This comment is here to discuss the particulars of this method (not that there is much to discuss...) and also to consider the option of not adding more elaborate obstacle detection assistance - at least not initially - and letting users do it themselves.
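As a rough illustration, this is what the fully manual approach could look like with Avian's spatial query API. The `Player` marker, the probe distance, and the overall shape of the system are invented for the example, and field names like `time_of_impact` vary between Avian versions:

```rust
use avian3d::prelude::*;
use bevy::prelude::*;

/// Hypothetical marker for the character entity.
#[derive(Component)]
struct Player;

/// Manually probe for a wall the character could jump from, with no help
/// from Tnua - just a direct ray cast against the physics backend.
fn detect_wall_jump_opportunity(
    spatial_query: SpatialQuery,
    players: Query<(Entity, &Transform), With<Player>>,
) {
    for (player_entity, transform) in &players {
        let origin = transform.translation;
        let direction = transform.forward();
        let filter = SpatialQueryFilter::from_excluded_entities([player_entity]);
        // The probe distance of 1.0 is arbitrary - how far away a wall can
        // be and still count as an "opportunity".
        if let Some(hit) = spatial_query.cast_ray(origin, direction, 1.0, true, &filter) {
            // The hit already contains the small amount of data an action
            // would need: the entity, a contact point, and a normal.
            let contact_point = origin + *direction * hit.time_of_impact;
            info!(
                "wall jump opportunity: entity {:?} at {contact_point} (normal {})",
                hit.entity, hit.normal,
            );
        }
    }
}
```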
Option: wrappers around the physics backends' interfaces

Tnua's main crate will define a trait for ensuring a consistent interface, and each physics backend integration crate will provide its own implementation of it. User control systems will use that wrapper and query the obstacles from it.
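A minimal sketch of what such a trait could look like - every name here is hypothetical, since the discussion hasn't settled on an interface:

```rust
use bevy::prelude::*;

/// The small amount of data an environmental action needs about an obstacle.
pub struct TnuaObstacleHit {
    pub entity: Entity,
    pub point: Vec3,
    pub normal: Vec3,
}

/// Hypothetical trait in Tnua's main crate; each backend integration crate
/// would implement it for a wrapper around that backend's spatial query API.
pub trait TnuaObstacleQuery {
    /// Cast a ray and report the first obstacle it hits, if any.
    fn cast_ray(
        &self,
        origin: Vec3,
        direction: Vec3,
        max_distance: f32,
    ) -> Option<TnuaObstacleHit>;

    /// More query methods (shape casts, point projection, ...) would follow
    /// the same pattern.
    fn project_point(&self, point: Vec3) -> Option<TnuaObstacleHit>;
}
```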
Option: spatial ("radar"?) sensors

Entity setup code will add a sensor ("radar") component to the character entity, and a system will fill it with data about the obstacles in its range. The sensor probably shouldn't store the full geometry of each entity. Instead, it should save a "projection" of it into a simple shape (rectangle?) that the user input system can work with. Need to decide exactly which shape.
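A sketch of what the sensor component could hold, assuming the "projection into a simple shape" idea. All names are hypothetical, and the rectangle is just a stand-in until the shape question is decided:

```rust
use bevy::prelude::*;

/// Hypothetical radar component added by entity setup code.
#[derive(Component, Default)]
pub struct TnuaObstacleRadar {
    /// Detection range around the character.
    pub radius: f32,
    /// Filled by a detection system each frame.
    pub blips: Vec<TnuaRadarBlip>,
}

/// One detected obstacle.
pub struct TnuaRadarBlip {
    pub entity: Entity,
    /// Not the obstacle's full geometry - just a simple "projection" of it.
    /// A rectangle in the character's frame of reference is one candidate;
    /// the exact shape is an open question.
    pub projection: Rect,
}
```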
Option: Combine a radar component with user-control-system-triggered extra queries

This is a combination of the two other ideas:

- A radar component, filled by a system, holds the entities of nearby obstacles.
- Wrapper interfaces let the user control system run extra queries on these entities.
The reason I want to fill the radar in a system is that the shape intersection methods in both Rapier and Avian accept a predicate, which means I won't be able to work with an object-safe trait (a method that takes a generic closure cannot be called through `dyn`). And even if they were object safe, filling the radar in a system keeps the backend-generic machinery out of the user control system. Once we get an entity, the other queries will (probably) all have a simple non-generic interface.
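A sketch of the fill-the-radar system under this combined design, using Avian's `shape_intersections` (the radar component and its fields are hypothetical, and method signatures vary between Avian versions). Because this system names `SpatialQuery` directly, the generic, non-object-safe machinery stays inside the integration crate:

```rust
use avian3d::prelude::*;
use bevy::prelude::*;

/// In the combined design the radar only stores entities; further queries
/// on them go through the wrapper interfaces.
#[derive(Component)]
pub struct TnuaObstacleRadar {
    pub radius: f32,
    pub entities: Vec<Entity>,
}

/// Would live in the Avian integration crate (a Rapier twin would use
/// `RapierContext::intersections_with_shape` the same way).
fn fill_radar(
    spatial_query: SpatialQuery,
    mut radars: Query<(Entity, &GlobalTransform, &mut TnuaObstacleRadar)>,
) {
    for (owner, transform, mut radar) in &mut radars {
        let radius = radar.radius;
        radar.entities = spatial_query.shape_intersections(
            &Collider::sphere(radius),
            transform.translation(),
            Quat::IDENTITY,
            &SpatialQueryFilter::default(),
        );
        // Don't report the character itself as an obstacle.
        radar.entities.retain(|entity| *entity != owner);
    }
}
```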
Environmental movement actions (like #7, #12 and #54) are a highly desired feature. These are actions (or sometimes a basis) that are performed against some object in the environment, which we'll call an "obstacle". Unlike ground actions (like jumping or crouching), which always know what the ground is without the user control system telling them, obstacle actions need to be told what their obstacle is.
I already have a very raw idea about how they should work:
1. Something queries the physics backend to find "opportunities" for environmental actions (ladders to climb on, walls to jump from, ledges to hang from, etc.)
2. The user control system considers these opportunities, together with the controller state and the user input, and decides on an environmental action to perform.
3. The user control system generates the environmental action (or maybe a basis, for things like climbing? But this is a different discussion) using data from the opportunity, and feeds it to the controller.
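Put together, the three steps could look something like this from the user control system's side. `WallJumpOpportunity` and `find_opportunities` are placeholders for whatever step 1 ends up producing, and since no environmental action exists yet, Tnua's regular `TnuaBuiltinJump` stands in at step 3:

```rust
use bevy::prelude::*;
use bevy_tnua::prelude::*;

/// Placeholder for whatever step 1 ends up producing.
struct WallJumpOpportunity {
    obstacle_entity: Entity,
    contact_point: Vec3,
    normal: Vec3,
}

/// Stand-in for the "Something" this discussion is about.
fn find_opportunities() -> Vec<WallJumpOpportunity> {
    Vec::new()
}

fn user_control_system(
    keyboard: Res<ButtonInput<KeyCode>>,
    mut query: Query<&mut TnuaController>,
) {
    for mut controller in &mut query {
        // Step 2: weigh the opportunities against the user input (a real
        // system would also look at the controller's state).
        let Some(_opportunity) = find_opportunities().into_iter().next() else {
            continue;
        };
        if keyboard.just_pressed(KeyCode::Space) {
            // Step 3: feed the controller an action built from the
            // opportunity's data. No environmental action exists yet, so a
            // regular jump stands in; a real wall jump would consume the
            // opportunity's entity, point and normal.
            controller.action(TnuaBuiltinJump {
                height: 4.0,
                ..Default::default()
            });
        }
    }
}
```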
This third step has one key aspect: how does the action know about the obstacle it works with?
I think that in most cases it should be enough for the action to know:

- The entity of the obstacle.
- The point on the obstacle that the action interacts with.
- The normal (or some other direction that describes the obstacle's orientation relative to the character).

There could be variations, of course, but I think these three pieces of data should be enough to perform the action.
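As a tiny invented illustration of why these three usually suffice - a wall jump, for instance, needs nothing else to aim itself:

```rust
use bevy::prelude::*;

/// The three pieces of data from the list above.
struct Obstacle {
    entity: Entity, // used to exclude the wall from follow-up queries
    point: Vec3,    // where the character touches the wall
    normal: Vec3,   // which way the wall pushes back
}

impl Obstacle {
    /// Jump away from the wall and upward (coefficients are the action's
    /// own tuning parameters).
    fn wall_jump_velocity(&self, push_off: f32, up_boost: f32) -> Vec3 {
        self.normal * push_off + Vec3::Y * up_boost
    }
}
```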
This discussion is about the "Something" from the first step. How will the user control system know about the obstacles the character may interact with?