Autonomous Agent Movement Part 4: Triumph

Posted On: 2022-05-02

By Mark

The fourth and final entry in my Autonomous Agent Movement (a.k.a. AAM) series is aimed at providing practical, concrete advice on how to overcome the various challenges associated with AAM using solutions that remain simple to implement. While much of this is focused on my own experience solving these problems in Unity, the underlying ideas should be applicable regardless of what engine/framework you use.

Use Others' Pathfinding Solutions

AAM is generally made up of two main parts: the pathfinding and the path following. As I've mentioned in previous entries, there are plenty of off-the-shelf pathfinding tools, and these are particularly useful if you're dealing with complex/large environments. I've found that the (simplistically named) AStar Pathfinding Unity asset provides many of the features I need, while being far faster and more stable than anything I built on my own. In particular, it provides high-performance (grid-based) graph recalculation, which means that players/Agents can alter the environment (e.g. digging tunnels), and the pathfinding will automatically adjust.
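To illustrate the idea behind that recalculation (this is an engine-agnostic Python sketch of region-limited rescanning, not the asset's actual API), a grid graph only needs to revisit the cells inside a changed area:

```
# Engine-agnostic sketch of region-limited grid recalculation (not the
# asset's API). Walkability is cached per cell; when the world changes
# (e.g. a tunnel is dug), only the affected cells are rescanned.

class GridGraph:
    def __init__(self, width, height, is_walkable):
        self.width, self.height = width, height
        self.is_walkable = is_walkable  # callback that queries the game world
        self.walkable = [[is_walkable(x, y) for y in range(height)]
                         for x in range(width)]

    def update_region(self, min_x, min_y, max_x, max_y):
        """Rescan only the cells inside the changed bounds."""
        for x in range(max(0, min_x), min(self.width, max_x + 1)):
            for y in range(max(0, min_y), min(self.height, max_y + 1)):
                self.walkable[x][y] = self.is_walkable(x, y)

# After a dig, report the changed bounds; the next path request then
# sees the updated graph:
# graph.update_region(dig_min_x, dig_min_y, dig_max_x, dig_max_y)
```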

In choosing to use an off-the-shelf approach, I also had to sacrifice some of the features I was looking for in the solution - forcing me to work around/ignore anything that didn't fit well with the asset's design. That primarily meant giving up on the pathfinding representing an Agent's state (e.g. jumping/gravity), as the asset (like most off-the-shelf A-Star implementations) assumes that a node will always have the same connections, regardless of how the Agent arrived there. To deal with this, I chose to implement my workaround at the path-following level: not all paths will be possible (due to state constraints), so I'm implementing a secondary, backup movement system that should be able to follow nearly any path* by using flight.
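In rough terms, the arrangement looks something like the sketch below (Python pseudocode with hypothetical class names, not my actual components): the stateful follower gets the first attempt, and if it gives up, the flight follower takes over from wherever the agent currently is.

```
# Hypothetical sketch of the fallback arrangement, not my actual code.
class FallbackFollower:
    def __init__(self, stateful_follower, flight_follower):
        self.stateful = stateful_follower  # obeys gravity, jump limits, etc.
        self.flight = flight_follower      # can follow nearly any path
        self.active = stateful_follower

    def follow(self, agent, path, dt):
        # If the stateful follower decides the path can't be walked/jumped,
        # hand the remainder over to flight.
        if self.active is self.stateful and self.stateful.has_given_up(agent, path):
            self.active = self.flight
        self.active.step(agent, path, dt)
```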

Path Following Depends On Your Project

There are two main ways to implement the actual path following: direct translation and (what I call) puppeting. Directly translating is guaranteed to follow the path - it literally moves the agent along the path, ignoring any other systems/obstacles/etc. It is also quite common: a lot of pathfinding tools include simple scripts to do this, so it's easy to get something up and running this way. Puppeting, by contrast, involves a bit of abstraction: rather than directly altering the agent's position, a set of instructions (e.g. move left) is sent to the agent, and the agent uses the project's existing systems to carry them out. This approach can be much more dynamic, as systemic interactions can alter agent movement (e.g. the agent can be pushed away from the path by external forces).
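To make the contrast concrete, here's a rough, engine-agnostic sketch (plain Python with made-up agent fields): direct translation writes the agent's position outright, while puppeting only produces an intent for the existing movement systems to consume.

```
import math

def normalize(dx, dy):
    length = math.hypot(dx, dy) or 1.0
    return dx / length, dy / length

def follow_direct(agent, waypoint, speed, dt):
    """Direct translation: move the agent straight along the path,
    ignoring physics, collisions, and anything else in the scene."""
    dx, dy = normalize(waypoint.x - agent.x, waypoint.y - agent.y)
    agent.x += dx * speed * dt
    agent.y += dy * speed * dt

def follow_puppet(agent, waypoint):
    """Puppeting: only express 'I want to move this way'; the project's
    own movement systems decide what actually happens."""
    agent.intent = normalize(waypoint.x - agent.x, waypoint.y - agent.y)
```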

Since the movement systems are core to my project, I am aiming to use the puppeting approach for both the stateful and flight path following. I'd long ago prepared for this, and so my input system (i.e. the player controlling an agent via keyboard) can easily be swapped out for a programmable one (i.e. an AI controlling an agent via a script). This has worked quite well so far, but it comes with an additional layer of complexity: the agent's movement logic needs to be aware of how other systems may affect its movement*.
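Conceptually, the swap looks something like this (a Python sketch with made-up names; the real project is a Unity/C# codebase): both input sources produce the same intent structure, so the movement code never knows which one is driving.

```
class KeyboardInput:
    """Player-driven source: reads the keyboard each frame."""
    def __init__(self, is_pressed):
        self.is_pressed = is_pressed  # callback: is this key currently down?

    def get_intent(self):
        move_x = int(self.is_pressed("right")) - int(self.is_pressed("left"))
        return {"move_x": move_x, "jump": self.is_pressed("jump")}

class ScriptedInput:
    """AI-driven source: a path follower fills in the same fields."""
    def __init__(self):
        self.move_x = 0
        self.jump = False

    def get_intent(self):
        return {"move_x": self.move_x, "jump": self.jump}

def movement_update(agent, input_source, dt):
    """The movement system is oblivious to where the intent came from."""
    intent = input_source.get_intent()
    agent.velocity_x += intent["move_x"] * agent.acceleration * dt
    # ...gravity, jumping, collisions, etc. apply as usual...
```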

Being Smart Can Be Hard

Making the flight path follower was pretty straightforward: get the direction from the agent's current position to the next step in the path, and set that as the agent's intent. Since the path is guaranteed to be possible, the follower doesn't need a lot of smarts*. The stateful path follower, by contrast, is a lot more complicated - it has to account for inertia and gravity, as well as limited jump height. Most important of all, however, it needs to know when to give up: some paths are impossible, so it needs to accurately decide when to stop trying and start flying. As such, I've found the "smarter" of the two is eating up quite a bit more development time, but, since I can always fall back to the flying path follower, I know the agent will always reach its destination somehow.
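As a rough illustration (Python, with a made-up "stalled progress" rule standing in for my actual give-up logic): the flight follower just aims at the next waypoint, while the stateful follower watches whether it's still making progress toward it.

```
import math

def flight_step(agent, path):
    """Flight follower: aim straight at the next waypoint; no gravity/terrain."""
    target_x, target_y = path[agent.waypoint_index]
    dx, dy = target_x - agent.x, target_y - agent.y
    distance = math.hypot(dx, dy) or 1.0
    agent.intent = (dx / distance, dy / distance)
    if distance < 0.1:  # close enough - advance along the path
        agent.waypoint_index = min(agent.waypoint_index + 1, len(path) - 1)

class ProgressWatchdog:
    """Give-up heuristic (an assumption, not my exact rule): if the distance
    to the current waypoint hasn't shrunk for `timeout` seconds, stop trying
    and let the flight follower take over."""
    def __init__(self, timeout=3.0):
        self.timeout = timeout
        self.best_distance = math.inf
        self.stalled_for = 0.0

    def should_give_up(self, distance_to_waypoint, dt):
        if distance_to_waypoint < self.best_distance:
            self.best_distance = distance_to_waypoint
            self.stalled_for = 0.0
        else:
            self.stalled_for += dt
        return self.stalled_for >= self.timeout
```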

There's More After Movement

Once both the pathfinding and path following are working, one need only tell the agents where to go, and they'll carry out the action. So far I've largely done this via testing scripts (e.g. move towards a specific object's location), but a complete Autonomous Agent solution will want some kind of AI decision-making system to flesh that out. This can be (and often is) accomplished using behavior trees - and there are plenty of off-the-shelf tools for those. While this is something I intend to cover in another blog post, it won't be as a part of this series; I haven't even started work on this in my own project*, and this series has run long enough as it is.
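For reference, the testing scripts I mention amount to little more than this kind of loop (hypothetical names, Python sketch): hand the agent a destination each frame and let the pathfinding plus path following do the rest.

```
def test_chase_object(agent, target, pathfinder, follower, dt):
    """Hypothetical test driver: no decision-making, just 'go to that object'."""
    path = pathfinder.find_path((agent.x, agent.y), (target.x, target.y))
    if path:
        follower.follow(agent, path, dt)
```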

Conclusion

For now, I will wrap up the AAM series here. I've covered what Autonomous Agent Movement is, why you'd want to use it, the challenges some projects face while trying to use it, and how, through a mix of compromise and workarounds, I've overcome or pushed past the challenges that obstructed my current project. As I hope I've made clear in this post, using third-party tools can be key if you've got multiple challenges stacked against a project: there's a lot that goes into each individual solution, so even if there's no perfect fit for your project, starting from something that mostly works and bringing a healthy tolerance for compromise might be what it takes. After all, as the saying goes, perfect is the enemy of good (no matter how frustrating that is).