Unreal Engine 4 AI Programming Essentials

Techniques and practices of game AI

Many techniques exist to cover the different aspects of game AI, from fundamental movement to advanced environment sensing and decision making. Let's look at them one by one.

Navigation

Navigation for AI is usually built from the following tools:

  • Navigation Mesh: Using tools such as Navigation Mesh, also known as NavMesh, you can designate the areas that AI can traverse. NavMesh is a simplified polygonal representation of a level (the green region in the following screenshot), where each polygon acts as a single node connected to its neighbors. Usually, this process is automated and doesn't require designers to place nodes manually: Unreal's tools analyze the geometry of the level and generate the most optimized Navigation Mesh accordingly. The purpose, of course, is to determine which areas of the level the game agents can move through. Note that this isn't the only path-finding technique available; we will use NavMesh in the examples provided in this book because it works well for this demonstration. A minimal movement sketch follows this list.
  • Path Following (Path nodes): A solution similar to NavMesh, Path nodes designate the space the AI can traverse:
  • Behavior Tree: Using Behavior Tree to influence your AI's next destination can create a more varied player experience. It not only calculates its requested destination, but also decides whether it should enter the screen with a cartwheeling double-back flip, no hands, or the triple somersault and jazz hands.
  • Steering behaviors: Steering behaviors affect the way AI moves while navigating, for example to avoid obstacles. This also means using Steering to create formations with the fleets you have sent to attack the king's wall. Steering can be used in many ways to influence the movement of the character.
  • Sensory systems: Sensory systems can provide critical details, such as the nearby players, sound levels, nearby cover, and many other variables of the environment that can alter movement. It's critical that your AI understands the changing environment so that it doesn't break the illusion of being a real opponent.
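Once a NavMesh exists, moving along it from C++ is mostly a matter of asking the engine to path-find for you. Here is a minimal sketch, assuming a custom controller class (AMyBotController and ChaseTarget are made-up names; AAIController::MoveToActor is a real engine call that requests a NavMesh path):

```cpp
// MyBotController.h : hypothetical AI controller used only for illustration.
#include "CoreMinimal.h"
#include "AIController.h"
#include "MyBotController.generated.h"

UCLASS()
class AMyBotController : public AAIController
{
    GENERATED_BODY()

public:
    // Ask the navigation system to find a NavMesh path to the target and follow it.
    // The request fails if the target can't be reached from the NavMesh.
    void ChaseTarget(AActor* Target)
    {
        if (Target)
        {
            // 100.f is the acceptance radius in Unreal units: stop when this close.
            MoveToActor(Target, 100.f);
        }
    }
};
```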

While not all of these components are necessary to achieve AI navigation, they all provide critical feedback that can affect it. Navigating within a world is limited only by the pathways within the game. We can see an example of group behavior, with several members following a leader, here:

Achieving realistic movement with Steering

When you think of what steering does for a car, you would be right to imagine the same idea applied to game AI navigation. Steering influences the movement of AI as it heads to its next destination. The influences can be supplied as necessary, but we will go over the most commonly used ones. Avoidance is used, essentially, to avoid colliding with oncoming AI. Flocking is another key factor in steering and is useful for simulating interesting group movement, such as a complete panic situation or a school of fish. The goal of Steering behaviors is to achieve realistic movement and behavior within the player's world.
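As a rough illustration, steering usually boils down to summing a handful of simple forces. The following is a generic sketch (not an engine API) that combines a seek force toward the destination with a separation force pushing away from nearby agents; the function name and parameters are made up for this example:

```cpp
#include "CoreMinimal.h"

// Generic steering sketch: seek the destination, but push away from close neighbors.
FVector ComputeSteering(const FVector& Position, const FVector& Velocity,
                        const FVector& Destination, const TArray<FVector>& NeighborPositions,
                        float MaxSpeed, float SeparationRadius)
{
    // Seek: the desired velocity points straight at the destination at full speed.
    const FVector Desired = (Destination - Position).GetSafeNormal() * MaxSpeed;
    FVector Steering = Desired - Velocity;

    // Separation/avoidance: push away from any neighbor that is too close,
    // weighting closer neighbors more strongly.
    for (const FVector& Neighbor : NeighborPositions)
    {
        const FVector Away = Position - Neighbor;
        const float Dist = Away.Size();
        if (Dist > KINDA_SMALL_NUMBER && Dist < SeparationRadius)
        {
            Steering += (Away / Dist) * (SeparationRadius - Dist);
        }
    }

    // Apply the result as an acceleration, clamped by the character's movement limits.
    return Steering;
}
```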

Creating a character with randomness and probability

Randomness and probability are what give a bot's decision-making abilities character. If a bot attacked you in the same way, always entered the scene in the same way, and annoyed you with its laugh after every successful hit, it wouldn't make for a unique experience. Using randomness and probability, you can instead make the AI laugh based on probability or introduce randomness into the AI's choice of skill. Another great by-product of applying randomness and probability is that it allows you to introduce levels of difficulty: you can lower the chance of a skill cast missing, or even allow bots to aim more precisely. If you have bots that wander around looking for enemies, probability and randomness can be combined with the bot's sensory input to reach a more rational decision.
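In code, this usually means nothing more exotic than rolling a few numbers before committing to an action. A minimal sketch follows, assuming hypothetical PlayTauntLaugh and PerformAttack helpers; FMath::FRand and FMath::RandRange are real engine calls:

```cpp
#include "CoreMinimal.h"

// Hypothetical game-side helpers, declared here only so the sketch is self-contained.
void PlayTauntLaugh();
void PerformAttack(int32 AttackIndex, bool bSkillHits);

void DecideNextAction(float AccuracyByDifficulty /* 0..1, higher = harder bot */)
{
    // Only laugh after a hit 25% of the time, so the taunt doesn't become predictable.
    if (FMath::FRand() < 0.25f)
    {
        PlayTauntLaugh();
    }

    // Pick one of three attacks at random instead of always opening the same way,
    // and let the difficulty setting decide whether the skill cast actually lands.
    const int32 AttackIndex = FMath::RandRange(0, 2);
    const bool bSkillHits = FMath::FRand() < AccuracyByDifficulty;
    PerformAttack(AttackIndex, bSkillHits);
}
```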

Creating complex decision making with Behavior Tree

A Finite State Machine (FSM) is a model that defines how a finite number of states transition among each other. For example, it allows an agent to go from gathering to searching and then to attacking, as shown in the following image. Behavior trees are similar, but they allow more flexibility. A behavior tree allows a hierarchical FSM, which introduces another layer of decisions, so the bot decides among branches of behaviors that define the state it is in. UE4 provides a tool called Behavior Tree. This editor tool allows us to modify AI behavior quickly and with ease.
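To make the FSM idea concrete before moving on to Behavior Tree, here is a bare-bones, plain C++ sketch of the gathering, searching, and attacking states from the diagram (this is only an illustration, not UE4's Behavior Tree system):

```cpp
// A minimal FSM: the bot is always in exactly one state, and each tick may transition it.
enum class EBotState { Gathering, Searching, Attacking };

void TickBot(EBotState& State, bool bEnemyVisible, bool bNeedsResources)
{
    switch (State)
    {
    case EBotState::Gathering:
        if (!bNeedsResources) { State = EBotState::Searching; }  // done gathering -> search
        break;
    case EBotState::Searching:
        if (bEnemyVisible) { State = EBotState::Attacking; }     // found an enemy -> attack
        break;
    case EBotState::Attacking:
        if (!bEnemyVisible) { State = EBotState::Searching; }    // lost the enemy -> search again
        break;
    }
}
```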

Here's a diagram of the FSM model:

Let's take a look at the components of Behavior Tree:

Now, we will discuss the components found within UE4 Behavior Tree.

Root

This node is the beginning node that sends the signal to the next node in the tree. It connects to a composite, which begins your first tree. What you may notice is that you are required to use a composite first to define a tree and then to create a task for this tree. This is because a hierarchical FSM creates branches of states. These states will be populated with other states or tasks. This allows an easy transition among multiple states. You can see what a root node looks like in the following screenshot:

Decorators

Decorators are conditional statements (the blue part on top of a node) that control whether or not a branch in the tree, or even a single node, can be executed. In the AI we will make, I use a decorator to tell it to update to the next available route.

In the following image, you can note the Attack & Destroy decorator that defines the state on top of the composite. This state includes two tasks, Attack Enemy and Move To Enemy; the latter also has a decorator telling it to execute only when the bot's state is Search:


Composites

These are the beginning points of states. They define how the state will behave with returns and execution flow. There are three main types: Selector, Sequence, and Simple Parallel. This beginning branch has a conditional statement that checks whether the state is equal to or greater than the Search state:

Selector executes each of its children from left to right and returns success as soon as one of its children returns success; it fails only when all of its children fail. So, this is good for a state that doesn't require every node to execute successfully. The following screenshot shows an example of Selector:

Sequence executes its children in a similar fashion to Selector, but it returns fail as soon as one of its children returns fail. This means that all nodes are required to return success to complete the sequence. You can see a Sequence node in the following screenshot:

Last but not least, Simple Parallel allows you to execute a task and a subtree essentially at the same time. This is great for creating a state that requires another task to always be called. To set it up, you first connect it to the main task that it will execute. The second task or subtree connected keeps running alongside the main task until the main task finishes.
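The practical difference between Selector and Sequence comes down to how each reacts to a child's result. Here is a small, engine-agnostic sketch of those two rules (an illustration only, not UE4's internal code):

```cpp
#include "CoreMinimal.h"

enum class ENodeResult { Succeeded, Failed };

struct FNode
{
    virtual ~FNode() = default;
    virtual ENodeResult Execute() = 0;
};

// Selector: succeed as soon as any child succeeds; fail only if every child fails.
ENodeResult RunSelector(const TArray<FNode*>& Children)
{
    for (FNode* Child : Children)
    {
        if (Child->Execute() == ENodeResult::Succeeded)
        {
            return ENodeResult::Succeeded;
        }
    }
    return ENodeResult::Failed;
}

// Sequence: fail as soon as any child fails; succeed only if every child succeeds.
ENodeResult RunSequence(const TArray<FNode*>& Children)
{
    for (FNode* Child : Children)
    {
        if (Child->Execute() == ENodeResult::Failed)
        {
            return ENodeResult::Failed;
        }
    }
    return ENodeResult::Succeeded;
}
```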

Services

Services run as long as the composite they are added to stays activated. They tick at the intervals you set within their properties: a float property called Tick Interval lets you control how often the service is executed in the background. Services are used to modify the state of the AI in most cases because they are always called. For example, in the bot that we will create, we will add a service to the first branch of the tree so that it's called without interruption and can maintain the state that the bot should be in at any given moment. The green node in the following screenshot is a service, with the important information shown explicitly:

This service, called Detect Enemy, runs on a repeating (slightly deviating) cycle that updates Blackboard variables such as State and EnemyActor.
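The book builds this service in Blueprint, but for reference, a roughly equivalent C++ sketch might look like the following. UBTService, TickNode, and the Blackboard setters are real engine APIs; FindVisibleEnemy is a hypothetical sensing helper, and the State and EnemyActor key names match this chapter's Blackboard:

```cpp
// BTService_DetectEnemy.h : hedged C++ sketch of the Detect Enemy service described above.
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "BehaviorTree/BTService.h"
#include "BehaviorTree/BehaviorTreeComponent.h"
#include "BehaviorTree/BlackboardComponent.h"
#include "BTService_DetectEnemy.generated.h"

UCLASS()
class UBTService_DetectEnemy : public UBTService
{
    GENERATED_BODY()

protected:
    // Called on the service's interval while the parent composite stays active.
    virtual void TickNode(UBehaviorTreeComponent& OwnerComp, uint8* NodeMemory, float DeltaSeconds) override
    {
        Super::TickNode(OwnerComp, NodeMemory, DeltaSeconds);

        AActor* Enemy = FindVisibleEnemy(OwnerComp); // hypothetical sensing helper

        if (UBlackboardComponent* Blackboard = OwnerComp.GetBlackboardComponent())
        {
            // Keep the Blackboard up to date so the rest of the tree can react to it.
            Blackboard->SetValueAsObject(TEXT("EnemyActor"), Enemy);
            Blackboard->SetValueAsEnum(TEXT("State"), Enemy ? /*Attack*/ 1 : /*Search*/ 0);
        }
    }

    AActor* FindVisibleEnemy(UBehaviorTreeComponent& OwnerComp) const
    {
        return nullptr; // sensing logic omitted in this sketch
    }
};
```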

Tasks

Tasks do the dirty work and report success or failure when necessary. They have Blueprint nodes that can be referred to in Behavior Tree. There are two types of nodes that you'll use most often when working with a Task: Event Receive Execute, which receives the signal to execute the connected scripts, and Finish Execute, which sends the signal back and returns true or false on success. This is important when making a task meant for the Sequence composite node.
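If you write a task in C++ instead of Blueprint, ExecuteTask plays the role of Event Receive Execute and the returned EBTNodeResult plays the role of Finish Execute. A minimal sketch, with a made-up class name and helper:

```cpp
// BTTask_DoSomething.h : illustrative C++ task; UBTTaskNode and ExecuteTask are real engine APIs.
#include "CoreMinimal.h"
#include "BehaviorTree/BTTaskNode.h"
#include "BTTask_DoSomething.generated.h"

UCLASS()
class UBTTask_DoSomething : public UBTTaskNode
{
    GENERATED_BODY()

protected:
    virtual EBTNodeResult::Type ExecuteTask(UBehaviorTreeComponent& OwnerComp, uint8* NodeMemory) override
    {
        const bool bWorked = DoTheDirtyWork(); // hypothetical helper

        // Reporting Failed here is what lets a parent Sequence stop running its remaining children.
        return bWorked ? EBTNodeResult::Succeeded : EBTNodeResult::Failed;
    }

    bool DoTheDirtyWork() const { return true; } // placeholder for the real work
};
```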

Blackboard

A Blackboard is an asset that stores the variables to be used within the AI's Behavior Tree. It is created outside Behavior Tree. In our example, we will store an enumeration variable for the state in State, an EnemyActor object to hold the currently targeted enemy, and Route to store the current route position that the AI is requested to travel to, just to name a few. You can see all the current variables as keys in the Blackboard panel, as follows:

Keys work simply by setting a public variable of a node to one of the available Blackboard keys in the drop-down menu. The naming convention in the following screenshot keeps this process streamlined:
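The same keys can be read and written from C++. The following is a small, hedged sketch using real UBlackboardComponent calls; the key names come from this chapter's example, and Route is treated as a vector key here purely for illustration:

```cpp
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "BehaviorTree/BlackboardComponent.h"

void UpdateTargets(UBlackboardComponent* Blackboard, AActor* NewEnemy, const FVector& NextRoutePoint)
{
    if (!Blackboard)
    {
        return;
    }

    Blackboard->SetValueAsObject(TEXT("EnemyActor"), NewEnemy);   // currently targeted enemy
    Blackboard->SetValueAsVector(TEXT("Route"), NextRoutePoint);  // next route position to travel to

    // Reading works the same way, by key name.
    if (AActor* Enemy = Cast<AActor>(Blackboard->GetValueAsObject(TEXT("EnemyActor"))))
    {
        // React to the current target here.
    }
}
```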

Sensory systems

A sensory system usually consists of several modules, such as sight, sound, and memory, to help the AI capture information about the environment. A bot can maintain the illusion of intelligence by using sounds within its environment to make a deliberate risk assessment before engaging a hazardous threat or aiding a nearby teammate who is calling for help. The use of memory allows the bot to avoid an area where it remembers seeing a severe threat, or rush back to an area where it last saw its group. Creating a sensory system, in the case of an enemy player, depends heavily on the environment where the AI fights the player. It needs to be able to find cover, evade the enemy, get ammo, and do whatever else you feel creates immersive AI for your game. A game with AI that challenges the player creates a unique individual experience. A good sensory system contributes the critical information that makes for reactive AI. In this project, we will use the sensory system to detect the pawns that the AI can see. We will also use functions to check for the line of sight of the enemy, check whether there is another pawn in the way of our path, and check for cover and other resources within the area.
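As a small taste of what those sight checks look like in C++, the engine's AAIController::LineOfSightTo call traces from the controlled pawn to another actor and reports whether anything blocks the view. The wrapper function below is a hedged sketch with a made-up name:

```cpp
#include "AIController.h"
#include "GameFramework/Pawn.h"

// Returns true only when nothing in the level blocks the line between the bot and the enemy pawn.
bool CanSeeEnemy(AAIController* Controller, APawn* EnemyPawn)
{
    if (!Controller || !EnemyPawn)
    {
        return false;
    }
    return Controller->LineOfSightTo(EnemyPawn);
}
```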

Machine learning

Machine learning is a branch of its own. This technique allows AI to learn from situations and simulations. Inputs are taken from the environment, including the context the bot is in, allowing it to make decisive actions. In machine learning, the inputs are fed into a classifier that can predict a set of outputs with a certain level of confidence. Classifiers can be combined into ensembles to increase the accuracy of the probabilistic prediction. We won't dig deep into this subject, but there is a vast amount of resources for studying machine learning, ranging from textbooks (Pattern Recognition and Machine Learning by Christopher M. Bishop, Springer) to online courses (Machine Learning on coursera.org).

Tracing

Tracing allows one actor within the world to detect other objects by casting a ray. A single line trace is sent out and, if it collides with an actor, that actor is returned along with information about the impact. Tracing is used for many reasons; one way it is used in an FPS is to detect hits. Are you familiar with the hit box? When your player shoots in a game, a trace is cast out; if it collides with the opponent's hit box, it determines the damage to that player and, if you're skillful enough, results in death. Other shapes available for traces, such as spheres, capsules, and boxes, allow tracing for different situations. Recently, I used Box Trace for my car to detect objects near it.
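Here is a hedged sketch of the hit-detection case using the engine's UWorld::LineTraceSingleByChannel; the function name, trace distance, and channel choice are illustrative picks rather than anything the engine mandates:

```cpp
#include "CoreMinimal.h"
#include "Engine/World.h"
#include "Engine/EngineTypes.h"
#include "CollisionQueryParams.h"
#include "GameFramework/Actor.h"

// Cast a single ray from the muzzle along the aim direction and return whatever it hits first.
AActor* TraceForHit(UWorld* World, const FVector& MuzzleLocation, const FVector& AimDirection, AActor* IgnoredShooter)
{
    if (!World)
    {
        return nullptr;
    }

    FCollisionQueryParams Params;
    Params.AddIgnoredActor(IgnoredShooter); // don't let the shooter hit itself

    const FVector End = MuzzleLocation + AimDirection * 10000.f; // trace 100 m forward

    FHitResult Hit;
    if (World->LineTraceSingleByChannel(Hit, MuzzleLocation, End, ECC_Visibility, Params))
    {
        return Hit.GetActor(); // the first blocking hit, e.g. the opponent's hit box
    }
    return nullptr;
}
```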

Influence Mapping

Influence Mapping isn't a single fixed technique; it's the idea that specific locations on the map are attributed information that directly influences the player or AI. An example of using Influence Mapping with AI is presence falloff. Let's say we have a group of enemy AI; their presence map would create a radial circle around the group, with the intensity based on the size of the group. This way, other AI entering this area know that they're entering a zone occupied by enemy AI.
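A tiny, engine-agnostic sketch of that presence-falloff idea follows. Each group stamps a radial influence onto a grid, stronger for larger groups and fading with distance; the cell size and linear falloff are arbitrary choices made only for this illustration:

```cpp
#include "CoreMinimal.h"

struct FInfluenceMap
{
    int32 Width = 0;
    int32 Height = 0;
    float CellSize = 100.f;   // world units per cell
    TArray<float> Values;     // one influence value per cell

    void Init(int32 InWidth, int32 InHeight)
    {
        Width = InWidth;
        Height = InHeight;
        Values.Init(0.f, Width * Height);
    }

    // Stamp a radial presence around a group: intensity scales with group size
    // and falls off linearly to zero at Radius.
    void AddPresence(const FVector2D& Center, float Radius, int32 GroupSize)
    {
        for (int32 Y = 0; Y < Height; ++Y)
        {
            for (int32 X = 0; X < Width; ++X)
            {
                const FVector2D CellCenter((X + 0.5f) * CellSize, (Y + 0.5f) * CellSize);
                const float Dist = FVector2D::Distance(CellCenter, Center);
                if (Dist < Radius)
                {
                    Values[Y * Width + X] += GroupSize * (1.f - Dist / Radius);
                }
            }
        }
    }

    // Another AI can sample its own position to ask "am I entering enemy territory?"
    float Sample(const FVector2D& WorldPos) const
    {
        const int32 X = FMath::Clamp(static_cast<int32>(WorldPos.X / CellSize), 0, Width - 1);
        const int32 Y = FMath::Clamp(static_cast<int32>(WorldPos.Y / CellSize), 0, Height - 1);
        return Values[Y * Width + X];
    }
};
```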

Practical information isn't the only thing people use it for, so just understand that it's meant to provide another level of input to help your bot make additional decisions. As shown in the following image, different colors represent zones occupied by different types of AI, and color intensity indicates the influence with respect to each AI character:
