The worst-case scenario is when one player is simply acting like a jerk. The social contract gives you great power, and with it comes great responsibility: no bullying in your game. If the belligerent player refuses to back down, pause the game until things have a chance to cool off.

Without NPCs, most plots would be impossible.
In an ideal world, these load-bearing NPCs would never die before the plot demanded it, but roleplaying games rarely work that way. Sometimes you get careless, or the PCs are super determined to eliminate a villain you planned to use later. Villains are the obvious example, but it can happen with critical mentor, ally, or foil characters as well.
The Lord of the Rings would have some serious problems if Gandalf were killed by Saruman in the first book. So what do you do when Olbate the Terrible, warlord of the mountain kingdoms, takes an unexpected arrow to the throat? The first option is to deploy an emergency backup NPC. Did Olbate have a spouse, a sibling, or a really good friend? Someone solidly connected to him who might be able to step in?
An especially clever GM may have already set up such a connection in advance, but for us mere mortals, even an offhand comment will suffice. Give the replacement NPC something that easily distinguishes them from their predecessor. This can be as simple as contrasting mannerisms. If Olbate was cool and reserved, his widow should be loud and boisterous. This will feel less artificial, even if the widow is following the exact same plan.
This method is a bit trickier for friendly NPCs. Players can easily accept a replacement villain as someone they have to fight, but it will be very strange if a longtime mentor is suddenly replaced by their sister. When a friendly NPC unexpectedly buys the farm, it can be better to play out how the PCs feel about the loss than to attempt a replacement.
Instead, use their death to take the plot in an unexpected direction.

Is it really your fault the PCs die so often? You might even have a player who insists you kill their character when the dice say to.
Stranger things have happened. When a PC dies unexpectedly, it creates a whole host of problems. All the plot threads you wove for them are left hanging. The player will be discouraged if they liked their character at all. And now you have to figure out how to insert a replacement. Fear not: there are solutions. The first is quite similar to what you would do at the loss of an NPC. If the character had any serious connections, those can serve as a good source for a replacement.
This works especially well if the connection was an ally or mentor the deceased character paid points for. If the character represented any kind of larger group, be it clan or corporation, that group can be another source for the replacement. Or maybe the player never established any such connections.

Finally, the pace of first-person shooter games is higher than in most other game genres, necessitating quicker decision making and the use of heuristics.
For some games, it is claimed that more sophisticated agent technologies are used, but this is difficult to verify because most games are not open source. Most games also have no publications from their creators, and third-party publications are often inconsistent.
So, in this paper, we use a rather old game, Quake III, which we nevertheless believe is very prototypical of this approach, to illustrate our point, because it was the only open-source game that we could access and thus reliably discuss.
In order to get a better understanding of the state-of-the-art approaches, we will analyze Quake III, a prototypical example of the server-side approach and one of the few games whose code is open source and thus inspectable, and Gamebots, a good example of the client-side approach.
For both approaches, the analysis will be made according to three major aspects: synchronization, information representation, and communication.
In Quake III, the agents are completely integrated into the default game loop, in the same way as the physics engine, the animation engine, and the rendering engine. Direct method calls can be used for many different decision-making processes: for example, hard-coded approaches, which directly specify what to return for a certain input; fuzzy logic, which maps the right output to a certain set of input variables; or finite state machines, which identify the situation the agent is in and execute the corresponding method call.
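As a concrete illustration of the last of these styles, here is a minimal sketch of a finite state machine whose decision is a direct, synchronous method call made from inside the game loop. The state names, inputs, and thresholds are invented for illustration and do not reflect the actual Quake III code:

```python
from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    ATTACK = auto()
    RETREAT = auto()

class FsmBot:
    """Finite state machine: identify the situation, execute the transition."""
    def __init__(self):
        self.state = State.PATROL

    def decide(self, enemy_visible, health):
        # Hard-coded transition rules over hypothetical inputs.
        if health < 25:
            self.state = State.RETREAT
        elif enemy_visible:
            self.state = State.ATTACK
        else:
            self.state = State.PATROL
        return self.state

def game_tick(bots, world):
    # Agents run inside the same loop as physics, animation, and rendering,
    # so every decide() call must return within this single time step.
    for bot in bots:
        bot.decide(world["enemy_visible"], world["health"])

bot = FsmBot()
game_tick([bot], {"enemy_visible": True, "health": 80})
print(bot.state)  # State.ATTACK
```

Because `decide()` is called synchronously every tick, any expensive deliberation placed inside it would directly slow the whole game down.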
Independent of the particular decision-making strategy, the whole process is completely synchronized. This limits the complexity of behavior, because in a synchronized process all decisions have to be made within one time step, and complex decisions would slow down the game too much. Figure 1 gives an impression of the implementation of agents in Quake III. On the lowest level in the figure, a translation from the raw engine data to a representation more suitable for agents has been created, called the area awareness system (AAS).
The heart of the AAS is a special 3D representation of the game world that provides all the information relevant to the bot. The AAS informs the bot about the current state of the world, including information relevant to navigation, routing, and other entities in the game. The information is formatted and preprocessed for fast and easy access and usage by the bot. For instance, to navigate, the bot receives information from the AAS about the locations of all static items, and it can ask the AAS whether a certain location is reachable.
The AAS is responsible for route planning. The first level also executes the actual actions of the agent and facilitates the decision process of the agents. However, the agents are highly dependent on the data they can extract from the AAS, for example, an agent cannot decide to take another route to a certain item.
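The kind of facade the AAS provides can be sketched as follows. The class, methods, area graph, and item names here are hypothetical stand-ins, not the actual Quake III botlib interface; the point is that the bot queries a preprocessed representation rather than the raw engine data:

```python
from collections import deque

class AreaAwareness:
    """Sketch of an AAS-like facade: a graph of walkable areas plus item locations."""
    def __init__(self, area_graph, item_areas):
        self.graph = area_graph   # area id -> neighbouring areas (precomputed)
        self.items = item_areas   # item name -> area id of the static item

    def item_location(self, item):
        return self.items.get(item)

    def reachable(self, start, goal):
        # Breadth-first search over the precomputed area graph.
        seen, frontier = {start}, deque([start])
        while frontier:
            area = frontier.popleft()
            if area == goal:
                return True
            for nxt in self.graph.get(area, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return False

aas = AreaAwareness({1: [2], 2: [3], 4: []}, {"armor": 3})
print(aas.reachable(1, aas.item_location("armor")))  # True
print(aas.reachable(4, 3))                           # False
```

Note that the bot can only ask the questions the facade anticipates; it cannot, for example, request an alternative route if the facade only answers reachability queries, which mirrors the dependence described above.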
Depending on the character that a bot plays, the fuzzy logic control determines which of the possible paths the bot should start navigating. Little communication between agents takes place in a normal game of Quake III; it is only used to assign roles in team play situations. Communication is implemented by using the chat system for sending simple text messages. More cooperation between agents would require improved communication facilities.
Moreover, it is currently assumed that communication is always successful, which is usually not guaranteed in realistic multiagent scenarios.

Gamebots [8] was created as a research platform for connecting agent research to a computer game, namely the Unreal Tournament environment, and is one of the most used client-side implementations. In client-side approaches, agents run as programs completely separate from the server and usually communicate with it through network sockets.
Network communication between agents and other external software programs has been successfully used in other multiagent systems. Gamebots was designed for educational purposes, and therefore, multiple client implementations have been created, for example, one using the scripting language TCL, a SOAR bot, and a JAVA-based implementation.
Information is sent from the game engine to the agents through the Gamebot and JAVAbot APIs by two types of messages: synchronous and asynchronous messages. The synchronous messages are sent at a configurable interval.
They provide information about the perceptions of the bot in the game world and a status report of the bot itself. Asynchronous messages are used for events in the game that occur less often; they are sent to the agent directly but do not interrupt the larger synchronous messages. Gamebots is actually not a pure client-side solution, because the server is also modified to supply a special world representation to the bot. There are some pure client agent implementations, but they are usually only created for cheating purposes.
In this case, the processing of the data is done in the bot itself, because it pretends to be a human client. Doing this filtering on the server is more efficient, because only the useful information needs to be communicated.
This clarifies why the Gamebot API is specific for Unreal Tournament; it needs to know the internal representation of the game world in order to make the translation efficiently. The Gamebot API does not provide information about the complete environment, but only about objects that are perceivable by the bot.
Thus, if a bot wants to gather information about the complete environment, it has to physically explore it. To navigate, for example, the agent receives information about predefined navigation nodes in the game map, but only the currently observable nodes are returned. The agent thus does not know what exists around the corner, let alone reason about it. Due to the representation choices made in Gamebots, information about the environment has to be stored at the agent side of the system.
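A sketch of what this implies for a client-side bot: it must parse the periodic messages itself and accumulate its own map memory from whatever navigation nodes happen to be visible at each interval. The message format and field names below are invented for illustration and are not the real Gamebots grammar:

```python
class ClientBot:
    """Client-side bot sketch: parses messages and keeps its own world memory."""
    def __init__(self):
        self.known_nodes = {}   # agent-side memory of nav nodes seen so far
        self.events = []        # asynchronous events received between batches

    def handle(self, message):
        kind, _, payload = message.partition(" ")
        if kind == "SYNC":
            # Periodic status report: lists only the currently visible nodes.
            for node in payload.split(","):
                name, pos = node.split("=")
                self.known_nodes[name] = pos   # remember what was once observed
        else:
            # Asynchronous event (e.g. an item pickup) delivered directly.
            self.events.append((kind, payload))

bot = ClientBot()
bot.handle("SYNC n1=0:0,n2=4:1")        # first interval: two nodes visible
bot.handle("ITEM picked_up_shield")      # async event between sync batches
bot.handle("SYNC n3=9:2")                # later interval: a new node appears
print(sorted(bot.known_nodes))           # ['n1', 'n2', 'n3']
```

Only by merging successive observations does the bot build a picture of the map; anything it has never observed, such as a spawn point around the corner, simply does not exist for it.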
For complex bots, the information provided by the Gamebot API quickly becomes too limited to make intelligent decisions.
For example, the agent cannot know the spawning location of a certain power-up, and therefore it cannot plan to go there. There is no facility for communication between agents in the Gamebots API, because Gamebots was not designed for adding a multiagent system with interacting agents to the game. Multiple agents may be added to one game, but there are no facilities for direct interaction between them.
It is possible to create a separate communication system between the agents by bypassing the API and the engine.
However, this solution is not only inelegant, but also prevents the game environment from having any influence on the communication. An advantage of separating the game engine and the agents into different processes is that there are no strict time limits on the reasoning process of the agents. A disadvantage of using a fixed API is that the agent receives information it does not need and cannot access information that it might need.
In the previous paragraphs, we have seen two examples of ways to connect agents to games. These approaches address only the technical side of connecting agents to a game.
On the level of game design, few games have tried to leverage these approaches from the start of the design process to add multiple agents and create more compelling gameplay.
Current games are generally not created with multiagent interaction in mind; interaction is either not implemented at all or added as an extra feature in a later phase. For games in which interaction is simple, this is not problematic. For example, Quake III has a game mode in which two teams strive to capture each other's flag. The player plays one character in a team, while all the other characters, from his own and the opposing team, are computer-controlled agents.
Although the interaction between agents and between agents and the player is limited, the game conveys the feeling of a dynamic interactive world. The same can be said about the communication between characters in F.
Although the communication looks quite natural, it is actually added to the interaction scene afterward. It thus serves more to enhance realism than to fulfill a function in the gameplay.
See [3] for a description of the problems encountered. If the interaction in a game becomes more complex and the multiagent interaction is not an intricate part of the design process, unexpected or unbelievable behavior may occur. In one example, one character was supposed to rake and another to sweep; when the items were swapped, so that the raker was given a broom and the sweeper the rake, one of them eventually killed the other in order to get the proper item.
Had communication between the agents in this game been possible, they could have communicated about their goals and solved their problem. In the academic community, much work has been done on sharing, exchanging, and rejecting goals [30]. So far, this has not been absorbed by the game developer community. Current games also do not facilitate multiple agents requiring complex decision making. In order to generate agent behavior, complex computation may be required.
For instance, in a real-time strategy game, an opponent agent needs to observe the playing field, assess the state of its own units, make an assessment of the strategy of its opponents, generate a strategy, form a plan to execute that strategy, coordinate plans with other agents within the same faction, and in some cases evaluate actions in order to learn from them for future battles. Depending on the algorithms used, this can take considerable processing time.
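The pipeline just described can be sketched as a single decision pass. All field names, the threshold, and the toy strategy rule below are invented for illustration; a real RTS agent would spend most of its time inside each of these steps:

```python
def rts_turn(own_units, enemy_sightings, allies):
    """One pass of the opponent-agent pipeline: assess, strategize, coordinate."""
    # 1. Assess the state of the agent's own units.
    strength = sum(u["hp"] for u in own_units)
    # 2. Assess the opponent's apparent strategy from observations.
    threat = len(enemy_sightings)
    # 3. Generate a strategy and form a plan to execute it.
    strategy = "attack" if strength > 10 * threat else "defend"
    plan = [(u["id"], strategy) for u in own_units]
    # 4. Coordinate: share the chosen strategy with same-faction agents.
    for ally_inbox in allies:
        ally_inbox.append(strategy)
    return plan

ally_inbox = []
plan = rts_turn([{"id": 1, "hp": 50}, {"id": 2, "hp": 60}],
                enemy_sightings=["scout"], allies=[ally_inbox])
print(plan)        # [(1, 'attack'), (2, 'attack')]
print(ally_inbox)  # ['attack']
```

Each step here is a one-liner, but observation, strategy recognition, planning, and coordination are each potentially expensive computations, which is exactly why squeezing them into a synchronized game loop is problematic.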
Current games make high demands on computer processors in order to display graphics, simulate physics, create 3D audio, and perform network communication, amongst others.
Many games are, therefore, forced to minimize the processing time used for individual agents. If each agent has its own reasoning process running in parallel to generate behavior, this can spiral out of control quickly. This is certainly the case in games with many characters in a scenario. Current games, therefore, often forego the generation of complex behavior and script the behavior of nonplaying characters.
For instance, suppose that in a first-person shooter game, two computer-controlled players happen to be at an equal distance from a power-up. In current game AI design approaches, such players enter a scripted line of reasoning, resulting in the decision to retrieve the power-up. This will lead them toward the same area in the game and within shooting range of each other. This behavior is an example of nonrealistic behavior due to oversimplification in a script.

A human player expects the entire game world to persist even when he is not present in a particular area.
Many games have an optimized design that compresses the game to the events, behaviors, and reactions that directly surround the player, that is, only those visible to the player. So when a nonplaying character falls out of the scope of the player, the game engine no longer simulates the interaction between that character and its environment.
Thus the game is optimized and the demands on computer hardware are reduced. However, simulating only parts of the game world might result in unrealistic behavior. For instance, in large first-person shooter games, the positions of guards are reset, or their behavior is no longer updated, once the player has moved a certain distance away. Each time the player returns to the initial area, the guards will be at the same places, or may even have come back to life after having been killed. This example shows that making the simulation of an agent dependent on the position of the player can lead to discontinuities in the game world.

Conclusion

Most state-of-the-art games use a server-side model with tightly integrated agents. As we have seen, this approach restricts the reasoning time of the agents considerably.
An asynchronous solution is more suitable and will be used as a starting point in the next sections. Translating the raw game data to information more suitable for the agents is done in most computer games, but usually in a very restrictive way; we return to this in Section 5. Many of the current games do not use communication at all, and if they do, only for simple tasks and in an ad hoc fashion.
Modern games are not created with multiagent interaction in mind. This results in games without or with very simple interaction, or in unexpected behavior in more complicated scenarios.
We propose to make agent interaction an intricate part of the whole development process.

In this section, we describe some applications of serious games that really leverage intelligent agent technology in a way that is currently not practiced. These examples serve to illustrate the usefulness of our approach; the area of synchronization is addressed throughout the whole section.
Besides the purpose of entertainment, games are also used for training and education. These so-called serious games are, for example, used for the training of pilots, soldiers, and commanders in crisis situations. The training scenarios often involve complex and dynamic situations that require fast decision making. By interacting with these games, the player learns about the consequences of his actions from the reactions of the environment and other nonplayer characters to his behavior.
Several approaches to self-explaining agents have been proposed [32–34]. In addition to performing interesting behavior, such agents are able to explain the underlying reasons for it afterward. By understanding the motivations of the other characters in the game, the player learns how his behavior is interpreted by others.
An example of a serious game to which explanation capabilities could be added is virtual training for leading firefighters. In such training, the player training to become a leading firefighter has to handle an incident in the game, and is surrounded by virtual characters representing his team members, police, bystanders, or victims.
A possible scenario is a fire in a building. During the training session, the player commands his team members to go inside a building and extinguish a fire. To better understand the situation, he might ask the virtual characters to explain their behavior. Their possible answer is that they saw a victim inside the building, and decided to save the victim first before extinguishing the fire. The scenario just given is described on a high level.
The virtual characters get commands from the player such as "go to the building", "find the fire", and "extinguish the fire". When they explain their behavior, they refer to abstract concepts such as priorities between different tasks (saving a victim has priority over extinguishing a fire). However, the abstract decisions that the characters make result in actions that have to be executed and visualized in the virtual environment. Besides acting in the environment, agents sense their environment, and information goes from the game engine to the agent.
The low-level information that is made available by the engine is not immediately useful to the agents. Instead of the exact positions of all the entities and objects in the game at every time step, agents use abstractions, for example, someone is going inside a building, exploring a building takes some time, and the entity in the building is a victim who needs help. The low-level information provided by the game engine needs to be translated to concepts that are useful for the agent.
For instance, information about the course of a character's coordinates could be translated to the more abstract description that the character "enters a building", and if a state holds for a certain number of time steps, this could be translated to the high-level concept "for a while".
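Such a translation step might be sketched as follows. The predicate names ("enters_building", "inside_for_a_while"), the geometry, and the patience threshold are illustrative assumptions, not a proposal for a concrete percept vocabulary:

```python
def abstract_percepts(track, building_box, patience=3):
    """Translate a character's raw coordinate track into abstract percepts."""
    x0, y0, x1, y1 = building_box
    # Low-level data: for each time step, is the character inside the box?
    inside = [x0 <= x <= x1 and y0 <= y <= y1 for x, y in track]
    percepts = []
    for t in range(1, len(inside)):
        # A transition from outside to inside becomes "enters a building".
        if inside[t] and not inside[t - 1]:
            percepts.append(("enters_building", t))
        # A state holding for several time steps becomes "for a while".
        if t >= patience and all(inside[t - patience + 1:t + 1]):
            percepts.append(("inside_for_a_while", t))
    return percepts

track = [(0, 0), (2, 2), (5, 5), (6, 5), (6, 6), (7, 6)]
print(abstract_percepts(track, building_box=(4, 4, 10, 10)))
```

The agent never sees the raw coordinates; it receives only the symbolic percepts, which is exactly the level at which a BDI-style reasoner can operate.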
After translating the available low-level information to concepts that agents use, an agent itself can select which of the high-level information will influence its future actions.
For instance, agents implemented in a BDI programming language are appropriate for the addition of explanation capabilities. Concepts such as goals, beliefs, and plans are explicitly represented in BDI agents and are thus available for reasoning and the generation of explanations. Moreover, it has been demonstrated that BDI agents are suitable for developing virtual nonplayer characters for computer games [35]. A nonplaying character, however, needs to act in and sense its virtual environment, in which other representations of the game world are used.
The example illustrates the need of a middle layer in serious gaming, where a translation between the two representation levels takes place. Multiple intelligent nonplaying characters bring additional challenges to game design.
Currently, there are few facilities that allow efficient multiagent behavior. Issues that should be addressed are, for example, how an agent determines whether there are other agents in the game. If so, how can it communicate with these other agents?
How does it know that a message has reached the intended agent? How is information filtered such that it allows an agent to reason about social concepts, for example, about groups, group goals, and roles within a group?

In the firefighting scenario sketched in the previous subsection, the team members of the leading-firefighter player are intelligent nonplayer agents. Although they have to execute the commands of the player, they still need intelligence of their own.
In the first place, because they might take the initiative themselves: in the scenario, the nonplaying characters decided to save the victim first instead of extinguishing the fire as the commander had told them. Second, because they act in a team and have to coordinate their actions with each other. For instance, if the group has to decide whether to go left or right, the members have to communicate with each other in order to make a common decision.
Or, only one of the characters needs to carry an axe for opening doors, but the others have to know that one of the team members is responsible for this task. Suppose that a team of firefighters goes into a building with the goal to extinguish a fire. One of the members is responsible for opening locked doors and another has to extinguish the fire.
If the first carries an axe and the second an extinguisher, this will work smoothly. However, the situation in which the door opener carries an extinguisher and the fire extinguisher carries the axe is more complex and requires communication.
The door opener has to be aware of the other character, come up with the idea to communicate with it, send the right message, wait long enough for the result, and finally connect the right action to it. The next action of the door opener depends on the information it receives from the fire extinguisher. We believe that the communication between different agents in a game should go through the game engine instead of taking place on the agent platform, because the effect of communication influences the game world itself and not only the agents.
For instance, if the two agents in the scenario successfully communicated and decided to exchange their tools, this needs to happen physically in the virtual environment as well. If communication would not go through the game engine, there is a danger that processes in the game world and between the agents are no longer synchronized. For example, if the agents agree to swap items, they would both send a message to the game engine and believe that the items will be successfully exchanged in the game world.
This however is not obvious. The actual swap in the virtual world is managed by the game engine, for example, one agent puts down its tool, has its hands free to receive the other tool, and the other agent picks up the tool from the ground.
For such a process, it is crucial that the game engine receives the messages from both agents at the same time, or at least connects them to each other. This can be better realized if the game engine is included into the communication loop. In turn, physical changes in the world have effect on communication as well.
For example, if the door-opening agent asks a team member to take over, it expects this member to come and take its axe. By perception, the agent derives whether its colleague perceived the message and decided to assist, or whether it should communicate more. The colleague might have a good reason to refuse, for example, because it already has to assist a third agent. It could communicate this to the requesting agent. The timing of this communication and the action to help the third agent should be synchronized; otherwise the requesting agent might, for example, unjustly believe that it is being ignored.
Such timing is facilitated by including communication into the game loop. Further issues concerning careful time management include a translation of time for the game engine to time for the agents.
For instance, if the door-opening agent sends the message "what tool are you carrying?" to another agent, it expects a reaction. It is not realistic to expect a response in the very next time step: the game engine could give priority to other processes first, and the other agent might need some time to reason about the question.
However, the agent also should not wait indefinitely, because the message may never have arrived, the other agent may have misunderstood the content, and so forth. So after a certain amount of time, the agent has to react, for example, by sending the same message again, or by sending the message "did you understand my previous message?" In the middle layer, a translation of time for the game engine (a number of time steps) to time for the agents (the time within which a reaction can be expected) has to be made.
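The engine-mediated communication with a waiting deadline described above might be sketched like this. The message routing, the timeout value, and the resend policy are all illustrative assumptions; the point is that delivery runs through the engine and that the asking agent translates engine ticks into "how long may I wait":

```python
class Engine:
    """The engine relays all messages, so delivery is tied to the game loop."""
    def __init__(self):
        self.queue = []
    def send(self, sender, receiver, content):
        self.queue.append((sender, receiver, content))
    def deliver(self, agents):
        # Delivery happens when the engine decides, not when the agent sends.
        for sender, receiver, content in self.queue:
            agents[receiver].inbox.append((sender, content))
        self.queue.clear()

class Agent:
    def __init__(self, name, engine, timeout=5):
        self.name, self.engine, self.timeout = name, engine, timeout
        self.inbox, self.pending, self.waiting_since = [], None, None
    def ask(self, other, question, now):
        self.engine.send(self.name, other, question)
        self.pending, self.waiting_since = (other, question), now
    def step(self, now):
        if self.inbox:
            self.waiting_since = None
            return self.inbox.pop(0)
        # Translate engine time steps into a waiting deadline for a reply.
        if self.waiting_since is not None and now - self.waiting_since > self.timeout:
            other, question = self.pending
            self.ask(other, question, now)   # resend after the deadline passes
        return None

engine = Engine()
a, b = Agent("a", engine), Agent("b", engine)
a.ask("b", "what tool are you carrying?", now=0)
for tick in range(1, 8):
    a.step(tick)                 # no reply arrives; a resends at tick 6
engine.deliver({"a": a, "b": b})
print(len(b.inbox))              # 2: the original message plus one resend
```

A real middle layer would also need to connect the two swap messages of an item exchange to each other, as argued above, but the deadline mechanism is the same.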
The examples in this subsection aim to make clear that communication is more than just an exchange of information. After sending a question or a command, the sender expects an answer or action.
If it does not see an effect of its communication action, for whatever reason, the sender will react to that. Decisions of agents depend on the information they receive through communication and perception, and their communication actions have an effect on the game world and the behavior of other agents.
Therefore, the communication processes and the actions in the game world have to be well synchronized.

In Section 2, we have shown current approaches to integrating agents into game engines.
It is clear that those solutions are pragmatic but do not really give room to fully use all aspects of agent technology in the game environment. In Section 3, we have illustrated how agent technology can contribute to the use of games for serious purposes and to more compelling interaction between NPCs. To overcome issues with synchronization, information representation, and communication, we analyze the connection between game and agent technology from three different perspectives, that is, the infrastructural, conceptual, and design perspectives.
For our solution, we look at the connection between the agents and the game engine starting from an infrastructural point of view. The main requirement is that, on the one hand, the game engine should have some control over the actions of the agents in order to control the overall gameplay and preserve physical realism. For instance, if an agent wants to move in a straight line to a position in the game world, but there is a wall between the agent and that point, then the game engine will prevent the agent from moving to the point it wants to reach; that is, the agent cannot just move through walls.
On the other hand, the agents should be autonomous to a certain level. For instance, if an agent is walking to a waypoint but reconsiders its decision and wants to turn back, it should not first have to walk to the waypoint and only there be able to turn back. Also, we want the agents to be able to keep reasoning full time and not be restricted to specific time slots allocated by the game engine. An important consideration in the connection between the agents and the game engine is which information is available to the agent, and when and how it gets that information.
Moreover, we have to consider when agents can perform actions in the game and which actions are available to the agent. With respect to the latter, one should think more in terms of abstractions than in terms of forbidden actions. For example, can an agent open a door, or must it move a door object from one position to another?
Often the translation between these types of actions is provided for the avatars steered by the user. However, it is not clear that the same set of translations applies for the nonplaying characters in the game. For example, current animation engines are capable of performing rudimentary path planning.
This means that characters can be given a single command to move through a room without bumping into any object. These commands might not be available to human players, but they are very efficient for the nonplaying characters. The above considerations all relate to the connection of a single agent to the game engine. In general, one would like to connect a complete multiagent system to the game, in which the agents can also communicate and coordinate their actions.
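An abstraction layer of this kind might look as follows. The command names (`open_door`, `move_to`) and the stubbed engine are hypothetical, chosen only to contrast abstract commands with raw object manipulation:

```python
class ActionInterface:
    """Hypothetical abstraction layer: high-level commands for NPCs,
    translated internally into primitive engine manipulations."""
    def __init__(self, engine):
        self.engine = engine

    def open_door(self, agent, door):
        # One abstract command instead of manipulating the door object's pose.
        self.engine.set_state(door, "open")
        self.engine.log.append((agent, "open_door", door))

    def move_to(self, agent, target):
        # The animation engine's rudimentary path planner does the rest.
        for waypoint in self.engine.plan_path(agent, target):
            self.engine.set_position(agent, waypoint)

class FakeEngine:
    """Minimal stand-in for a game engine, for illustration only."""
    def __init__(self):
        self.state, self.pos, self.log = {}, {}, []
    def set_state(self, obj, s): self.state[obj] = s
    def set_position(self, agent, p): self.pos[agent] = p
    def plan_path(self, agent, target):  # stand-in for real path planning
        return [target]

engine = FakeEngine()
actions = ActionInterface(engine)
actions.open_door("npc1", "door7")
actions.move_to("npc1", (3, 4))
print(engine.state["door7"], engine.pos["npc1"])  # open (3, 4)
```

The agent reasons in terms of `open_door` and `move_to`, while the engine remains free to refuse, animate, or reroute the underlying primitive actions.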
In order to fully profit from agent technology, one would especially want the agents to use their own high-level communication protocols that facilitate coordination. Such communication facilities are normally provided as standard by the agent platforms on which the agents reside; the communication facilities of game engines, by contrast, are not very suitable for this type of communication unless we extend them considerably.
The next question thus becomes how to connect the agent platforms to the game engine. Several solutions are possible. First, one can integrate the functionality of these platforms in the game engine.
In this case, the agents can be built as if they are running on an agent platform. Second, one can distribute the functionality over the game engine and the agents.
This means that some rudimentary functionality is incorporated in the game engine, but the agents have to gain some more elaborate communication functionalities to compensate for the loss of some features. For example, they might have to keep track of the other agents they can communicate with (storing agent names and addresses). A last option is to let the agents run on their own platform and connect the platform to the game engine.
One problem with this option is that the platform runs in parallel to the game engine, and the interactions between the agents are not available to the game engine. This might potentially lead to a loss of control and to inconsistencies between the agents and the game engine. We will opt for a position in the middle: we will transfer some of the communication functionalities to the game engine to preserve consistency and control.
However, we also will keep the agents running within their own platform. This is mainly done for some other facilities provided by the platforms, such as efficient sharing of reasoning engines by the agents and monitoring and debugging interfaces for the agents.
The last parts are important for designing and implementation, but can be decoupled in the runtime version of the game. In order to address all issues, we divide the connection into three stances: an infrastructural, a conceptual, and a design stance.
As indicated above, the infrastructural connection requires adjustments on both the agent side and the game engine side.
Therefore, although the connection principles might be platform independent, the actual implementation will not be completely platform independent. The standard way to ameliorate this point is to create a middleware API. Basically, connecting agent platforms to game engines is not different from connecting any other software together. So, in the end, we also will make use of the means available to connect independent threads of software. However, what is different is the perspective.
In most applications that connect software, there will be a single, well-defined thread of control. In our case, we want a kind of shared control that differs from traditional software solutions. This means that our infrastructural solutions should already take this perspective of shared control into account and be as flexible as possible, so that the way control is shared can be defined on higher levels. So, in our middleware, one can define the standard constructions that we assume to exist on both sides, but the way they work together is kept as flexible as possible.
The exact sharing of control is defined in the infrastructural stance. We describe the infrastructural stance in more detail in Section 5. The translations between information representations that are needed to connect the agents to the game are described using a conceptual stance. Most important will be the translation of actions of the agent into actions within the game engine and translations of changes in the world into percepts that can be handled by the agent.
We aim to use the High Level Architecture (HLA) standard for this purpose. This stance is described in Section 5. Finally, it is important to incorporate the agents explicitly in the design method of the games. The type of data that has to be generated or kept depends crucially on the way the agents need to use it. Therefore, if the world is created first and the agents are only added at the end, they might not have enough information available to act intelligently.
For example, if an agent has to take cover, it should be able to distinguish between an iron bar fence and a stone wall of the same dimensions. If the only data available is that there is an obstacle of certain dimensions, this distinction can hardly be deduced. Designing the environment with the possible actions and perceptions of the agents in mind will drastically change the way the world is created.
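The fence-versus-wall example can be made concrete by annotating environment objects with agent-relevant semantics rather than geometry alone. The property names below are hypothetical; the point is only that cover reasoning needs more than dimensions.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    """Sketch: an environment object annotated for agent reasoning.

    Geometry alone ("an obstacle of certain dimensions") is not enough;
    the boolean annotations are illustrative semantic properties.
    """
    width: float
    height: float
    blocks_sight: bool          # a stone wall does, an iron bar fence does not
    blocks_projectiles: bool

def provides_cover(obstacle, agent_height=1.8):
    # Cover requires concealment and protection, not just sufficient size.
    return (obstacle.height >= agent_height
            and obstacle.blocks_sight
            and obstacle.blocks_projectiles)

# Same dimensions, very different tactical value:
fence = Obstacle(width=5.0, height=2.0, blocks_sight=False, blocks_projectiles=False)
wall = Obstacle(width=5.0, height=2.0, blocks_sight=True, blocks_projectiles=True)
```

If only `width` and `height` were exported by the world-building tools, `provides_cover` could not be written at all, which is exactly why the agents must be considered during design.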
In Table 1, we summarize how the different issues that we focused on are dealt with within the different stances. For each issue, the table denotes the technique used in a particular stance to deal with it. Note that the issues are not all of the same type.
Synchronization, for example, is a technical issue and is therefore not really discussed in the design stance. Communication, in contrast, is such a general issue that elements of it are dealt with in all of the stances. In this section, we discuss the three stances of our approach (infrastructural, conceptual, and design) more extensively. For each of them, we indicate their contribution to the gaming scenarios described in the previous section.
As argued before, the topics of synchronization, information filtering, and communication play a fundamental role in coupling games and agents, so they will all be covered in this section as well. Synchronization is mainly addressed in the subsection about the infrastructural stance. Information filtering receives the most attention in the subsection about the conceptual stance. Communication involves several aspects and is therefore discussed in all three subsections.
In our approach, we view the game engine and the agents as asynchronous processes because, as discussed in Section 2, agents that are part of the game loop are restricted in their reasoning by time.
Therefore, we believe that a synchronous approach is not suitable for intelligent agents with complex reasoning processes. Although we are investigating a coupling between two specific types of asynchronous processes, infrastructurally our case is similar to other asynchronous couplings.
There are four basic tasks that need to be performed by the infrastructure. First, information about the game environment needs to be provided to the agents to allow them to reason about the game. Second, the actions that the agents have selected need to be transferred to the game engine to be executed. Third, communication between agents can be affected by the game environment and thus needs to flow from the agents, through the game engine, and back to the agents.
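The asynchronous coupling behind the first two tasks can be sketched with two queues and a thread: the game loop pushes percepts and pulls actions without ever blocking on the agent, while the agent reasons at its own pace. All names and the fixed tick count are illustrative, not the paper's actual infrastructure.

```python
import queue
import threading
import time

percepts = queue.Queue()   # game engine -> agent platform
actions = queue.Queue()    # agent platform -> game engine
executed = []              # actions the engine actually carried out

def agent(steps=3):
    for _ in range(steps):
        p = percepts.get()               # the agent may think as long as it needs
        actions.put("react_to:" + p)     # complex reasoning would go here

def game_loop(ticks=3):
    for tick in range(ticks):
        percepts.put("world_state@%d" % tick)      # task 1: provide information
        try:
            executed.append(actions.get_nowait())  # task 2: execute actions,
        except queue.Empty:                        # but never stall a frame
            pass
        time.sleep(0.01)                 # fixed frame budget per tick

t = threading.Thread(target=agent)
t.start()
game_loop()
t.join()                                 # agent ends after its 3 percepts
while not actions.empty():               # drain actions still in flight
    executed.append(actions.get())
```

The `get_nowait` call is the key design choice: a slow agent merely misses a tick, rather than freezing the game loop, which is exactly the shared-control behavior a synchronous coupling cannot offer.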
The number of days a property has been on the market (days-on-market, or DOM) is like the odometer on a car. The perception is that properties that have been on the market a long time are either less desirable or have a major defect of some sort, like a high-mileage car. While this isn't always the case, the fact remains that a lot of miles on your DOM number hurts an agent's ability to sell that property.
In fact, it hurts so much that agents often relist a property on the MLS so that the DOM number rolls back to zero. Fortunately, MLS systems keep a running record of when a property has been listed and of the changes made to those listings.
In Silicon Valley, you can take a property off the market and relist it later so the DOM count starts over. More clever manipulators of these numbers will often subtly alter the listed address of the property (from, say, "Webster St." to a slight variant) so that it appears as a new listing.
A broader search of the MLS incorporating more wildcards will most likely expose this type of manipulation. Pocket listings are houses for sale that are not listed on the MLS. One reason why agents do this is to keep a property's odometer fresh during slow holiday months. It doesn't do a lot of good for a seller to have a property, particularly an expensive one, on the market when no one's looking.
Another reason agents use pocket listings is to try to complete both sides of the transaction and take both halves of the commission. After all, if no one else knows the property is for sale, no one else can encroach on the deal.
But representing both the buyer and the seller creates conflicts of interest, and when you use a double agent (called a dual agent), you may have given up your right to have someone who represents only you. Would you go to court and use your opponent's lawyer?
Of course not.