In artificial intelligence, reactive planning denotes a group of techniques for action selection by autonomous agents. These techniques differ from classical planning in two aspects. First, they operate in a timely fashion and hence can cope with highly dynamic and unpredictable environments. Second, they compute just one next action in every instant, based on the current context. Reactive planners often (but not always) exploit reactive plans, which are stored structures describing the agent's priorities and behaviour. The term reactive planning goes back to at least 1988, and is synonymous with the more modern term dynamic planning.
Reactive plan representation
There are several ways to represent a reactive plan. All require a basic representational unit and a means to compose these units into plans.
= Condition-action rules (productions) =
A condition-action rule, or if-then rule, is a rule of the form: if condition then action. Such rules are also called productions. The meaning of the rule is as follows: if the condition holds, perform the action. The action can be either external (e.g., pick something up and move it) or internal (e.g., write a fact into internal memory, or evaluate a new set of rules). Conditions are normally Boolean, and the action can either be performed or not.
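As an illustration, the following is a minimal sketch (in Python, with purely hypothetical rule names and context keys, not tied to any particular architecture) of how a set of condition-action rules can be evaluated against an agent's current context:

```python
# Minimal sketch of condition-action (if-then) rules driving an agent.
# All names here are illustrative, not taken from any particular architecture.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Rule:
    name: str
    condition: Callable[[Dict], bool]   # evaluated against the agent's context
    action: Callable[[Dict], None]      # external or internal effect

rules = [
    Rule("pick-up-item",
         condition=lambda ctx: ctx["item_visible"] and not ctx["holding_item"],
         action=lambda ctx: ctx.update(holding_item=True)),          # external action
    Rule("remember-obstacle",
         condition=lambda ctx: ctx["obstacle_ahead"],
         action=lambda ctx: ctx["memory"].append("obstacle")),       # internal action
]

def step(ctx: Dict) -> None:
    """One reactive step: fire every rule whose condition currently holds."""
    for rule in rules:
        if rule.condition(ctx):
            rule.action(ctx)

context = {"item_visible": True, "holding_item": False,
           "obstacle_ahead": False, "memory": []}
step(context)
print(context["holding_item"])   # True
```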
Production rules may be organized in relatively flat structures, but are more often organized into a hierarchy of some kind. For example, subsumption architecture consists of layers of interconnected behaviours, each of which is effectively a finite state machine that acts in response to an appropriate input. These layers are then organized into a simple stack, with higher layers subsuming the goals of the lower ones. Other systems may use trees, or may include special mechanisms for changing which goal or rule subset is currently most important. Flat structures are relatively easy to build, but allow only for the description of simple behaviour, or require immensely complicated conditions to compensate for the lack of structure.
An important part of any distributed action selection algorithm is a conflict resolution mechanism. This is a mechanism for resolving conflicts between proposed actions when the conditions of more than one rule hold in a given instant. The conflict can be resolved, for example, by one of the following (a minimal priority-based sketch follows the list):
assigning fixed priorities to the rules in advance,
assigning preferences (e.g. in Soar architecture),
learning relative utilities between rules (e.g. in ACT-R),
exploiting a form of planning.
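A minimal sketch of the first option, fixed priorities assigned in advance, is given below; the rules, conditions and priorities are hypothetical and serve only to illustrate the mechanism:

```python
# Sketch: conflict resolution by fixed priorities assigned in advance.
# When several rules match, only the highest-priority one fires.

def flee(ctx):   ctx["action"] = "flee"
def eat(ctx):    ctx["action"] = "eat"
def wander(ctx): ctx["action"] = "wander"

# (priority, condition, action) -- lower number = more important
prioritized_rules = [
    (0, lambda ctx: ctx["predator_near"], flee),
    (1, lambda ctx: ctx["food_near"],     eat),
    (2, lambda ctx: True,                 wander),   # default behaviour
]

def select_action(ctx):
    matching = [(p, act) for p, cond, act in prioritized_rules if cond(ctx)]
    _, best = min(matching, key=lambda t: t[0])   # highest-priority matching rule
    best(ctx)

ctx = {"predator_near": True, "food_near": True}
select_action(ctx)
print(ctx["action"])    # "flee": both rules match, but fleeing has priority
```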
Expert systems often use other, simpler heuristics, such as recency, for selecting rules, but it is difficult to guarantee good behaviour in a large system with such simple approaches.
Conflict resolution is only necessary for rules that want to take mutually exclusive actions (cf. Blumberg 1996).
Some limitations of this kind of reactive planning can be found in Brom (2005).
= Finite State Machines =
A finite state machine (FSM) is a model of the behaviour of a system. FSMs are widely used in computer science; modelling the behaviour of agents is only one of their possible applications.
A typical FSM, when used to describe the behaviour of an agent, consists of a set of states and transitions between these states. The transitions are in fact condition-action rules. In every instant, just one state of the FSM is active and its transitions are evaluated. If a transition is taken, it activates another state. In general, therefore, transitions are rules of the form: if condition then activate-new-state. In some systems, a transition can also connect a state to itself, allowing transition actions to execute without actually changing the state.
There are two ways an FSM can produce behaviour, depending on what the designer associates with the states: either 'acts' or scripts. An 'act' is an atomic action that the agent should perform whenever its FSM is in the given state; this action is then performed in every time step. The latter case, however, is more common: every state is associated with a script describing a sequence of actions the agent has to perform while its FSM is in that state. If a transition activates a new state, the former script is simply interrupted and the new one is started.
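The following is a minimal sketch of such an FSM, with each state holding a script (here just a list of action names) and transitions expressed as condition-action rules; the states, actions and context keys are illustrative only:

```python
# Minimal FSM sketch: states hold scripts, transitions are condition-action rules.

states = {
    "patrol": {
        "script": ["walk_to_waypoint", "look_around"],
        "transitions": [
            # (condition over the agent's context, next state)
            (lambda ctx: ctx["enemy_visible"], "attack"),
        ],
    },
    "attack": {
        "script": ["aim", "fire"],
        "transitions": [
            (lambda ctx: not ctx["enemy_visible"], "patrol"),
        ],
    },
}

def tick(current_state, script_index, ctx):
    """One time step: check transitions first, then run the next script action."""
    for condition, next_state in states[current_state]["transitions"]:
        if condition(ctx):
            # the running script is simply interrupted and the new one started
            return next_state, 0
    script = states[current_state]["script"]
    print("performing:", script[script_index % len(script)])
    return current_state, script_index + 1

state, idx = "patrol", 0
for ctx in [{"enemy_visible": False}, {"enemy_visible": True}, {"enemy_visible": True}]:
    state, idx = tick(state, idx, ctx)
```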
If a script is more complicated, it can be broken down into several scripts, and a hierarchical FSM can be exploited. In such an automaton, every state can contain substates. Only the states at the atomic level are associated with a (simple) script or an atomic action.
Computationally, hierarchical FSMs are equivalent to flat FSMs: every hierarchical FSM can be converted to a classical FSM. However, the hierarchical approach makes designs easier to build and maintain.
See the paper by Damian Isla (2005) for an example of an action selection mechanism (ASM) for computer game bots that uses hierarchical FSMs.
= Fuzzy approaches =
Both if-then rules and FSMs can be combined with fuzzy logic. The conditions, states and actions are then no longer Boolean or "yes/no"; they become approximate and smooth. Consequently, the resulting behaviour transitions more smoothly, especially during transitions between two tasks. However, evaluation of fuzzy conditions is much slower than evaluation of their crisp counterparts.
See the architecture of Champandard (2003) for an example.
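As a rough, hand-rolled illustration (not the cited architecture), a fuzzy variant of two competing rules can weight actions by degrees of membership instead of firing a single crisp rule; all membership functions and numbers below are assumptions chosen for the example:

```python
# Sketch: fuzzy variant of two competing rules.
# Degrees of membership in [0, 1] replace crisp yes/no conditions,
# so the resulting behaviour blends smoothly between the two tasks.

def near_enemy(distance):          # fuzzy membership: 1.0 when close, 0.0 when far
    return max(0.0, min(1.0, (10.0 - distance) / 10.0))

def low_health(health):            # fuzzy membership: 1.0 at zero health, 0.0 at full
    return max(0.0, min(1.0, (100.0 - health) / 100.0))

def select_speed(distance, health):
    attack = near_enemy(distance) * (1.0 - low_health(health))   # "attack" activation
    flee   = near_enemy(distance) * low_health(health)           # "flee" activation
    # blend the two behaviours' preferred speeds by their activations
    total = attack + flee or 1.0
    return (attack * 1.0 + flee * -1.0) / total   # +1 = move toward, -1 = move away

print(select_speed(distance=3.0, health=80.0))   # mostly advances
print(select_speed(distance=3.0, health=20.0))   # mostly retreats
```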
= Connectionist approaches =
Reactive plans can also be expressed by connectionist networks such as artificial neural networks or free-flow hierarchies. The basic representational unit is a unit with several input links that feed it with "an abstract activity" and output links that propagate the activity to subsequent units. Each unit itself works as an activity transducer. Typically, the units are connected in a layered structure.
The advantages of connectionist networks are, first, that the resulting behaviour is smoother than behaviour produced by crisp if-then rules and FSMs; second, that the networks are often adaptive; and third, that an inhibition mechanism can be used, so behaviour can also be described proscriptively (with rules alone, behaviour can be described only prescriptively). However, these methods also have several drawbacks. First, it is much more complicated for a designer to describe behaviour with a network than with if-then rules. Second, only relatively simple behaviour can be described, especially if the adaptive feature is to be exploited.
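A minimal sketch of activity propagation in a small layered network with inhibition is shown below; it is illustrative only and does not reproduce any specific published architecture such as Tyrrell's free-flow hierarchy:

```python
# Sketch: activity propagation in a tiny layered network.
# Each unit sums weighted activity from its inputs; negative weights inhibit.

import math

def unit(inputs_and_weights):
    """A unit transduces the weighted sum of its inputs into an activity in (0, 1)."""
    total = sum(activity * weight for activity, weight in inputs_and_weights)
    return 1.0 / (1.0 + math.exp(-total))        # squashing transducer

# sensory layer (activities fed in from the environment)
hunger, predator_near = 0.9, 0.8

# behaviour layer: "feed" is excited by hunger but inhibited by a nearby predator
feed = unit([(hunger, 2.0), (predator_near, -3.0)])
flee = unit([(predator_near, 3.0)])

# the behaviour with the highest activity wins
print("flee" if flee > feed else "feed", round(feed, 2), round(flee, 2))
```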
Reactive planning algorithms
A typical reactive planning algorithm simply evaluates if-then rules or computes the state of a connectionist network. However, some algorithms have special features:
Rete evaluation: with a proper logic representation (which is suitable only for crisp rules), the rules need not be re-evaluated in every time step. Instead, a form of cache storing the evaluation from the previous step can be used (a simplified sketch of this caching idea follows the list).
Scripting languages: sometimes, the rules or FSMs are themselves the primitives of an architecture (e.g., in Soar). More often, however, reactive plans are programmed in a scripting language in which the rules are only one of the primitives (as in JAM or ABL).
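The full Rete algorithm maintains a network of partial matches shared between rules; the sketch below, referred to above, illustrates only the underlying caching idea in a much cruder form, re-evaluating a condition only when one of the facts it depends on has changed (all names are hypothetical):

```python
# Simplified sketch of the caching idea behind Rete-style evaluation:
# a condition is re-evaluated only if a fact it depends on changed since the last step.
# (The real Rete algorithm builds a network of shared partial matches; this is far cruder.)

class CachedCondition:
    def __init__(self, depends_on, predicate):
        self.depends_on = set(depends_on)   # fact names this condition reads
        self.predicate = predicate
        self.cached = None

    def evaluate(self, facts, changed_facts):
        if self.cached is None or self.depends_on & changed_facts:
            self.cached = self.predicate(facts)   # recompute only when needed
        return self.cached

cond = CachedCondition({"enemy_distance"}, lambda f: f["enemy_distance"] < 5)
facts = {"enemy_distance": 10, "health": 100}

print(cond.evaluate(facts, changed_facts={"enemy_distance", "health"}))  # False, computed
print(cond.evaluate(facts, changed_facts={"health"}))                    # False, from cache
facts["enemy_distance"] = 3
print(cond.evaluate(facts, changed_facts={"enemy_distance"}))            # True, recomputed
```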
Steering
Steering is a special reactive technique used in the navigation of agents. The simplest form of reactive steering is employed in Braitenberg vehicles, which map sensor inputs directly to effector outputs and can thus follow or avoid a stimulus. More complex systems are based on a superposition of attractive and repulsive forces that act on the agent. This kind of steering is based on the original work on boids by Craig Reynolds.
By means of steering, one can achieve a simple form of:
navigation towards a goal
obstacle avoidance behaviour
wall following behaviour
approaching an enemy
predator avoidance
crowd behaviour
The advantage of steering is that it is computationally very efficient; in computer games, hundreds of NPCs can be driven by this technique. In more complicated terrain (e.g., a building), however, steering must be combined with path-finding (as, e.g., in Milani and Poggioni (2005)), which is a form of planning.
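A minimal sketch of force-based steering is given below: an attractive force towards a goal and repulsive forces from nearby obstacles are summed to give the direction the agent should move in; the radius and positions are illustrative assumptions:

```python
# Sketch: steering by superposition of attractive and repulsive forces.

import math

def sub(a, b): return (a[0] - b[0], a[1] - b[1])

def norm(v):
    length = math.hypot(*v) or 1e-9
    return (v[0] / length, v[1] / length), length

def steering_force(position, goal, obstacles, repulsion_radius=3.0):
    # attractive force: unit vector towards the goal
    (gx, gy), _ = norm(sub(goal, position))
    fx, fy = gx, gy
    # repulsive forces: push away from obstacles, stronger when closer
    for obs in obstacles:
        (ox, oy), dist = norm(sub(position, obs))
        if dist < repulsion_radius:
            strength = (repulsion_radius - dist) / repulsion_radius
            fx += ox * strength
            fy += oy * strength
    return fx, fy

# agent at the origin, goal to the east, an obstacle slightly in the way
force = steering_force(position=(0.0, 0.0), goal=(10.0, 0.0), obstacles=[(2.0, 0.5)])
print(force)   # pushed towards the goal but deflected away from the obstacle
```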
See also
Behavior based AI
References
Blumberg, B.: Old Tricks, New Dogs: Ethology and Interactive Creatures. PhD thesis, Massachusetts Institute of Technology (1996).
Brom, C.: Hierarchical Reactive Planning: Where is its limit? In: Proceedings of MNAS workshop. Edinburgh, Scotland (2005)
Bryson, J.: Intelligence by Design: Principles of Modularity and Coordination for Engineering Complex Adaptive Agents. PhD thesis, Massachusetts Institute of Technology (2001)
Champandard, A. J.: AI Game Development: Synthetic Creatures with Learning and Reactive Behaviors. New Riders, USA (2003)
Grand, S., Cliff, D., Malhotra, A.: Creatures: Artificial life autonomous software-agents for home entertainment. In: Johnson, W. L. (eds.): Proceedings of the First International Conference on Autonomous Agents. ACM press (1997) 22-29
Huber, M. J.: JAM: A BDI-theoretic mobile agent architecture. In: Proceedings of the Third International Conference on Autonomous Agents (Agents'99). Seattle (1999) 236-243
Isla, D.: Handling complexity in Halo 2. In: Gamasutra online, 03/11 (2005)
Milani, A., Poggioni, V.: Planning in Reactive Environment. In: Computational Intelligence, 23(4), Blackwell-Wiley (2005) 439-463
Reynolds, C. W. Flocks, Herds, and Schools: A Distributed Behavioral Model. In: Computer Graphics, 21(4) (SIGGRAPH '87 Conference Proceedings) (1987) 25-34.
de Sevin, E., Thalmann, D.: A Motivational Model of Action Selection for Virtual Humans. In: Computer Graphics International (CGI), IEEE Computer Society Press, New York (2005)
Softimage/Behavior product. Avid Technology Inc.
Tyrrell, T.: Computational Mechanisms for Action Selection. Ph.D. Dissertation. Centre for Cognitive Science, University of Edinburgh (1993)
van Waveren, J. M. P.: The Quake III Arena Bot. Master thesis. Faculty ITS, University of Technology Delft (2001)
Wooldridge, M. An Introduction to MultiAgent Systems. John Wiley & Sons (2009)
Pogamut2. Platform for fast agent prototyping in Unreal Tournament 2004, using POSH, a reactive planner designed and developed by J. J. Bryson.
Wortham, R. H., Gaudl, S. E., Bryson, J. J.: Instinct: A Biologically Inspired Reactive Planner for Intelligent Embedded Systems. In: Cognitive Systems Research (2018)
External links
Creatures, an implementation of reactive planning by Grand et al.