In the last decade, multi-agent systems have become both widely applied and
increasingly complex. The applications include the use of agents as autonomous decision
makers in scenarios that are often safety-critical, dangerous, or high-impact (traffic
control, autonomous satellites, computational markets). The complexity arises from
the fact that not only do we expect the agent to make decisions in situations that are
not anticipated beforehand, but the agent also interacts with other complex agents, with
humans and organisations, and it lives in a dynamic environment (actuators of an agent
can fail, communication is prone to error, human response may be ambiguous, rules
of an organisation may leave behaviour open or over-constrained, and environments
may change ‘spontaneously’ or as a result of other agents acting upon them).
Taken together, these two facts call for a rigorous Specification and Verification
of Multi-Agent Systems. Since intelligent agents are computational systems, it is no
wonder that this activity builds upon and extends concepts and ideas of specifying
and verifying computer-based systems in general. For one thing, it is taken as axiomatic
that the tools are mainly logical, or in any case formal. But although traditional techniques
of specification and verification go a long way when reasoning about the correctness
of a single agent, additional questions arise already at this level: do
we ‘only’ require the agent to behave well under a pre-defined set of inputs, or do
we want to allow for (partially) undefined scenarios? And do we care about the ‘correctness’
of an agent’s beliefs, desires and intentions during a computation? How do
we want to guarantee that the agent ‘knows what he is doing’, has ‘reasonable desires’
and ‘only drops an intention if its goal is fulfilled, or cannot reasonably
be fulfilled any longer’? And a question that precedes this ‘how to guarantee’ question is
equally important: what exactly do we mean by those requirements?
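To make the flavour of this last question concrete, consider the intention-dropping requirement. One possible formalisation, by no means the only one, is the following sketch in a CTL-style BDI logic, where the modalities $\mathsf{Bel}$ and $\mathsf{Intend}$, the path quantifiers $\mathsf{A}$ and $\mathsf{E}$, and the temporal operators $\Diamond$ (eventually) and $\mathsf{U}$ (until) are assumed here purely for illustration:
\[
\mathsf{Intend}(\Diamond\varphi) \;\rightarrow\; \mathsf{A}\big(\, \mathsf{Intend}(\Diamond\varphi)\ \mathsf{U}\ (\mathsf{Bel}(\varphi) \lor \lnot\mathsf{Bel}(\mathsf{E}\Diamond\varphi)) \,\big)
\]
Read: if the agent intends to eventually achieve $\varphi$, then along every course of events it keeps that intention until it either believes $\varphi$ has been achieved or no longer believes $\varphi$ to be achievable. Even this small sketch forces choices that the informal phrasing leaves open, for instance between believed and actual achievement of the goal.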