The term agent is heavily overused. At one end of the spectrum, we have systems like SNMP agents, which are nothing more than servers providing data defined by Management Information Bases (MIBs) to their clients, the management applications. At the other end, there are expert systems with huge knowledge bases, which are also considered agents due to their intelligence-like behavior.
We will consider an agent to be a computational entity with the following characteristics:
Most of the research on the intelligence aspects
of agents comes from Distributed Artificial Intelligence (DAI).
Traditional Artificial Intelligence (AI) is concerned with discovering
methodologies and technologies that address so-called hard problems.
A
hard problem is one that is too difficult to solve using traditional,
analytical means, or whose solution would take a prohibitive length of
time to obtain or to execute. AI does not always strive to provide an exact
solution to a given problem. Very often, a solution that is good enough
suffices. Usually, an available evaluation function provides a measure
of goodness.
Distributed AI is an extension of AI ideas
that applies to Multi-Agent Systems (MAS). Instead of one
centralized and usually very large application that encodes the complete intelligence
of the system, a number of relatively small systems, called agents,
engage in a cooperative effort to solve a problem. This does not
imply that a big system is simply divided into smaller pieces. For example,
several centralized applications, each capable of addressing certain
aspects of a problem, can be tied together by a communication system,
which allows them to exchange viewpoints and devise strategies
for making progress or for combining their results into a solution. This kind of
problem solving is called Distributed Problem Solving (DPS).
Multi-Agent Systems come in two flavors. Cooperative Distributed Problem Solving or Cooperative Multi-Agent Systems (CMAS) consist of agents that cooperate to attain a common goal.
Self-Interested Multi-Agent Systems (SMAS) are societies of competitive agents, each interested in attaining its individual goals. Antagonistic behavior does not preclude some degree of cooperation: even self-interested agents may form alliances if doing so helps to serve their individual interests.
To fulfill their purpose, the member agents of a MAS have to be able to communicate, coordinate their individual behaviors, and negotiate compromises with others.
Agent Communication Languages (ACLs)
are used for inter-agent communication. These languages are based on speech
act theory, which comes from linguistics and the philosophy of language. It is an attempt to formalize
the ways humans use language to accomplish everyday acts such as requests, orders,
promises, etc.
ACLs range from some form of primitive communication to elaborate standards. In primitive communication, there are only a few signals that the agents send to one another. At the next level of complexity, agents may pass ad hoc messages. For example, in actor languages, agents called actors are completely reactive; i.e., they perform computations in response to received messages. The use of blackboards is also widespread. A blackboard provides a medium for exchanging data, which participating agents write on the blackboard. Any agent can access the partial solutions and messages posted on the blackboard, so it may incorporate the views of others into its own. It may also ask the community for other data.
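The blackboard pattern can be sketched in a few lines. This is a minimal illustration, not a standard API; the class and method names are hypothetical.

```python
# Minimal blackboard sketch (hypothetical names): agents post partial
# solutions to a shared board and read what others have posted.

class Blackboard:
    def __init__(self):
        self.entries = []            # (author, data) pairs, in posting order

    def post(self, author, data):
        self.entries.append((author, data))

    def read(self, exclude=None):
        """Return posted data, optionally hiding one agent's own posts."""
        return [d for a, d in self.entries if a != exclude]

board = Blackboard()
board.post("agent-A", {"partial": "route via node 3"})
board.post("agent-B", {"partial": "route via node 7"})

# agent-A incorporates the views of others into its own solution
others = board.read(exclude="agent-A")
```

In a real system, access to the board would typically be synchronized or mediated by a supervising agent, as discussed below under coordination.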
Standard ACLs provide communication means that allow agents coming from various sources to communicate. The Knowledge Query and Manipulation Language (KQML) is an evolving standard developed under the umbrella of the Knowledge Sharing Effort (KSE), funded by DARPA. KQML can be viewed at three levels. The content layer specifies the actual messages. The message layer comprises the performatives provided by the language, like tell, reply, advertise, ask-if, etc. The protocol for message delivery defines the communication layer.
Knowledge Interchange Format (KIF) is a language for encoding knowledge. It can be used with KQML to format messages that are passed between agents.
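To make the layering concrete, the following sketch renders a KQML-style message as an s-expression, with a KIF expression as its content. The helper function and the specific field values are illustrative; consult the KQML specification for the authoritative syntax.

```python
def kqml(performative, **params):
    """Render a KQML-style message as an s-expression string.
    Parameter names follow common KQML usage (:sender, :receiver, ...)."""
    fields = " ".join(f":{k} {v}" for k, v in params.items())
    return f"({performative} {fields})"

# ask-if is a performative (message layer); the :content field carries a
# KIF expression (content layer). Agent names here are hypothetical.
msg = kqml("ask-if",
           sender="agent-A",
           receiver="agent-B",
           language="KIF",
           content="(> (size block-a) (size block-b))")
```

The communication layer (how `msg` actually reaches agent-B, e.g., over TCP) is deliberately left out of the sketch.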
Coordination can be ensured in a number of ways. In organizational coordination, the agents are placed into a certain organizational (e.g., hierarchical) structure that enforces certain communication patterns (e.g., master-slave or parent-child relationships). Sometimes, a centralized coordinator is required to enforce the patterns. For example, access to a blackboard might be controlled by a supervising agent. A primary example of organizational coordination, multi-agent planning, is well rooted in traditional Artificial Intelligence. A number of agents attempt to construct a global problem-resolution plan. Individual plans must avoid conflicts with the plans of others while satisfying global constraints. In a centralized approach, this is achieved by combining a number of individual plans in a process coordinated by a supervisor. In a distributed approach, agents share their plans until all conflicts are removed from the individual plans.
Organizational coordination does not scale well and may involve bottlenecks and central points of failure.
An alternative is contracting. In this scheme, agents use a Contract-Net Protocol (CNP) to establish contract relationships with other agents. Agents can act as both managers and contractors. When an agent has a task that it is not capable of achieving, it advertises the job to other agents. Agents that can do the job submit bids. The manager uses certain criteria to select one agent out of all the bidders. The selected agent becomes a contractor after signing the contract. A contractor may subcontract its tasks to others.
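The announce-bid-award cycle of the CNP can be sketched as follows. The class names, the skills/price attributes, and the lowest-price selection criterion are illustrative assumptions; a real CNP implementation would exchange the announcements and bids as messages.

```python
# Contract-Net sketch (hypothetical names): a manager announces a task,
# capable agents bid, and the manager awards the contract to the best bid.

class Bidder:
    def __init__(self, name, skills, price):
        self.name, self.skills, self.price = name, skills, price

    def bid(self, task):
        """Bid only on tasks the agent is capable of performing."""
        return self.price if task in self.skills else None

def award(task, agents):
    """Manager side: collect bids and pick a contractor (cheapest wins)."""
    bids = [(a.bid(task), a) for a in agents]
    bids = [(price, a) for price, a in bids if price is not None]
    if not bids:
        return None                          # no capable contractor found
    return min(bids, key=lambda b: b[0])[1]

agents = [Bidder("A", {"paint"}, 300),
          Bidder("B", {"paint", "plumb"}, 250),
          Bidder("C", {"plumb"}, 100)]
contractor = award("paint", agents)          # A bids 300, B bids 250, C abstains
```

A selection criterion based only on price ignores the trust issue discussed below; a manager could equally weight bids by each bidder's past performance.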
Although communication intensive, contracting is very flexible, as agents can be added at will, provided that they adhere to the rules of the CNP. However, an issue of trust comes into play. The manager has to have a certain degree of trust in the ability of the bidders to fulfill the contract. For example, the level of trust can be determined by past experience. If a painter did not do a good job, your house will be painted by somebody else next time, unless the price is so good that it counterbalances a few imperfections.
Associated with trust are social rules that define the common beliefs of agents. An agent may base its actions upon its understanding of the current situation, according to the data describing the situation and the social rules of the system. For example, a driver crossing an intersection on a green light believes that the cars coming from both sides see red lights and will stop. The agent trusts that the other agents play by the same rules.
Formally, negotiation is a process of resolving conflicts and reaching a consensus through inter-agent communication. This process can be cooperative or competitive in nature, depending on whether the agents try to attain one global goal or to satisfy their own individual goals.
Every action of an agent is based on its beliefs, desires, and intentions. The BDI model implies that an agent has a certain set of data that it considers to be true (beliefs). The efforts that the agent undertakes to attain its goals are consequences of the agent's desires. Desires can be built in by the designer of an agent, or they can be generated in response to changes in the environment or interactions with other agents. While desires describe potential courses of action, the agent's actual plans for the future are represented by its intentions.
In the BDI architecture, a special process, an interpreter, controls the behavior of an agent. It updates the agent's beliefs from observations made in the real world, generates desires in response to changes in the agent's beliefs, and selects certain desires, which become the agent's intentions for future endeavors.
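One cycle of such an interpreter can be sketched as below. This is a deliberately naive illustration: belief revision is a simple dictionary merge, desires are derived from hypothetical (condition, desire) rules, and the intention-selection policy just commits to the first option.

```python
# One BDI interpreter cycle (illustrative): observe -> update beliefs ->
# generate desires -> commit to intentions.

def interpreter_step(beliefs, observations, option_rules):
    """option_rules is a list of (condition, desire) pairs, where each
    condition is a predicate over the belief set."""
    beliefs = {**beliefs, **observations}      # naive belief revision
    desires = [d for cond, d in option_rules if cond(beliefs)]
    intentions = desires[:1]                   # commit to the first option
    return beliefs, desires, intentions

# Hypothetical rules for a cleaning robot
rules = [(lambda b: b.get("battery", 100) < 20, "recharge"),
         (lambda b: b.get("dirt", False), "clean")]

beliefs, desires, intentions = interpreter_step(
    {"battery": 80, "dirt": False},            # prior beliefs
    {"battery": 15, "dirt": True},             # fresh observations
    rules)
```

A production BDI system would also consult a plan library to turn each intention into executable steps, which is the role of the plan library described next.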
A plan library is a repository of the plans available to the agent. An agent receives input from the environment through receptors and acts on the environment by issuing commands to its effectors (manipulators).
Agents can be categorized along several lines.
With respect to the task that an agent performs, we can differentiate between
user agents and service agents. User agents are owned by a user and
attempt to satisfy the user's needs. An example of a user agent is an
interface agent, which we will describe in the next section. Service agents
provide certain services to the general public or are part of the infrastructure
that supports the provision of services. Service agents are not owned by any
particular user.
Location is another line of categorization. Agents can be stationary or mobile. A stationary agent runs on a certain node and fulfills its purpose either locally or through communication with others. In contrast, mobile agents are capable of migrating from one location to another and executing within various environments. They have direct access to data in many locations.
With respect to the source of intelligence, agents can be programmable or learning. A programmable agent obtains all of its desires from explicit instructions provided by its creator. A learning, or adaptable, agent can acquire some of its skills. For example, a user agent may be designed to build a profile of the user from its interactions with the user. An agent may also rebuild its library of plans if interactions with the environment justify modifications. A very popular and powerful strategy is reinforcement learning. In this strategy, an agent receives feedback on the results of a performed task. If the feedback is positive, then the way the task has been performed is reinforced; i.e., it will be more likely to be used in the future. A number of adaptation techniques can be used, ranging from more or less formal methods based on symbolic learning to soft approaches based on neural networks, genetic algorithms, fuzzy sets, etc.
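The reinforcement idea can be shown with a minimal sketch: the agent keeps a weight per way of performing a task, and positive feedback increases that weight, making the corresponding choice proportionally more likely. The class name and the multiplicative update rule are illustrative assumptions, not a standard algorithm.

```python
import random

# Reinforcement sketch (hypothetical names): weights bias a probabilistic
# choice among ways of performing a task; feedback adjusts the weights.

class Reinforcer:
    def __init__(self, options):
        self.weights = {o: 1.0 for o in options}

    def choose(self, rng):
        """Pick an option with probability proportional to its weight."""
        total = sum(self.weights.values())
        r, acc = rng.random() * total, 0.0
        for option, w in self.weights.items():
            acc += w
            if r <= acc:
                return option
        return option                       # guard against rounding

    def feedback(self, option, positive):
        self.weights[option] *= 2.0 if positive else 0.5

agent = Reinforcer(["method-a", "method-b"])
agent.feedback("method-a", positive=True)   # method-a is now twice as likely
```

Full reinforcement-learning methods such as Q-learning refine this idea with discounted future rewards rather than a single scalar update.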
An interface agent is a user agent that acts
as an intermediary between the user and the rest of the networked world.
The main applications of interface agents are in data filtering, information
retrieval, and personal assistance.
Data-filtering agents use user instructions and/or user and device profiles to reject incoming information that, in their opinion, does not interest the user. For example, a neural network has been used to determine which news postings should be presented to the user. Outside sources feed vast amounts of data to the user, but only some of it passes the interface agent.
In contrast to filtering agents, information retrieval agents are proactive. They also limit the amount of data with which the user is presented, but they achieve this goal by requesting information in which, according to their beliefs, their owner is interested. They can be stationary or mobile. A stationary information retrieval agent communicates with remote sources of information. If interesting data is reported, then the agent organizes its delivery. Such an agent runs on the user's workstation at all times. A mobile agent can be used instead to move between data sources and check the data locally; it does not consume the workstation's resources while away.
A Personal Digital Assistant (PDA) is an agent that helps its owner with routine tasks. In addition to a user profile that includes the user's abilities to perform tasks in a certain domain, a PDA has knowledge of that domain and expertise in using it. The expertise can be hard-coded by the creator of the agent, or it can be acquired. For example, a novice agent that is exposed to several users with various degrees of experience may use the knowledge collected from more experienced users to help the others. Generalization techniques can be used, which allow the user to be creative. For example, an agent may notice that a user performs certain tasks that are not necessary to achieve the goal.
A mobile agent is an agent that can move between
several locations in a heterogeneous environment. The use of the term agent
implies that a mobile agent is characterized by the same features as a
stationary agent. In addition to defining the basic agent model,
a mobile agent has to comprise a life-cycle model, a computational
model, a security model, a communication model, and a
navigation model.
The agent model is the same as for the agents that we have discussed so far.
The life-cycle model describes how the agent is
created, initiated, suspended, restarted, stopped, deleted, etc. Usually,
an attribute is used to indicate the state of the agent, which describes
a particular phase in the life of the agent. Two types of agents can be
distinguished with respect to the agent life cycle. Persistent agents
are capable of saving their execution environment, so that it can be restored
in another location after the agent has migrated. Such agents suspend their
execution on one node and restart from exactly the same point on another node.
Task-based agents restart from the same initialization point at
each visited location. They can still arrange for certain data to be transferred
along with them, but the execution environment is lost.
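The persistent variant of this life cycle can be sketched as below. The state attribute mirrors the life-cycle phases named above; the class and snapshot format are illustrative assumptions, and real systems capture the full execution environment rather than a few fields.

```python
# Life-cycle sketch (hypothetical names): a persistent agent suspends,
# captures its execution point, and resumes from it after migration.

class PersistentAgent:
    def __init__(self):
        self.state, self.step = "created", 0

    def run(self, steps):
        self.state = "running"
        self.step += steps

    def migrate(self):
        """Suspend and capture the point of execution for the next host."""
        self.state = "suspended"
        return {"step": self.step}

    def resume(self, snapshot):
        self.state, self.step = "running", snapshot["step"]

agent = PersistentAgent()
agent.run(5)
snapshot = agent.migrate()     # suspend on the first node
agent2 = PersistentAgent()     # a task-based agent would restart at step 0 ...
agent2.resume(snapshot)        # ... but a persistent one continues at step 5
```

A task-based agent would instead re-run `__init__` at each location, carrying along at most some chosen data, never the execution state.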
The computational model refers to the computational
capabilities of a mobile agent. The model defines the way in which the
agent is executed. Additionally, a set of primitives that can affect the agent's
execution is usually available; they include data manipulation services
and thread management.
The communication model determines the ability
of agents to communicate with other mobile or stationary agents, with hosts,
and with other service-providing distributed objects, e.g., CORBA or RMI
components.
A mobile agent will be exposed to many environments, so it may need several communication models.
The navigational model defines the migration
capabilities of an agent. An agent can use the default facilities and patterns
of the host, or it can implement its own. Several services are required for
such an implementation. The most important are the services to ship and
receive agents. A naming service can be used to discover and resolve migration
destinations. An agent should be able to determine whether a remote agent
environment can support its migration. It should also be able to suspend
its execution before any migration takes place.
The following areas may benefit from appropriate use of mobile agents.