1. Introduction to agents
    1. What is an agent?
There is a clear need for tools that help users cope with the proliferation of information we are witnessing and that separate them from the underlying technology. Users should be able to focus on their jobs rather than on taming ever-evolving, heterogeneous environments. When advanced technology was accessible only to technically educated individuals, they could overcome the challenges that arose. Today, an average user is less prepared to do the same. Even for those who can handle the technology, processing the growing amounts of available data is physically impossible because of the time required. These problems are the main forces driving research on agents.

The term agent is highly overused. At one end of the spectrum, systems like SNMP agents are nothing more than servers providing data defined by Management Information Bases (MIBs) to their clients, the management applications. At the other end, there are expert systems with huge knowledge bases, which are also considered agents because of their intelligent-like behavior.

We will consider an agent to be a computational entity with the following characteristics:

Acting on behalf of others is an analogy to human agents who can book a trip, sell a house, or gather sensitive information on somebody else's behalf. A client delegates a task to an agent, which is to accomplish it without, or with a minimum of, further involvement. After receiving the details of the task, the agent acts autonomously, following certain algorithms.

Using their skills, agents proactively try to attain the goal defined by the assigned task. They can acquire their skills by being told (e.g., through trade courses) or through expertise (examples, cases, etc.).

Agents react to changes in the available data by modifying their plans. For example, if a new bargain vacation package becomes available, a travel agent may contact the client who (in the agent's beliefs) is interested in such a message. Agents acquire and modify their knowledge in response to experience and the exchange of information.

Agents communicate to share their knowledge and collaborate in attaining their goals. For example, a trip around the world that can be purchased on any continent is organized by a number of agents, each of whom takes care of the plans only on its own continent. A plan for the whole trip is put together by combining several sub-plans.

Very often, agents have to be mobile to achieve the goal. For example, a struggling Hollywood agent may visit several locations before suggesting a few of them to the director of a new blockbuster. Spies also have to move to get closer to the source of data, because neither the data nor the source can be had otherwise.
    1. Distributed AI

    2. Most of the research on the intelligence aspects of agents comes from Distributed Artificial Intelligence (DAI). Traditional Artificial Intelligence (AI) is concerned with discovering methodologies and technologies that address so-called hard problems. A hard problem is one that is too difficult to resolve using traditional, analytical means, or whose solution would take a prohibitive length of time to obtain or to execute. AI does not always strive to provide an exact solution to a given problem; very often, a solution that is good enough suffices. Usually, an evaluation function provides a measure of goodness.

      1. Distributed problem solving

      2. Distributed AI is an extension of AI ideas that applies to Multi-Agent Systems (MAS). Instead of one centralized and usually very large application that encodes the complete intelligence of the system, a number of relatively small systems, called agents, engage in a cooperative effort to resolve a problem. This does not imply that a big system is divided into smaller pieces. For example, several centralized applications, each capable of addressing certain aspects of a problem, can be tied together by a communication system. This allows them to exchange their viewpoints and come up with strategies to make progress or to combine the results into a solution. This kind of problem solving is called Distributed Problem Solving (DPS).

        Multi-Agent Systems come in two flavors. Cooperative Multi-Agent Systems (CMAS), which perform Cooperative Distributed Problem Solving, consist of agents that cooperate to attain a common goal.

        Self-Interested Multi-Agent Systems (SMAS) are societies of competitive agents, who are interested in attaining their individual goals. Antagonistic behavior does not preclude some degree of cooperation, because even self-interested agents may form alliances if that can help to satisfy their selfishness.

        To fulfill their purpose, member agents of a MAS have to be able to communicate, coordinate their individual behaviors and negotiate compromises with others.

      3. Inter-agent communication

      4. Agent Communication Languages (ACLs) are used for inter-agent communication. These languages are based on speech act theory, which originates in the philosophy of language. It is an attempt to formalize the ways humans use language to accomplish everyday tasks such as requests, orders, promises, etc.

        ACLs range from some form of primitive communication to elaborate standards. In primitive communication, there are only a few signals that agents send to one another. At the next level of complexity, agents may pass ad hoc messages. For example, in actor languages, agents called actors are completely reactive; i.e., they perform computations in response to received messages. The use of blackboards is also widespread. A blackboard provides a medium for exchanging data, which participating agents write on the blackboard. Any agent can access partial solutions and messages posted on the blackboard, so it may incorporate the views of others into its own. It may also ask the community for other data.
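The blackboard idea can be sketched in a few lines (the class and method names below are illustrative, not taken from any particular blackboard system):

```python
# A minimal blackboard sketch: a shared medium that any agent can write
# to and read from. Names are our own, for illustration only.
class Blackboard:
    def __init__(self):
        self.entries = []  # partial solutions and messages posted so far

    def post(self, agent, item):
        # Any participating agent may write a partial solution.
        self.entries.append((agent, item))

    def read(self):
        # Any agent may read everything posted and incorporate it.
        return list(self.entries)

bb = Blackboard()
bb.post("agent-A", "partial solution: x = 3")
bb.post("agent-B", "constraint: x < 5")
print(len(bb.read()))  # 2
```

In a real system, access to the blackboard is often mediated by a controlling agent, as discussed under coordination below.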

        Standard ACLs provide means that allow for communication between agents coming from various sources. Knowledge Query and Manipulation Language (KQML) is an evolving standard developed under the umbrella of the Knowledge Sharing Effort (KSE) funded by DARPA. KQML can be viewed at three levels. The content layer carries the actual message. The message layer comprises the performatives provided by the language, such as tell, reply, advertise, ask-if, etc. The communication layer defines the protocol for message delivery.
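The three layers can be illustrated with a simple message constructor. This is a sketch of the layering only; the performative names come from KQML, but the Python representation and field names are our own assumptions, not KQML syntax:

```python
# A sketch of a KQML-style message, showing which field belongs to
# which layer. The dict representation is illustrative only.
def make_message(performative, sender, receiver, content, language="KIF"):
    # Message layer: the performative expressing the speech act.
    assert performative in {"tell", "reply", "advertise", "ask-if"}
    return {
        "performative": performative,  # message layer
        "sender": sender,              # communication layer (routing)
        "receiver": receiver,          # communication layer (routing)
        "language": language,          # how the content is encoded
        "content": content,            # content layer: the actual message
    }

msg = make_message("ask-if", "buyer", "seller", "(available widget-42)")
```

Here the content is a KIF-like expression, as described next.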

        Knowledge Interchange Format (KIF) is a language for encoding knowledge. It can be used with KQML to format messages that are passed between agents.

        1. Cooperation
        Coordination is needed in Multi-Agent Systems to prevent chaos, satisfy global constraints, exploit distinctive expertise, and synchronize the individual behaviors of agents.

        Coordination can be ensured in a number of ways. In organizational coordination, the agents are placed into a certain organizational (e.g., hierarchical) structure that enforces certain communication patterns (e.g., a master-slave or parent-child relationship). Sometimes, a centralized coordinator is required to enforce the patterns. For example, access to a blackboard might be controlled by a supervising agent. A prime example of organizational coordination, multi-agent planning, is well rooted in traditional Artificial Intelligence. A number of agents attempt to construct a global problem-resolution plan. Individual plans are to avoid scenarios that would conflict with the plans of others while attempting to satisfy global constraints. In a centralized approach, this is achieved by combining a number of individual plans in a process coordinated by a supervisor. In a distributed approach, agents share their plans until all conflicts are removed from the individual plans.
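The distributed variant, in which agents revise shared plans until conflicts disappear, can be sketched on a toy example. Here each agent's "plan" is reduced to a desired time slot for a shared resource, and a conflicting agent simply delays; this is our simplification, not a general multi-agent planner:

```python
# A toy distributed plan-merging round: agents wanting the same slot
# for a shared resource delay until no two plans conflict.
def merge_plans(slots, limit=10):
    for _ in range(limit):
        seen = {}
        conflict = False
        for agent, slot in sorted(slots.items()):
            while slot in seen.values():   # slot already claimed: conflict
                slot += 1                  # revise the plan (delay by one)
                conflict = True
            seen[agent] = slot
        slots = seen
        if not conflict:                   # all conflicts removed
            break
    return slots

print(merge_plans({"a": 1, "b": 1, "c": 2}))  # {'a': 1, 'b': 2, 'c': 3}
```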

        Organizational coordination does not scale well and may involve bottlenecks and central points of failure.

        An alternative is contracting. In this scheme, agents use the Contract-Net Protocol (CNP) to establish contract relationships with other agents. Agents can act as both managers and contractors. When an agent has a task that it is not capable of achieving itself, it advertises the job to other agents. Agents that can do the job submit bids. The manager uses a certain criterion to select one agent out of all the bidders. The selected agent becomes a contractor after signing the contract. A contractor may subcontract its tasks to others.
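One round of the announce-bid-award cycle can be sketched as follows. The function names and the lowest-cost selection criterion are our assumptions for illustration; CNP itself does not prescribe a particular criterion:

```python
# A toy Contract-Net round: the manager advertises a task, collects
# bids, and awards the contract to one bidder (cheapest, here).
def contract_net(task, contractors):
    # 1. Announce: every agent inspects the task and may submit a bid.
    bids = []
    for name, bid_fn in contractors.items():
        cost = bid_fn(task)            # None means "cannot do the job"
        if cost is not None:
            bids.append((name, cost))
    if not bids:
        return None                    # nobody bid on the task
    # 2. Award: the manager's criterion picks one bidder.
    winner, _ = min(bids, key=lambda b: b[1])
    return winner                      # this agent becomes the contractor

contractors = {
    "alpha": lambda task: 10 if task == "paint" else None,
    "beta":  lambda task: 7 if task == "paint" else None,
}
print(contract_net("paint", contractors))  # beta
```

A selected contractor could run the same cycle itself to subcontract parts of the task.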

        Although communication intensive, contracting is very flexible, as agents can be added at will, provided they adhere to the rules of the CNP. However, an issue of trust comes into play. The manager has to have a certain degree of trust in the ability of the bidders to fulfill the contract. The level of trust can be determined, for example, by past experience. If a painter did not do a good job, your house will be painted by somebody else next time, unless the price is so good that it counterbalances a few imperfections.

        Associated with trust are social rules that define the common beliefs of agents. An agent may base its actions upon its understanding of the current situation, according to the data describing the situation and the social rules of the system. For example, a driver crossing an intersection on a green light believes that the cars coming from both sides see red lights and stop. The agent trusts that the other agents play by the same rules.

      5. Negotiation
      If the contracting mechanism allows for bargaining between the manager and the bidders, then the communication process becomes a negotiation. For example, an agent can use a strategy of constraint relaxation to submit bids that become increasingly attractive to the manager.

      Formally, a negotiation is a process of resolving conflicts and reaching a consensus through inter-agent communication. This process can have a cooperative or competitive nature, depending on whether the agents try to attain one global goal or to satisfy their own goals.

      Every action of an agent is based on its beliefs, desires and intentions. The BDI model implies that an agent has a certain set of data that it considers to be true (beliefs). The efforts that the agent undertakes to attain its goals are consequences of the agent's desires. Desires can be built in by the designer of an agent, or they can be generated in response to changes in the environment or interactions with other agents. While desires describe potential courses of action, actual plans for the future are represented by the agent's intentions.

      In the BDI architecture, a special process, the interpreter, controls the behavior of an agent. It updates the agent's beliefs from observations made in the real world, generates desires in response to changes in those beliefs, and selects certain desires, which become the agent's intentions for future endeavors.
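One cycle of such an interpreter can be sketched schematically. The update and selection rules here are placeholders passed in as functions; a real BDI interpreter would use far richer belief-revision and deliberation machinery:

```python
# A schematic single step of a BDI interpreter: observe, revise
# beliefs, generate desires, and commit to some of them as intentions.
def bdi_step(beliefs, observe, generate_desires, select_intentions):
    beliefs = {**beliefs, **observe()}       # update beliefs from the world
    desires = generate_desires(beliefs)      # options opened by new beliefs
    intentions = select_intentions(desires)  # commit to selected desires
    return beliefs, intentions

beliefs, intentions = bdi_step(
    beliefs={"fuel": "low"},
    observe=lambda: {"station_nearby": True},
    generate_desires=lambda b: ["refuel"] if b.get("fuel") == "low" else [],
    select_intentions=lambda ds: ds[:1],     # adopt the first desire
)
print(intentions)  # ['refuel']
```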

      A plan library is a repository of plans that are available to the agent. An agent receives input from the environment through receptors and acts on the environment by issuing commands to its effectors (manipulators).

    3. Categories of agents

    4. Agents can be categorized along several lines. With respect to the task that an agent performs, we can differentiate between user agents and service agents. User agents are owned by a user and attempt to satisfy the user's needs. An example of a user agent is an interface agent, which we will describe in the next section. Service agents provide certain services to the general public or are part of the infrastructure that supports the provision of services. Service agents are not owned by any particular user.

      Location is another line of categorization. Agents can be stationary or mobile. A stationary agent runs on a certain node and fulfills its purpose either locally or through communication with others. In contrast, mobile agents are capable of migrating from one location to another and executing within various environments. They have direct access to data in many locations.

      With respect to the source of intelligence, agents can be programmable or learning. A programmable agent obtains all of its desires from explicit instructions provided by its creator. A learning, or adaptable, agent can acquire some of its skills. For example, a user agent may be designed to build a profile of the user from interactions with the user. An agent may also rebuild the library of its plans if interactions with the environment justify modifications. A very popular and powerful strategy is reinforcement learning. In this strategy, an agent receives feedback on the results of a performed task. If the feedback is positive, then the way the task has been performed is reinforced; i.e., it will be more likely to be used in the future. A number of adaptation techniques can be used, ranging from more or less formal methods based on symbolic learning to soft approaches based on neural networks, genetic algorithms, fuzzy sets, etc.
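The reinforcement idea can be illustrated with a minimal preference update. This is a deliberately simplified sketch (the update rule and names are our own), not a full reinforcement-learning algorithm:

```python
# A minimal sketch of reinforcement: positive feedback on a way of
# performing a task raises its score, making it more likely to be
# chosen again in the future.
def reinforce(preferences, action, feedback, rate=0.5):
    # Move the action's score toward the feedback signal (+1 good, -1 bad).
    preferences = dict(preferences)
    preferences[action] += rate * (feedback - preferences[action])
    return preferences

prefs = {"plan_a": 0.0, "plan_b": 0.0}
prefs = reinforce(prefs, "plan_a", +1)   # the task done via plan_a went well
best = max(prefs, key=prefs.get)
print(best)  # plan_a
```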

    5. Interface agents

    6. An interface agent is a user agent that acts as an intermediary between the user and the rest of the networked world. Main applications of interface agents are in data filtering, information retrieval and personal assistance.

      Data filtering agents use the user's instructions and/or user and device profiles to reject incoming information that, in their opinion, does not interest the user. For example, a neural network has been used to determine which news postings should be presented to the user. Outside sources feed vast amounts of data to the user, but only some of it passes the interface agent.
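A filtering agent's decision can be sketched as a scoring function over a profile of interests. Note that this keyword score stands in for the neural network mentioned above; the threshold and interest set are made-up illustrations:

```python
# A toy filtering rule: a posting passes if it mentions enough of the
# user's interests. A real filter could use a learned model instead.
def passes_filter(posting, interests, threshold=1):
    score = sum(1 for word in posting.lower().split() if word in interests)
    return score >= threshold

interests = {"agents", "mobility"}
print(passes_filter("New results on mobile agents", interests))  # True
print(passes_filter("Celebrity gossip roundup", interests))      # False
```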

      In contrast to filtering agents, information retrieval agents are proactive. They also limit the amount of data with which the user is presented, but they achieve this goal by requesting information in which, according to their beliefs, their owner is interested. They can be stationary or mobile. A stationary information retrieval agent communicates with remote sources of information. If interesting data is reported, then the agent organizes its delivery. Such an agent runs on the user's workstation at all times. A mobile agent can be used instead to move between data sources and check the data locally. While away, a mobile agent does not consume local resources.

      A Personal Digital Assistant (PDA) is an agent that helps its owner with routine tasks. In addition to a user profile that includes the user's ability to perform tasks in a certain domain, a PDA has knowledge of that domain and expertise in using it. The expertise can be hard-coded by the creator of the agent, or it can be acquired. For example, a novice agent that is exposed to several users with various degrees of experience may use the knowledge collected from more experienced users to help others. Generalization techniques can be used, which allow the user to be creative. For example, an agent may notice that a user performs certain tasks that are not necessary to achieve the goal.

    7. Mobile Agents

    8. A mobile agent is an agent that can move between several locations in a heterogeneous environment. The use of the term agent implies that a mobile agent is characterized by the same features as a stationary agent. In addition to the basic agent model, a mobile agent has to comprise a life-cycle model, a computational model, a security model, a communication model and a navigation model.

      The agent model is the same as for the agents that we discussed so far.

      1. Lifecycle model

      2. The lifecycle model describes how the agent is created, initiated, suspended, restarted, stopped, deleted, etc. Usually, an attribute indicates the state of the agent, which describes a particular phase in its life. Two types of agents can be distinguished with respect to the lifecycle. Persistent agents are capable of saving their execution environment, so it can be restored in another location after the agent has migrated. Such agents suspend their execution on one node and restart from exactly the same point on another node. Task-based agents restart from the same initialization point at each visited location. They can still arrange for certain data to be transferred along with them, but the execution environment is lost.
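The state attribute and its allowed phase transitions can be sketched as a small state machine. The state names and transition table below are illustrative, not taken from any mobile-agent standard:

```python
# A sketch of an agent lifecycle: a state attribute plus the legal
# transitions between phases. State names are illustrative only.
TRANSITIONS = {
    "created":   {"initiated"},
    "initiated": {"running"},
    "running":   {"suspended", "stopped"},
    "suspended": {"running"},    # e.g., restored after migration
    "stopped":   {"deleted"},
}

class AgentLifecycle:
    def __init__(self):
        self.state = "created"   # the attribute recording the phase

    def advance(self, new_state):
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

a = AgentLifecycle()
for s in ("initiated", "running", "suspended", "running", "stopped"):
    a.advance(s)
print(a.state)  # stopped
```

For a persistent agent, the suspend/restore pair would also save and reload the execution environment; a task-based agent would simply restart from "initiated" at each location.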

      3. Computational model

      4. The computational model refers to the computational capabilities of a mobile agent. The model defines the way in which the agent is executed. Additionally, a set of primitives that can affect the agent's execution is usually available. These include data manipulation services and thread management.

      5. Security model
Allowing alien pieces of code to execute carries certain security risks, both for the visiting agent and for the host. The security model therefore provides protection policies for each side. A mobile agent has to be protected from being altered at the visited host, and it must also be shielded from being intercepted while migrating. An altered mobile agent may constitute a security risk to the visited nodes and to the network. Therefore, security mechanisms that prevent unauthorized access to host resources are in place. Usually, agents gain access to system resources through specialized interface objects that may implement access control lists.
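Such an interface object can be sketched as follows (the class, resource, and permission names are hypothetical, chosen only to illustrate the access-control-list idea):

```python
# A sketch of an interface object guarding a host resource with an
# access control list: agent id -> set of permitted operations.
class FileInterface:
    def __init__(self, acl):
        self.acl = acl

    def read(self, agent_id):
        # The visiting agent never touches the resource directly; the
        # interface object checks its permissions first.
        if "read" not in self.acl.get(agent_id, set()):
            raise PermissionError(f"{agent_id} may not read")
        return "file contents"

iface = FileInterface({"trusted-agent": {"read"}})
print(iface.read("trusted-agent"))  # file contents
```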
      1. Communication model

      2. The communication model determines the ability of agents to communicate with other mobile or stationary agents, with hosts, and with other service-providing distributed objects, e.g., CORBA or RMI components.

        A mobile agent will be exposed to many environments, so it may need several communication models.

      3. Navigation model

      4. The navigation model defines the migration capabilities of an agent. An agent can use the default facilities and patterns of the host, or it can implement its own. Several services are required for such an implementation. The most important are the services to ship and receive agents. A naming service can be used to discover and resolve migration destinations. An agent should be able to determine whether the remote agent environment can support its migration. It should also be able to suspend its execution before any migration takes place.
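The pre-migration checks described above can be sketched as a single function. The function and data-structure names are our assumptions; real systems expose these steps through their own navigation APIs:

```python
# A sketch of pre-migration checks: resolve the destination through a
# naming service, then confirm the remote environment supports every
# capability the agent needs before suspending and shipping it.
def can_migrate(agent_needs, destination, naming_service, environments):
    host = naming_service.get(destination)      # resolve the destination
    if host is None:
        return False                            # unknown destination
    supported = environments.get(host, set())   # remote capabilities
    return agent_needs <= supported             # all needs must be met

naming_service = {"printer-hub": "host-7"}
environments = {"host-7": {"python", "net-io"}}
print(can_migrate({"python"}, "printer-hub", naming_service, environments))  # True
```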

      5. Advantages of using mobile agents
In certain cases, the use of mobile agents may have advantages over other implementations. This does not mean that other technologies (like remote method invocation) cannot be used, because virtually all problems that can be solved with mobile agents can also be solved with other technologies. However, the traditional solutions might be less efficient, more difficult to deploy, or awkward.

The following areas may benefit from appropriate use of mobile agents.