An agent's structure can be viewed as:
- Agent = Architecture + Agent Program
- Architecture = the machinery on which the agent executes.
- Agent Program = an implementation of the agent function.
Different Forms of Agents
Based on their degree of perceived intelligence and capability, agents fall into four categories:
- Simple Reflex Agents
- Model Based Reflex Agents
- Goal Based Agents
- Utility Based agents
1. Simple Reflex Agents
They choose actions based only on the current percept, ignoring the rest of the percept history.
- They are rational only if a correct decision can be made on the basis of the current percept alone.
- Their environment must be fully observable.
Condition-Action Rule − It is a rule that maps a state (condition) to an action.
Example: An ATM system that dispenses cash when the entered PIN matches the given account number.
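The condition-action rule idea can be sketched as a small lookup over rules; the ATM percept fields and rule names below are illustrative assumptions, not a real ATM interface:

```python
# A minimal sketch of a simple reflex agent: a table of
# condition-action rules is scanned and the first matching
# rule's action is returned. Percept keys are illustrative.

def simple_reflex_agent(percept, rules):
    """Pick the action whose condition matches the current percept."""
    for condition, action in rules:
        if condition(percept):
            return action
    return "no-op"

# Condition-action rules for the ATM example:
atm_rules = [
    (lambda p: p["pin_ok"] and p["balance"] >= p["amount"], "dispense-cash"),
    (lambda p: not p["pin_ok"], "reject-card"),
]

print(simple_reflex_agent({"pin_ok": True, "balance": 500, "amount": 100},
                          atm_rules))  # -> dispense-cash
```

Note that the agent consults only the current percept; nothing about earlier transactions influences the decision.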
2. Model Based Reflex Agents
They use a model of the world to choose their actions. They maintain an internal state.
Model − The knowledge about how the things happen in the world.
Internal State − It is a representation of unobserved aspects of current state depending on percept history.
Updating the state requires the information about −
- How the world evolves independently of the agent.
- How the agent's actions affect the world.
Example: A car-driving agent that maintains its own internal state and then acts as the environment appears to it.
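The internal state lets the agent act sensibly even when the current percept is uninformative. A minimal sketch, assuming toy driving percepts and a trivial model:

```python
# A minimal sketch of a model-based reflex agent. The percept
# strings and the update rule ("model") are toy assumptions made
# for illustration.

class ModelBasedAgent:
    def __init__(self):
        # Internal state: belief about unobserved aspects of the world.
        self.state = {"car_ahead": False}

    def update_state(self, percept):
        # Model: how percepts reveal the world. If no percept arrives,
        # the previous belief is kept (the car may still be there).
        if percept is not None:
            self.state["car_ahead"] = (percept == "car-detected")

    def act(self, percept):
        self.update_state(percept)
        return "brake" if self.state["car_ahead"] else "accelerate"

agent = ModelBasedAgent()
print(agent.act("car-detected"))  # brake
print(agent.act(None))            # brake (remembered via internal state)
print(agent.act("road-clear"))    # accelerate
```

The second call shows the difference from a simple reflex agent: with no current percept, the agent still brakes because its internal state remembers the car ahead.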
3. Goal Based Agents
They choose their actions in order to achieve goals. The goal-based approach is more flexible than a reflex agent, since the knowledge supporting a decision is explicitly modeled and can therefore be modified.
Goal − It is the description of desirable situations.
Example: Searching for a solution to the 8-queens puzzle.
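The 8-queens example can be sketched as an explicit goal test driven by backtracking search; the agent does not react to percepts, it searches for any placement that satisfies the goal:

```python
# A minimal sketch of a goal-based agent for the N-queens puzzle:
# backtracking search over column choices, with an explicit goal
# test (all N queens placed without attacking each other).

def conflicts(cols, row, col):
    """True if a queen at (row, col) attacks an already-placed queen."""
    return any(c == col or abs(c - col) == abs(r - row)
               for r, c in enumerate(cols))

def solve_queens(n, cols=()):
    if len(cols) == n:          # goal test: desirable situation reached
        return list(cols)
    for col in range(n):
        if not conflicts(cols, len(cols), col):
            result = solve_queens(n, cols + (col,))
            if result:
                return result
    return None

print(solve_queens(8))  # one column index per row, e.g. [0, 4, 7, 5, 2, 6, 1, 3]
```

The goal ("8 non-attacking queens") is stated separately from the search mechanism, which is exactly what makes goal-based agents easy to modify: changing the goal test changes the behavior without rewriting the search.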
4. Utility Based Agents
They choose actions based on a preference (utility) for each state. Goals alone are inadequate when:
- There are conflicting goals, of which only a few can be achieved.
- Goals have some uncertainty of being achieved, and you need to weigh the likelihood of success against the importance of each goal.
Example: A military planning robot that recommends a certain plan of action to be taken.
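The trade-off between likelihood of success and importance is exactly an expected-utility calculation. A minimal sketch, where the plans, probabilities, and utility values are invented for illustration:

```python
# A minimal sketch of a utility-based agent: each action leads to
# outcomes with some probability and utility, and the agent picks
# the action with the highest expected utility. All numbers here
# are illustrative assumptions.

def expected_utility(action, outcomes):
    """outcomes maps action -> list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes[action])

def utility_based_agent(outcomes):
    return max(outcomes, key=lambda a: expected_utility(a, outcomes))

# Two conflicting plans: a risky one with a high payoff, a safer one.
plans = {
    "frontal-assault": [(0.3, 100), (0.7, -50)],  # EU = 30 - 35 = -5
    "flanking-move":   [(0.8, 60), (0.2, -10)],   # EU = 48 - 2 = 46
}
print(utility_based_agent(plans))  # flanking-move
```

A goal-based agent could only say whether each plan reaches the goal; the utility-based agent can prefer the plan whose risk-weighted value is higher.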
Properties of Environment
The environment has multifold properties −
- Discrete / Continuous − If there are a limited number of distinct, clearly defined states of the environment, the environment is discrete (for example, crossword, 8-queens); otherwise it is continuous (for example, driving, flight control).
- Observable / Partially Observable − If it is possible to determine the complete state of the environment at each time point from the percepts, it is observable (for example, image analysis, puzzle games); otherwise it is only partially observable (for example, poker, military planning).
- Static / Dynamic − If the environment does not change while an agent is acting, it is static (for example, the 8-queens puzzle); otherwise it is dynamic (for example, car driving, tutoring).
- Deterministic / Non-deterministic − If the next state of the environment is completely determined by the current state and the actions of the agent, the environment is deterministic (for example, image analysis); otherwise it is non-deterministic (for example, boat driving, car driving, flight control).
- Episodic / Non-episodic − In an episodic environment, each episode consists of the agent perceiving and then acting. The quality of its action depends only on the episode itself; subsequent episodes do not depend on the actions taken in previous episodes (for example, blood testing for a patient, card games). Episodic environments are much simpler because the agent does not need to think ahead. Non-episodic examples: a refinery controller, chess with a clock.