AI Agents: Forms and Components

1. Simple Reflex Agent
Description: Acts only on the current percept; does not consider the history of percepts.
Works with: Condition-action rules (IF condition THEN action).
Limitations: Cannot handle complex environments or situations that require memory.
Example: A vacuum cleaner that turns left when it senses a wall in front of it: IF wall-detected THEN turn-left.

2. Model-Based Reflex Agent
Description: Maintains an internal state (a model of the world) to keep track of past percepts.
Advantages: More capable than a simple reflex agent; can handle partially observable environments.
Example: A thermostat that remembers whether it recently turned the heater on, and uses that memory to avoid rapid on/off switching.

3. Goal-Based Agent
Description: Uses goal information to decide which action to take; the agent must know which actions will lead it toward its goal.
Example: A game-playing agent that tries new strategies or moves to find a sequence that reaches a winning (goal) state.

4. Utility-Based Agent
Description: Chooses actions that maximize its utility function, a numerical measure of its "happiness" or performance.
Example: A robot navigating a complex environment, choosing paths that minimize travel time and energy consumption. (A short code sketch contrasting these agent types appears at the end of this section.)

Model-Based vs. Utility-Based Agent

Feature                 | Model-Based Agent                                          | Utility-Based Agent
Decision Basis          | Understanding the environment and its dynamics.           | Maximizing utility (a numerical measure of desirability).
Internal Representation | Internal model of the world.                               | Utility function.
Action Selection        | Predicts outcomes using the model and chooses the best.   | Evaluates outcomes using the utility function and chooses the best.
Complexity Handling     | Good for complex, changing environments if the model is accurate. | Good for complex environments with multiple objectives and trade-offs.
Maintenance             | Can be computationally expensive and challenging to maintain. | Requires designing a good utility function.
Example                 | Robot navigation.                                          | Recommendation systems, resource allocation.

Learning Agents and Their Components

A learning agent is an agent that improves its behavior and decision-making ability based on past experiences or feedback from the environment.

[Diagram: learning-agent architecture — percepts flow from the environment through sensors to the performance element; the learning element exchanges knowledge and proposed changes with the performance element; the problem generator suggests exploratory actions; actuators send actions back to the environment.]

Components of a Learning Agent:

Learning Element:
Function: Makes improvements to the agent's performance based on feedback or experience.
Role: Learns which actions to take and how to adjust strategies.
Example: A machine learning algorithm that updates the weights of a neural network.

Performance Element:
Function: Chooses actions based on the agent's current knowledge.
Role: The part of the agent that actually acts in the environment.
Example: A robot's motion controller that moves based on the robot's current understanding of the environment.

Critic:
Function: Evaluates the performance of the agent.
Role: Provides feedback (a score or signal) to the learning element, indicating how well the agent is doing.
Example: A scoring system that tells the agent whether its decision led to success or failure.

Problem Generator:
Function: Suggests exploratory actions that lead to new and informative experiences.
Role: Encourages the agent to explore parts of the environment to find new ways of improving performance. (A sketch wiring these components together follows below.)
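To make the agent types above concrete, here is a minimal Python sketch of a simple reflex rule, a model-based thermostat, and a utility-based path chooser. Every name, threshold, and weight in it (simple_reflex_vacuum, ThermostatAgent, the 20.0-degree setpoint, the utility weights) is an illustrative assumption, not part of any standard library.

```python
# Toy sketches of three agent types; all names and numbers are invented.

# 1. Simple reflex agent: a condition-action rule on the current percept only.
def simple_reflex_vacuum(percept):
    if percept == "wall-detected":   # IF wall-detected THEN turn-left
        return "turn-left"
    return "move-forward"

# 2. Model-based reflex agent: internal state tracks past percepts.
class ThermostatAgent:
    def __init__(self, setpoint=20.0, min_hold_cycles=3):
        self.setpoint = setpoint
        self.min_hold_cycles = min_hold_cycles
        self.heater_on = False           # internal model of recent history
        self.cycles_since_switch = 0

    def act(self, temperature):
        self.cycles_since_switch += 1
        want_heat = temperature < self.setpoint
        # The remembered state prevents rapid on/off switching.
        if want_heat != self.heater_on and self.cycles_since_switch >= self.min_hold_cycles:
            self.heater_on = want_heat
            self.cycles_since_switch = 0
        return "heater-on" if self.heater_on else "heater-off"

# 3. Utility-based agent: score each candidate outcome, pick the best.
def choose_path(paths):
    def utility(path):                   # arbitrary trade-off weights
        return -(1.0 * path["time"] + 0.5 * path["energy"])
    return max(paths, key=utility)

paths = [
    {"name": "short-but-steep", "time": 10, "energy": 20},
    {"name": "long-but-flat",   "time": 12, "energy": 4},
]
print(simple_reflex_vacuum("wall-detected"))   # turn-left
print(choose_path(paths)["name"])              # long-but-flat
```

Note how the three styles differ only in what they consult before acting: the current percept, remembered state, or a numeric utility over outcomes.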
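The learning-agent loop can likewise be sketched by wiring the four components together. The toy two-action environment, its reward values, and the class names below are assumptions made up for illustration; a real learning element would typically be a full learning algorithm (e.g., gradient updates to neural-network weights, as the example above suggests).

```python
import random

# Skeleton of the four learning-agent components; the environment is a
# made-up two-action "bandit", not a standard benchmark or API.

class PerformanceElement:
    """Chooses actions based on the agent's current knowledge."""
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}   # estimated value per action

    def choose(self):
        return max(self.values, key=self.values.get)

class Critic:
    """Evaluates outcomes and turns them into a feedback signal."""
    def evaluate(self, reward):
        return reward          # here the raw reward is the feedback

class LearningElement:
    """Improves the performance element using the critic's feedback."""
    def __init__(self, lr=0.1):
        self.lr = lr

    def update(self, performance, action, feedback):
        v = performance.values[action]
        performance.values[action] = v + self.lr * (feedback - v)

class ProblemGenerator:
    """Occasionally suggests exploratory actions for new experiences."""
    def suggest(self, actions, explore_prob=0.2):
        return random.choice(actions) if random.random() < explore_prob else None

def run(episodes=200):
    actions = ["left", "right"]
    env_reward = {"left": 0.2, "right": 0.8}      # hidden from the agent
    perf, critic = PerformanceElement(actions), Critic()
    learner, explorer = LearningElement(), ProblemGenerator()
    for _ in range(episodes):
        action = explorer.suggest(actions) or perf.choose()
        reward = env_reward[action] + random.gauss(0, 0.1)
        learner.update(perf, action, critic.evaluate(reward))
    return perf.values

print(run())   # "right" should end up with the higher estimated value
```

Without the problem generator the agent would greedily repeat its first tied choice and might never discover that "right" pays better; exploration supplies the informative experiences the learning element needs.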
Problem-Solving Agent

A problem-solving agent is a type of intelligent agent designed to find a sequence of actions that leads from a given start state to a desired goal state, using a well-defined problem-solving process.

Steps of a Problem-Solving Agent:
Goal Formulation: Define the goal the agent wants to achieve.
Problem Formulation: Define the problem in terms of the initial state, actions, transition model, and goal test.
Search for Solution: Use a search algorithm to find a path from the initial state to the goal state.
Execution of Solution: Carry out the planned sequence of actions to reach the goal.

Example: Route-finding problem (map navigation).

Agent Behaviour in the Route-Finding Problem:
Goal Formulation: Reach City B from City A.
Problem Formulation: Define the map and the distances between cities.
Search: Use a search algorithm (e.g., Dijkstra's or A*) to find the shortest path.
Execution: Follow the best route step by step. (A sketch of the search step follows below.)
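As a sketch of the search step, here is Dijkstra's algorithm (one of the algorithms named above) applied to a small city graph. The cities and distances are invented purely for illustration.

```python
import heapq

# Made-up map: each city maps to its neighbors and road distances.
graph = {
    "A": {"B": 4, "C": 2},
    "B": {"C": 5, "D": 10},
    "C": {"D": 3},
    "D": {},
}

def shortest_path(graph, start, goal):
    # Priority queue of (cost so far, current city, path taken).
    frontier = [(0, start, [start])]
    visited = set()
    while frontier:
        cost, city, path = heapq.heappop(frontier)
        if city == goal:
            return cost, path            # goal test succeeds
        if city in visited:
            continue
        visited.add(city)
        for neighbor, dist in graph[city].items():  # transition model
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + dist, neighbor, path + [neighbor]))
    return None                           # no route exists

print(shortest_path(graph, "A", "D"))    # (5, ['A', 'C', 'D'])
```

The returned path is the solution the agent then executes step by step; swapping in A* would only add a heuristic estimate of the remaining distance to the priority used by the queue.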