
Agents & State Spaces


State

  • Formally, state is a function of the history: S_t = f(H_t)
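For context, a common convention (e.g. David Silver's RL lecture notes; assumed here, not stated on this page) defines the history as the full sequence of observations, rewards, and actions seen up to time t, and the state as any function of it:

```latex
% History up to time t, then state as a function of it
H_t = O_1, R_1, A_1, \ldots, A_{t-1}, O_t, R_t \qquad S_t = f(H_t)
```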

Three Types

  • Environment

    • The environment state S^e_t is the environment's private state representation
    • i.e. whatever data the environment uses to pick the next observation/reward to emit
    • The environment state is not usually visible to the agent
    • Even if S^e_t is visible, it may contain irrelevant information
  • Agent

    • The agent state S^a_t is the agent's internal state representation
    • i.e. whatever data the agent uses to pick the next action
    • i.e. it is the information used by RL algorithms
    • It can be any function of history:
      • S^a_t = f(H_t)
  • Information (Markov State)

    • A Markov state contains all useful information from the history; once the current state is known, the history can be thrown away (see below)
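For reference, the standard Markov property (assumed standard notation, not spelled out on this page): the future is independent of the past given the present state.

```latex
% A state S_t is Markov if and only if
P[S_{t+1} \mid S_t] = P[S_{t+1} \mid S_1, \ldots, S_t]
```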

Needed to pass the Turing test:

  • NLP
  • Knowledge Representation
    • state space
    • blackboard (ETS)
    • working memory (short-term, ETS)
    • long-term memory (mnesia, db)
  • Automated Reasoning
  • Machine Learning

Agents

  1. What is an Agent? An agent perceives its environment through sensors and executes actions using actuators.
  2. What is a Rational Agent? One that always selects an action so as to maximize its (expected) performance measure, given the percept sequence it has received and the knowledge it possesses.
  • An ideal agent always chooses the action which maximizes its expected performance, given its percept sequence so far.

  • An autonomous agent uses its own experience rather than knowledge of the environment built in by the designer.

  • The most challenging environments are partially observable, stochastic, sequential, dynamic, continuous, and multi-agent.

Agent Architectures

Reflex Agents (immediate response to percepts): Percept Based

  • Efficient
  • No internal representation for reasoning or inference
  • No strategic planning or learning
  • Percept-based agents are not well suited to multiple, opposing goals (see the sketch below)
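A minimal sketch of a percept-based (simple reflex) agent, assuming made-up percept and action names: the action depends only on the current percept, matched against condition-action rules.

```python
# Simple reflex agent: maps the current percept directly to an action
# via condition-action rules. No memory, no planning. (Illustrative only;
# the percepts and actions here are made up.)

RULES = {
    "dirty": "suck",
    "bump": "turn_left",
    "clear": "move_forward",
}

def simple_reflex_agent(percept: str) -> str:
    """Return the action triggered by the current percept alone."""
    return RULES.get(percept, "noop")

print(simple_reflex_agent("dirty"))  # -> "suck"
```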

State Based (more knowledge, deliberation of percepts)

  • Information comes from sensors as percepts
  • Percepts update the agent's internal state of the world
  • Based on that state of the world and its knowledge (memory), the agent triggers actions through effectors (see the sketch below)
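A minimal sketch of a state-based agent, assuming made-up percepts and a toy world model; unlike the reflex agent above, it keeps internal state that persists across percepts.

```python
# State-based agent: keeps an internal state that is updated from each
# percept, then chooses an action from that state. (Names are illustrative.)

class StateBasedAgent:
    def __init__(self):
        self.state = {"blocked": False}

    def update_state(self, percept: str) -> None:
        # Fold the new percept into the internal world model.
        if percept == "bump":
            self.state["blocked"] = True
        elif percept == "clear":
            self.state["blocked"] = False

    def act(self, percept: str) -> str:
        self.update_state(percept)
        return "turn_left" if self.state["blocked"] else "move_forward"

agent = StateBasedAgent()
print(agent.act("bump"))   # -> "turn_left"
print(agent.act("clear"))  # -> "move_forward"
```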

Goal Based Agents (act to achieve goals)

  • Information comes from sensors as percepts
  • Percepts update the agent's internal state of the world
  • Based on the state of the world and its knowledge (memory), the agent chooses actions and carries them out through effectors
  • Goal Formulation
    • The agent's actions depend on its goals
    • Goal formulation based on the current situation is a way of solving many problems, and search is a universal problem-solving mechanism in AI (see the State Space Search section below)
    • The sequence of steps required to solve a problem is not known a priori and must be determined by a systematic exploration of the alternatives.

Utility Based (several goals, some with preference, determines utility of goals to choose from, maximize utility)

  • A more general framework
  • Different preferences for different goals
  • A utility function maps a state or sequence of states to a real-valued utility
  • The agent acts so as to maximize expected utility (see the sketch below)
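A minimal sketch of expected-utility maximization; the actions, outcome probabilities, and utility values below are assumptions for illustration.

```python
# Utility-based agent: for each action, sum utility(outcome) * P(outcome | action)
# and pick the action with the highest expected utility. (Illustrative values.)

TRANSITIONS = {                       # P(outcome | action)
    "go_left":  {"goal": 0.7, "trap": 0.3},
    "go_right": {"goal": 0.4, "trap": 0.6},
}
UTILITY = {"goal": 10.0, "trap": -5.0}

def expected_utility(action: str) -> float:
    return sum(p * UTILITY[outcome] for outcome, p in TRANSITIONS[action].items())

def choose_action() -> str:
    return max(TRANSITIONS, key=expected_utility)

print(choose_action())  # -> "go_left" (expected utility 5.5 vs 1.0)
```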

Learning Agents

  • learning allows an agent to operate in initially unknown environments
  • The learning element modifies the performance element
  • Learning is required for true autonomy

State Space

World State - includes every last detail of the environment

Search Planning Problem State

Problem = Pathing (find a path for Pac-Man)

Problem = Eat-all-dots (eat all the dots)

  • Think of the state space as containing only the things that vary; things that don't change don't belong in the state space.
  • The state space should contain all requisite information (nothing outside the state space, e.g. in the world state) required for the goal test and for defining the successor function
  • The state space may be explicitly represented, where all known nodes in the system are represented
  • Typically it is implicitly represented and generated when required
  • The agent knows
    • the initial state
    • the operators (actions), which compute the successors of a node
  • An operator is a function which expands a node, computing its successor node(s) (see the sketch below).
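A minimal sketch of an implicitly represented state space for a Pac-Man-style pathing problem: states are (x, y) positions and the operator/successor function generates neighbor states on demand instead of enumerating the whole graph. The grid, walls, and names are assumptions for illustration.

```python
# Implicit state space: states are (x, y) grid positions; successors are
# generated on demand by applying operators (moves), never stored up front.

WALLS = {(1, 1)}                       # illustrative 3x3 grid with one wall
WIDTH, HEIGHT = 3, 3
MOVES = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

def successors(state):
    """Yield (action, next_state, step_cost) triples for a state."""
    x, y = state
    for action, (dx, dy) in MOVES.items():
        nx, ny = x + dx, y + dy
        if 0 <= nx < WIDTH and 0 <= ny < HEIGHT and (nx, ny) not in WALLS:
            yield action, (nx, ny), 1

print(list(successors((0, 0))))  # -> [('N', (0, 1), 1), ('E', (1, 0), 1)]
```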

State Space Search

The frontier (fringe) is the set of nodes available for expansion; at the beginning of the search it contains only the initial state.

  • A state space is a graph (V, E)

    • V is a set of nodes
    • E is a set of edges
  • Each Edge has a fixed, positive cost

  • Each Node is a data structure:

    • A state description
    • The parent of the node
    • The depth of the node
    • The operator that generated this node
    • The cost of this path (sum of operator costs)
  • Implicit state space: nodes are typically generated on demand rather than stored up front (a sketch of the node data structure follows below)
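A minimal sketch of the node data structure described above as a Python dataclass; the field and helper names are assumptions.

```python
# Search node: wraps a state with the bookkeeping fields listed above
# (parent, depth, generating operator, accumulated path cost).

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                        # state description
    parent: Optional["Node"] = None   # node this one was expanded from
    operator: Optional[str] = None    # operator (action) that generated it
    depth: int = 0                    # number of edges from the root
    path_cost: float = 0.0            # sum of operator costs from the root

def child_node(parent: Node, operator: str, state: Any, step_cost: float) -> Node:
    """Build a successor node from a parent, an operator, and its cost."""
    return Node(state, parent, operator, parent.depth + 1, parent.path_cost + step_cost)
```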

Basic Search Key Issues:

  • search tree may be unbounded
  • Return a path, a node, or something else?
  • How are merge (combining new nodes into the frontier) and select (frontier prioritization) done?
    • Is graph weighted or unweighted?
    • How much is known about the quality of intermediate states?
    • Is the aim to find a minimal cost path or any path as soon as possible?

Search Strategies

Evaluation criteria: Correctness/Completeness, Optimality, Search Cost (Time/Space). A sketch of the blind strategies follows the list below.

  • Blind Search (without Heuristics)
    • Depth First Search
    • Breadth First Search
    • Iterative Deepening Search
    • Iterative Broadening Search
  • Informed Search (Heuristic Information)
  • Constraint Satisfaction
  • Adversarial Search (two-person games)
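A minimal sketch of how the blind strategies differ: breadth-first and depth-first search are the same loop, distinguished only by which end of the frontier the next candidate is taken from (FIFO vs LIFO); iterative deepening simply reruns a depth-limited version with growing limits. The tiny graph and names are assumptions for illustration.

```python
# Generic (blind) search: BFS and DFS differ only in which end of the
# frontier the next node is taken from. (Graph and names are illustrative.)

from collections import deque

GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def search(start, goal, strategy="bfs"):
    frontier = deque([[start]])            # frontier holds whole paths here
    explored = set()
    while frontier:
        # FIFO -> breadth-first; LIFO -> depth-first
        path = frontier.popleft() if strategy == "bfs" else frontier.pop()
        state = path[-1]
        if state == goal:
            return path
        if state in explored:
            continue
        explored.add(state)
        for nxt in GRAPH[state]:
            frontier.append(path + [nxt])
    return None

print(search("A", "D", "bfs"))  # -> ['A', 'B', 'D']
print(search("A", "D", "dfs"))  # -> ['A', 'C', 'D']
```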

Input:

  • set of states
  • operators, [and costs]
  • start state
  • goal state [test]

Output:

  • Path: start -> a state satisfying goal test [May require shortest path]
  • Reduction

Many problems can be represented as a set of states and a set of rules describing how one state is transformed into another; the agent must choose a sequence of actions to reach a goal.

  1. Each state is an abstract representation of the agent's environment; it denotes a configuration of the agent.
  2. INITIAL STATE: the description of the starting configuration of the agent
  3. An ACTION/OPERATOR takes the agent from one state to another state. A state can have a number of successor states.
  4. PLAN: a sequence of actions
  5. GOAL: a description of a set of desirable states of the world. Goal states are often specified by a goal test which any goal state must satisfy.
  6. PATH COST: a function from paths to positive numbers; usually the path cost is the sum of the step costs
  7. PROBLEM FORMULATION: choosing a relevant set of states to consider, and a feasible set of operators for moving from one state to another (see the sketch below).
  8. SEARCH: the process of imagining sequences of operators applied to the initial state and checking which sequence reaches a goal state.
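A minimal sketch of this problem formulation as a Python class bundling the standard pieces (initial state, operators/successor function, goal test, step cost); the route-finding map, class name, and method names are assumptions for illustration, not from this page.

```python
# Problem formulation: initial state, operators (successor function),
# goal test, and step cost, bundled together. (Route map is illustrative.)

class RouteProblem:
    def __init__(self, initial, goal, roads):
        self.initial = initial          # INITIAL STATE
        self.goal = goal                # used by the GOAL test
        self.roads = roads              # {city: {neighbor: step_cost}}

    def actions(self, state):
        """Operators applicable in a state (here: drivable neighbors)."""
        return list(self.roads.get(state, {}))

    def result(self, state, action):
        """ACTION/OPERATOR: applying an action yields the successor state."""
        return action

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, action):
        return self.roads[state][action]

# PATH COST of a PLAN = sum of step costs along the way
problem = RouteProblem("Home", "Work", {"Home": {"Mall": 4, "Work": 9}, "Mall": {"Work": 3}})
plan = ["Mall", "Work"]
print(sum(problem.step_cost(s, a) for s, a in zip(["Home"] + plan[:-1], plan)))  # -> 7
```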

State Space Search Lingo

  • S: The full set of states

  • S_0: the initial state, a member of S

  • A: S -> S: the set of operators

  • G: the set of final states

  • SEARCH PROBLEM: Find a sequence of actions which transforms the agent from the initial state to a goal state.

    • formulate a problem as a state space search by showing the legal problem states, the legal operators, and the initial and goal states
    • a state is defined by specifying the values of all attributes of interest in the world
    • An operator changes one state into another; it has a precondition, which is the value of certain attributes prior to the application of the operator, and a set of effects, which are the attributes altered by the operator (see the sketch below)
    • The initial state is where you start
    • the goal state is a partial description of the solution
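A minimal sketch of an operator with a precondition and a set of effects, as described in the bullets above; the blocks-world-style attributes and names are assumptions for illustration.

```python
# An operator: a precondition over attribute values plus effects that
# alter those attributes. States are dicts of attribute -> value.

def pick_up_block(state):
    """Operator 'pick up block': check the precondition, then apply effects."""
    # Precondition: the hand is empty and the block is on the table
    if not (state["hand_empty"] and state["block_on_table"]):
        return None                    # operator not applicable in this state
    # Effects: the attributes altered by the operator
    new_state = dict(state)
    new_state["hand_empty"] = False
    new_state["block_on_table"] = False
    new_state["holding_block"] = True
    return new_state

initial = {"hand_empty": True, "block_on_table": True, "holding_block": False}
print(pick_up_block(initial))
# -> {'hand_empty': False, 'block_on_table': False, 'holding_block': True}
```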

Rationality in an agent must take into account the agent's limitations

  1. Percept Sequence, Background Knowledge, Feasible Actions
  2. Deal with the expected outcome of actions beforehand

Agent Environment Dimensions

Observability

  1. Fully Observable - everything in the environment relevant to the action being considered is observable
  2. Partially Observable - the relevant features of the environment are only partially observable, so the agent must track and reason about the environment. Ex. fully observable: chess; partially observable: poker

Determinism Types

  1. Deterministic: the next state of the environment is completely determined by the current state and the agent's action. E.g. image analysis
  2. Stochastic: if an element of interference or uncertainty occurs, then the environment is stochastic. Note that a deterministic yet partially observable environment will appear stochastic to the agent. E.g. Ludo
  3. Strategic: the environment state is wholly determined by the preceding state and the actions of multiple agents. E.g. chess

Episodicity

  1. Episodic Environment: subsequent episodes (phases) do not depend on the actions that occurred in previous episodes; the current episode does not affect subsequent episodes
  2. Sequential Environment: the agent engages in a series of connected episodes (phases), which implies it may need to plan ahead based on previous episodes

Dynamism

  1. Static Env: does not change from one state to the next while the agent is considering its course of action. The only changes to the environment are those caused by the agent itself. The passage of time is irrelevant to deliberation, and the agent does not need to observe the world during deliberation.
  2. Dynamic Env: changes over time independent of the actions of the agent, so if the agent does not respond in a timely manner, this counts as a choice to do nothing. The agent needs to observe the world while deliberating.

Continuity

  1. Discrete: if the number of distinct percepts and/or actions is limited, the environment is discrete; otherwise it is continuous.

Single/Multi Agent

  1. If the environment contains other intelligent agents, the agent needs to be concerned about strategic, game-theoretic aspects of the environment (for either cooperative or competitive agents).
  2. Most engineering environments don’t have multi-agent properties, whereas most social and economic systems get their complexity from the interactions of (more or less) rational agents.

Environment Complexity

  1. Knowledge rich: the environment contains an enormous amount of information
  2. Input rich: the environment can send an enormous amount of input to an agent

The agent must have a way of managing complexity

  1. Sensing Strategies
  2. Attentional Mechanisms
  3. One or both of the above, so the agent may more readily focus its efforts in such rich environments