I'm taking an introductory AI class this semester at RPI. In today's class (the third of the semester), we got to the point of discussing some of the different approaches to problem solving in AI. The class is structured around the idea of learning to build a rational agent, so today we focused on developing a goal-based problem solver.
A goal-based agent is designed around the concept of "states". The agent's current position in its environment, all observable attributes of the environment, and so on comprise the agent's "current state". Every time the agent performs an action (or any other agents or environmental bodies perform an action, in a non-static environment), a new state is essentially created; a small code sketch after the list below illustrates this. We work under the assumption (for now, anyway) that all possible states are known. In fact, we work under several assumptions when creating a simple problem solver. We assume that our environment is:
- Static - unchanging except when the agent affects it
- Observable - there are no unknown or unaccounted-for variables
- Deterministic - the state that results from a given action (or sequence of actions) in a given state can be predicted with complete accuracy
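Here's the sketch mentioned above: a minimal illustration of the "state" idea, using a made-up grid world rather than anything specific from class. The point is just that an action applied to a state yields a new state, leaving the old one intact.

```python
from typing import NamedTuple

# Hypothetical grid world: the agent's (x, y) position is the only
# observable attribute, so a state is just a pair of coordinates.
class State(NamedTuple):
    x: int
    y: int

# Each action maps a state to a new state; nothing is mutated, which
# matches the idea that every action "creates" a new state.
ACTIONS = {
    "up":    lambda s: State(s.x, s.y + 1),
    "down":  lambda s: State(s.x, s.y - 1),
    "left":  lambda s: State(s.x - 1, s.y),
    "right": lambda s: State(s.x + 1, s.y),
}

current = State(0, 0)
after = ACTIONS["right"](current)   # State(x=1, y=0) -- a brand new state
```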
Given those assumptions, the problem itself is then defined by:
- An initial state
- A set of possible states (or a way to calculate them)
- A set of possible actions
- A goal state
- A metric by which to measure solution desirability, so that the "best" solution can be found
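To make that list concrete, here's a minimal sketch of how those pieces might fit together in code. It uses a made-up route-finding problem, and the names (`Problem`, `actions`, `result`, `step_cost`, `best_solution`) are my own choices, not anything given in class; the solver is a simple uniform-cost search that uses path cost as the desirability metric.

```python
import heapq
from typing import Dict, List, Tuple

class Problem:
    def __init__(self, initial: str, goal: str,
                 graph: Dict[str, Dict[str, int]]):
        self.initial = initial   # the initial state
        self.goal = goal         # the goal state
        self.graph = graph       # implicitly defines the states and actions

    def actions(self, state: str) -> List[str]:
        # The set of possible actions: here, "move to a neighboring city".
        return list(self.graph[state])

    def result(self, state: str, action: str) -> str:
        # Deterministic transition: one action, one predictable next state.
        return action

    def step_cost(self, state: str, action: str) -> int:
        # The metric for measuring desirability: total path cost.
        return self.graph[state][action]

    def is_goal(self, state: str) -> bool:
        return state == self.goal


def best_solution(problem: Problem) -> Tuple[int, List[str]]:
    """Uniform-cost search: returns the cheapest path from initial to goal."""
    frontier = [(0, problem.initial, [problem.initial])]
    explored = set()
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if problem.is_goal(state):
            return cost, path
        if state in explored:
            continue
        explored.add(state)
        for action in problem.actions(state):
            nxt = problem.result(state, action)
            heapq.heappush(frontier,
                           (cost + problem.step_cost(state, action),
                            nxt, path + [nxt]))
    raise ValueError("no solution")


# Tiny example map: going A -> B -> C (cost 3) beats A -> C directly (cost 4).
roads = {
    "A": {"B": 1, "C": 4},
    "B": {"C": 2},
    "C": {},
}
print(best_solution(Problem("A", "C", roads)))   # (3, ['A', 'B', 'C'])
```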