General settings
Rows:
Cols:
Agent starts at X:
Agent starts at Y:
Dirty squares:
random, or a coordinate list: X1,Y1|X2,Y2|X3,Y3
Blocked squares:
random, or a coordinate list: X1,Y1|X2,Y2|X3,Y3
Agent class:
ReflexRandomizedAgent
RationalTwoSquaresSimple
RationalTwoSquaresSimpleWithState
AgentWithState
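The agent classes above range from a simple randomized reflex agent to agents that keep internal state. As a rough sketch (in Python; the class name, percept format, and action strings are assumptions for illustration, not the simulator's actual API), a randomized reflex agent reacts only to the current percept:

```python
import random

# Action names and percept format are illustrative assumptions.
MOVES = ["LEFT", "RIGHT", "UP", "DOWN"]

class ReflexRandomizedAgentSketch:
    """Suck if the current square is dirty, otherwise move randomly.

    Keeps no internal state: the action depends only on the current
    percept, assumed here to look like {"x": 0, "y": 0, "dirty": True}.
    """
    def act(self, percept):
        if percept["dirty"]:
            return "SUCK"
        return random.choice(MOVES)
```

A state-keeping agent would additionally remember which squares it has already visited or cleaned, which is what distinguishes the "WithState" variants above.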
Perception type:
Position + Clean/Dirty
Bump sensor + Clean/Dirty
Performance Measure:
Number of clean squares
Number of clean squares, penalize movement
Time until the environment is fully clean (lower is better)
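The three performance measures above can be sketched as simple scoring functions (hypothetical helper names; the grid is assumed to be a dict mapping coordinates to "clean" or "dirty"):

```python
def clean_squares(grid):
    """Measure 1: count of clean squares (higher is better)."""
    return sum(1 for state in grid.values() if state == "clean")

def clean_squares_with_penalty(grid, moves_made, penalty=1):
    """Measure 2: clean squares minus a cost per movement.

    The per-move penalty of 1 is an assumed default.
    """
    return clean_squares(grid) - penalty * moves_made

def time_until_clean(step, grid):
    """Measure 3: the first time step at which every square is
    clean (lower is better); None while any dirt remains."""
    all_clean = all(state == "clean" for state in grid.values())
    return step if all_clean else None
```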
Time steps:
The simulation stops after this many time steps
Time between actions (ms):
or
Advance Manually
Pre-defined templates
2 locations: suck if dirty, otherwise move left/right. Measure clean squares.
2 locations: suck if dirty, otherwise move left/right. Measure clean squares; penalize each movement.
2 locations: suck if dirty, otherwise move left/right. Measure clean squares; penalize each movement. Agent has internal state.
10x10: suck if dirty, otherwise move randomly. Measure clean squares. Reflex agent. Measure time until the environment is clean.
10x10: suck if dirty, otherwise search to explore all moves, with backtracking. State agent. Measure time until the environment is clean.
10x10: suck if dirty, otherwise search to explore all moves, with backtracking. State agent. Measure time until the environment is clean. The position in the percept is replaced with a bump sensor that reports only whether the last move succeeded, not the actual position. Note that the debug log reflects where the agent thinks it is, not its actual position.
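As the last template notes, with only a bump sensor the agent must track its own believed position, which can drift from the true one. A minimal sketch of that bookkeeping (class and method names are hypothetical; the axis orientation, x growing rightward and y downward, is an assumption):

```python
# Direction deltas for each move; axis orientation is an assumption.
DELTAS = {"LEFT": (-1, 0), "RIGHT": (1, 0), "UP": (0, -1), "DOWN": (0, 1)}

class BeliefTracker:
    """Maintains the position the agent *thinks* it is at.

    The percept only says whether the last move bumped into a wall
    or blocked square, so the believed position is advanced only
    when no bump was reported.
    """
    def __init__(self, start=(0, 0)):
        self.believed_pos = start
        self.last_move = None

    def record_move(self, action):
        # Remember the last movement action; SUCK never changes position.
        self.last_move = action if action in DELTAS else None

    def observe(self, bumped):
        # Advance the believed position only if the move succeeded.
        if self.last_move is not None and not bumped:
            dx, dy = DELTAS[self.last_move]
            x, y = self.believed_pos
            self.believed_pos = (x + dx, y + dy)
```

This is also why the debug log can disagree with the real grid: it prints `believed_pos`, which stops matching the true position after any unnoticed discrepancy.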
Time:
Action:
Agent position:
,
Performance: