
Saturday, May 21, 2016

morris_AIMA [#6]: Randomized Reflex Agent: Completed Vacuum World!

Continuing from the simple reflex agent outlined HERE, I have implemented the follow-up environment using a randomized reflex agent; the demo is shown below.

Problem Statement 

Given an NxN tiled environment featuring Walls, implement a randomized vacuum-cleaner agent that cleans a square if it is dirty and otherwise randomly selects an adjacent tile to move to.


Assumptions

  1. The agent may not pass through Walls or exit the boundaries of the environment.
  2. Dirt is placed only on tiles not containing a Wall.
  3. The performance measure awards one point for each clean square at each time step, over the course of T time steps.
  4. The “geography” of the environment is unknown a priori.
  5. Clean squares stay clean, and sucking cleans the current square.
  6. There are four actions the agent may perform: Left, Right, Up, Down. All of these act as intended.
  7. The agent correctly perceives its location and whether that location contains dirt.
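
To make the agent's program concrete, here is a minimal sketch in Python. The percept format, action names, and function name are my own assumptions based on the description above, not the actual morris_AIMA code.

```python
import random

# The four movement actions (assumption 6); "Suck" cleans the current tile.
ACTIONS = ["Left", "Right", "Up", "Down"]

def randomized_reflex_agent(percept):
    """Percept is assumed to be (location, is_dirty), per assumption 7."""
    location, is_dirty = percept
    if is_dirty:
        return "Suck"              # sucking cleans the current square (assumption 5)
    return random.choice(ACTIONS)  # otherwise move toward a random adjacent tile
```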

Implementation using Morris_AIMA Software

I first tested out the randomized agent on a 2x2 grid.

Given that there is now a Wall object, we cannot use a simple reflex agent that just traverses all tiles; this would result in an infinite loop as the agent repeatedly attempts to pass through a Wall. Randomization prevents such infinite cycling and performs well on this small environment.

Next, I scaled up the environment to be a 4x4 grid, and tested out the Agent there. 


If you have not already, take note of the Performance Measure label that shows progress. In some cases the agent appears to halt, when in reality it is randomly selecting invalid moves: moves that either put the agent outside of the environment bounds or cause a wall collision.
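
For illustration, here is roughly how such a move might be resolved. This is a sketch under my own assumed grid representation (walls as a set of coordinates), not the morris_AIMA internals; the key point is that an invalid move simply leaves the agent where it is.

```python
# Hypothetical representation: walls as a set of (x, y) coordinates in an N x N grid.
DELTAS = {"Left": (-1, 0), "Right": (1, 0), "Up": (0, -1), "Down": (0, 1)}

def resolve_move(position, action, n, walls):
    """Return the agent's new position; an invalid move leaves it in place."""
    dx, dy = DELTAS[action]
    x, y = position[0] + dx, position[1] + dy
    if not (0 <= x < n and 0 <= y < n):  # move would exit the environment bounds
        return position
    if (x, y) in walls:                  # move would collide with a Wall
        return position
    return (x, y)
```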

Although a randomized agent will, probabilistically, eventually reach every dirty tile, it may no longer be the most efficient technique. Still, in comparison to a simple reflex agent without state, this randomized agent avoids looping behavior, and is thus better [by assumption 4].
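
The performance measure from assumption 3 works out to score = (clean squares at step 1) + ... + (clean squares at step T). Combining this with the resolve_move sketch above gives a rough end-to-end simulation; again, the names and representation here are my own assumptions, not the actual environment code.

```python
import random

def run_simulation(n, walls, dirt, start, steps):
    """Award one point per clean non-wall square at each time step (assumption 3)."""
    dirt = set(dirt)
    position = start
    free_tiles = n * n - len(walls)
    score = 0
    for _ in range(steps):
        if position in dirt:
            dirt.discard(position)                # Suck: clean the current square
        else:
            action = random.choice(list(DELTAS))  # random adjacent tile
            position = resolve_move(position, action, n, walls)
        score += free_tiles - len(dirt)           # one point per clean square
    return score

# Example: 4x4 grid, one wall, dirt on three tiles, 200 time steps.
print(run_simulation(4, {(1, 1)}, {(0, 3), (2, 2), (3, 0)}, (0, 0), 200))
```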

To test the scalability of my software and of the randomized agent, a demo was created for a 10x10 environment.


It becomes quite apparent in this example just how much the randomized agent struggles. In 200 cycles of the environment, less than half of it was even explored. Although the simple reflex agent has the issue of infinite looping, it would be more effective to use a state-based reflex agent for this type of environment.

A quick change was also made to the SimulationResult object. It now appends the various simulation results for the Vacuum World, and includes the time at which each run took place.
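
As a sketch of the idea only (the real SimulationResult lives in morris_AIMA, and I am guessing at its shape here):

```python
from datetime import datetime

class SimulationResult:
    """Illustrative stand-in only; the real class is part of morris_AIMA."""
    def __init__(self):
        self.runs = []

    def append_result(self, label, score):
        # Store each run's result alongside the time the simulation was run.
        self.runs.append({"label": label, "score": score, "ran_at": datetime.now()})
```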

Discussion

Despite the simple reflex agent and the randomized agent performing similarly on a smaller 2x2 grid, the randomized agent becomes noticeably inefficient on a larger NxN grid. The simple reflex agent, however, has the issue of cyclical behavior, locking itself into a loop when walls are encountered. For this reason the randomized agent is more effective; a state-based reflex agent would surpass a randomized one, however, as a state-based agent may track tiles and adjust where it moves depending on its updated internal state.
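
As an illustration of what such a state-based agent might look like (my own sketch, not code from the book or from morris_AIMA), it could remember visited tiles and prefer moves toward unvisited neighbours, falling back to a random move otherwise:

```python
import random

DELTAS = {"Left": (-1, 0), "Right": (1, 0), "Up": (0, -1), "Down": (0, 1)}

class StateBasedReflexAgent:
    """Remembers visited tiles and prefers moves toward unvisited ones."""
    def __init__(self):
        self.visited = set()

    def act(self, percept):
        location, is_dirty = percept
        self.visited.add(location)
        if is_dirty:
            return "Suck"
        # Walls are unknown a priori (assumption 4), so a preferred move may
        # still be invalid; the environment simply leaves the agent in place.
        unvisited = [action for action, (dx, dy) in DELTAS.items()
                     if (location[0] + dx, location[1] + dy) not in self.visited]
        return random.choice(unvisited or list(DELTAS))
```

Since walls are not perceived, invalid moves can still occur, but the visited set steadily biases the walk toward unexplored tiles rather than revisiting clean ones.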

I'll add the solutions I've collected from chapters 1 & 2 to my GitHub sooner or later, once I'm finished formatting. Currently in Taiwan, so busy busy.
