Project 3: Solving Mazes with A*
Overview
You will develop a best-first search implementation along with a series of heuristics for solving mazes. Code for representing mazes and generating random mazes is provided. Mazes can have different levels of “perfection”, and they can also contain treasures that must be collected.
Setup
Clone the GitHub repository csci335. Push your own copy to your GitHub account as a private repository, and add the instructor as a collaborator.
Programming Assignment:
The files are organized into the following packages. Classes you will modify are shown in boldface.
- core
  - Pos: Represents a position in the Maze.
  - Direction: Represents a compass direction. Convenient for relating Pos objects.
  - AIReflector: Utility class that finds all of the heuristics for testing purposes.
- maze.core
  - Maze: Represents a maze.
  - MazePath: Represents a path through a maze. It can check whether the path validly connects the entrance and exit.
  - **MazeExplorer**: Represents an “explorer” collecting “treasure”. It does not yet generate successor “explorers”; you are responsible for implementing this.
  - MazeTest: JUnit test class. It is currently failing. If MazeExplorer is correctly implemented, it should pass testNoTreasure() and testMany(). If BestFirstSearcher is also working correctly, it should pass testBestFirst().
- maze.gui
  - MazeViewer: The main GUI class. This is what you will run to test your heuristics.
  - MazePanel: Draws the maze.
- maze.heuristics
  - BreadthFirst: Example implementation of a heuristic. It always returns zero, thus producing breadth-first search.
  - Place all of your heuristic implementations in this package, as this is where the GUI will look for them.
- search
  - GenericSearcher: Generic search algorithm.
  - SearchNode: Object under consideration in the search queue.
  - SearchQueue: Interface representing a search queue.
- search.breadthfirst
  - BreadthFirstQueue: Example implementation of the SearchQueue interface.
  - BreadthFirstSearcher: Extension of GenericSearcher to produce breadth-first search.
- search.bestfirst
  - **BestFirstQueue**: Implementation of the SearchQueue interface to produce best-first search. You will need to complete the implementation.
  - BestFirstSearcher: Extension of GenericSearcher to produce best-first search.
- search.example
  - BestFirstDemo: Demonstration of BestFirstSearcher.
  - BestFirstDemoTest: Unit test of BestFirstDemo. It is currently failing and should pass once BestFirstQueue is complete.
Once the above code is working, implement the following heuristics. Each one will be a class that implements the ToIntFunction<MazeExplorer> interface, and each should be placed in maze.heuristics:
- A heuristic that uses the Manhattan distance to the goal
- Two additional monotonic heuristics
- One non-monotonic heuristic
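As a starting point for the Manhattan-distance heuristic listed above, here is a minimal sketch. It assumes MazeExplorer exposes its current position and the goal via getLocation() and getGoal(), and that Pos has getX() and getY(); those accessor names are guesses, so adapt them to whatever the provided classes actually offer.

```java
package maze.heuristics;

import java.util.function.ToIntFunction;

import core.Pos;
import maze.core.MazeExplorer;

// Sketch only: getLocation(), getGoal(), getX(), and getY() are assumed
// accessor names, not necessarily the ones in the provided code.
public class ManhattanToGoal implements ToIntFunction<MazeExplorer> {
    @Override
    public int applyAsInt(MazeExplorer node) {
        Pos here = node.getLocation();
        Pos goal = node.getGoal();
        return Math.abs(here.getX() - goal.getX())
             + Math.abs(here.getY() - goal.getY());
    }
}
```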
Restrictions on heuristics
- All heuristics should be completely deterministic. Any heuristic making use of random numbers will receive no credit, as a heuristic is supposed to correspond to actual information about the path to a solution.
- Any heuristic that fails to make use of actual problem information to estimate the goal distance will receive no credit.
- The maze operates in a Manhattan space. Euclidean distances, while monotonic, offer strictly less information than Manhattan distances, so Euclidean-based heuristics will also receive no credit. For example, from (0, 0) to (3, 4) the Manhattan distance is 7, while the Euclidean distance is only 5. The same applies to any other distance metric that consistently yields lower goal-distance estimates than Manhattan distances.
Hints and tips
- To get BestFirstQueue working (a sketch illustrating these points appears below):
  - The basic structure will be similar to BreadthFirstQueue.
  - Use a heap (i.e., java.util.PriorityQueue).
  - Think carefully about how to convey node priority to the queue. There are a number of ways to do this: you can provide a Comparator object to the PriorityQueue, or you can create your own node class that implements the Comparable interface.
  - To track already-visited objects, use a HashMap that stores the object itself (not the SearchNode) as the key and the best heuristic estimate as the value. Make sure to consider all nodes that improve the best-known estimate.
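The sketch below makes no assumptions about the real SearchQueue interface; it only illustrates the two hints above: a PriorityQueue ordered by heuristic estimate via a Comparator, plus a HashMap recording the best-known estimate for each object already seen. The method names enqueue and dequeue are placeholders; your actual class must implement whatever SearchQueue specifies and will hold SearchNode objects rather than bare values.

```java
import java.util.HashMap;
import java.util.Optional;
import java.util.PriorityQueue;
import java.util.function.ToIntFunction;

// Illustration only: not the real SearchQueue interface.
public class BestFirstQueueSketch<T> {
    private final ToIntFunction<T> heuristic;
    private final PriorityQueue<T> heap;                        // ordered by estimate
    private final HashMap<T, Integer> bestEstimate = new HashMap<>();

    public BestFirstQueueSketch(ToIntFunction<T> heuristic) {
        this.heuristic = heuristic;
        // The Comparator orders the heap by heuristic estimate, smallest first.
        this.heap = new PriorityQueue<>(
                (a, b) -> Integer.compare(heuristic.applyAsInt(a),
                                          heuristic.applyAsInt(b)));
    }

    public void enqueue(T item) {
        int estimate = heuristic.applyAsInt(item);
        Integer best = bestEstimate.get(item);
        // Keep an item only if it is new or improves the best-known estimate.
        if (best == null || estimate < best) {
            bestEstimate.put(item, estimate);
            heap.add(item);
        }
    }

    public Optional<T> dequeue() {
        return Optional.ofNullable(heap.poll());
    }
}
```

The alternative mentioned in the hint is to wrap each entry in your own node class that implements Comparable and use PriorityQueue's no-argument constructor instead of supplying a Comparator.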
- To get MazeExplorer working, you need to pass two unit tests (a sketch of successor generation appears below):
  - The first unit test ignores the possibility of treasures. To pass it, it should suffice to generate one successor for each Direction. Of course, if a given direction is blocked, or if a neighbor would be outside the maze, no successor should be generated for that direction. The Maze.getNeighbors() method ensures that neighbors are within the maze; it does not ensure that it is possible to travel between them.
  - The second unit test takes treasures into account. You will need to copy all of the claimed treasures from the parent node to each successor node. If the successor node’s position contains a treasure, you must also record that fact in the successor’s set of acquired treasures.
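A control-flow sketch of successor generation follows. Only Maze.getNeighbors() is named in the assignment; getMaze(), getLocation(), isOpenBetween(), getTreasures(), addTreasure(), and hasTreasureAt() are invented placeholder names, so this fragment will not compile against the provided code as written.

```java
// Sketch of MazeExplorer successor generation; needs java.util.ArrayList.
// Every call other than Maze.getNeighbors() uses a placeholder name.
public ArrayList<MazeExplorer> getSuccessors() {
    ArrayList<MazeExplorer> successors = new ArrayList<>();
    for (Pos neighbor : getMaze().getNeighbors(getLocation())) {
        // getNeighbors() keeps us inside the maze, but it does not check
        // whether travel between the two cells is possible, so test that here.
        if (getMaze().isOpenBetween(getLocation(), neighbor)) {
            MazeExplorer child = new MazeExplorer(getMaze(), neighbor);
            // Copy every treasure already claimed by this (parent) node.
            for (Pos t : getTreasures()) {
                child.addTreasure(t);
            }
            // If the new cell itself holds a treasure, record that as well.
            if (getMaze().hasTreasureAt(neighbor)) {
                child.addTreasure(neighbor);
            }
            successors.add(child);
        }
    }
    return successors;
}
```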
- When devising heuristics, focus on the role of the treasures. How might you use Manhattan distances related to the treasure locations? Many different heuristics, both monotonic and non-monotonic, can be devised in this manner; one possibility is sketched below.
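The sketch below estimates the smallest Manhattan distance from the current position to either the exit or any unclaimed treasure; whether such a heuristic is monotonic is for you to argue in your paper. The accessors getLocation(), getGoal(), and getUncollectedTreasures() are assumed names rather than the actual API.

```java
package maze.heuristics;

import java.util.function.ToIntFunction;

import core.Pos;
import maze.core.MazeExplorer;

// Sketch only: the accessor names below are assumptions.
public class NearestTargetHeuristic implements ToIntFunction<MazeExplorer> {
    @Override
    public int applyAsInt(MazeExplorer node) {
        Pos here = node.getLocation();
        // Start with the distance to the exit, then see whether any
        // unclaimed treasure is closer.
        int best = manhattan(here, node.getGoal());
        for (Pos treasure : node.getUncollectedTreasures()) {
            best = Math.min(best, manhattan(here, treasure));
        }
        return best;
    }

    private static int manhattan(Pos a, Pos b) {
        return Math.abs(a.getX() - b.getX()) + Math.abs(a.getY() - b.getY());
    }
}
```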
Experiments
For each heuristic, determine experimentally the “hardest” mazes that it can solve in two minutes or less on your computer. Be sure to report your computer’s clock speed. Maze “hardness” varies along the following dimensions:
- Degree of perfection, from perfect (no loops) to imperfect (very few barriers)
- Amount of treasure to be collected
- Size of the maze
For each experiment you run:
- For each maze you generate, run each heuristic on that maze. This ensures that comparisons between heuristics are not biased.
- For each maze, record the amount of perfection, number of treasures, and maze dimensions.
- For each heuristic, record the number of nodes expanded, maximum depth reached, solution length, and effective branching factor.
- If a heuristic takes longer than two minutes, feel free to terminate it and record that, for that experiment, the heuristic took too long.
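The assignment does not define the effective branching factor here. One common definition (from Russell and Norvig) takes b* to be the branching factor that a uniform tree of the solution depth d would need in order to contain the N nodes you expanded, i.e. the solution of N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d; confirm with your instructor if a different definition is expected. A sketch of solving for b* numerically:

```java
// Computes the effective branching factor b* under the definition
// N + 1 = 1 + b* + b*^2 + ... + b*^d, where N is the number of nodes
// expanded and d is the solution depth, by bisection on b.
public class BranchingFactor {
    public static double effectiveBranchingFactor(long nodesExpanded, int depth) {
        double lo = 1.0;
        double hi = Math.max(2.0, nodesExpanded);   // the root lies in [1, N]
        for (int i = 0; i < 100; i++) {             // bisection iterations
            double mid = (lo + hi) / 2.0;
            if (seriesSum(mid, depth) < nodesExpanded + 1) {
                lo = mid;
            } else {
                hi = mid;
            }
        }
        return (lo + hi) / 2.0;
    }

    // Returns 1 + b + b^2 + ... + b^depth.
    private static double seriesSum(double b, int depth) {
        double sum = 1.0;
        double term = 1.0;
        for (int i = 1; i <= depth; i++) {
            term *= b;
            sum += term;
        }
        return sum;
    }
}
```

For example, under this definition a search that expands 52 nodes to find a solution at depth 5 has an effective branching factor of roughly 1.92, which is what effectiveBranchingFactor(52, 5) returns.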
Paper
When you are finished with your experiments, you will write a short paper summarizing your findings. Include the following details in your paper:
- A description of each heuristic
- Include an argument for its monotonicity (or lack thereof)
- The data you collected from each of your experiments
- An analysis of the data, including:
- A discussion of the merits of the heuristics relative to breadth-first search and to each other,
- A discussion of how varying maze size, treasure, and perfection affects the general difficulty of solving a maze by computer, and
- A discussion of the degree to which the results matched your expectations
- Code for each heuristic
Submission
- Post your code on Github in your private repository. Make sure the instructor is added as a collaborator.
- Upload your paper to Microsoft Teams.
Assessment
- Level 1
  - MazeExplorer and BestFirstQueue pass all tests.
- Level 2
  - Complete experiments and paper as described above.
  - Up to one of the heuristics may be misclassified in the paper.
- Level 3
  - All heuristics must be correctly classified.
  - After completing the work above as assigned:
    - Create and evaluate three additional heuristics.
    - Each heuristic must be designed to improve upon one or more of the other heuristics you developed.
    - The manner in which the heuristic is intended to improve upon previous work must be clearly described in the paper.
    - An assessment as to whether the improvement was successful must also be included in the paper.
    - Correctly classify each additional heuristic as monotonic or non-monotonic.