Task to Complete
Create a program that uses a neural network to find a line to separate two “half-moons” of randomly generated data (see image).
April 27th, 2017
- I spent this class downloading the Neuroph library and the sample XOR code. I tried to test the XOR code, but I wasn’t sure how to get the library working properly.
May 1st and 3rd, 2017
- Because this is the first time I am using libraries in Java, I spent both classes figuring out how to link the Neuroph library with my code. I tried many approaches that didn’t work, like unzipping all the jar files. I finally got the code to compile, but even then, I was faced with a NoClassDefFoundError.
May 5th, 2017
- Today, I finally figured out that to use libraries in BlueJ, you simply have to rename the library folder to “+libs”. With this done, I looked through the sample XOR codes (the first one without a learning rule, the second using backpropagation, and the third using momentum backpropagation). I was able to compile and run all three programs successfully.
- After that, I spent some time figuring out how neuroph works. It was pretty self-explanatory, and I found that there was also good documentation online.
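To make the backpropagation idea concrete, here is a minimal plain-Java sketch of what the second sample (XOR with backpropagation) does, without depending on the Neuroph jars. The network size (3 hidden neurons), learning rate, epoch count, and random seed are all my own assumptions, not values from the samples.

```java
import java.util.Random;

// Minimal 2-3-1 sigmoid network trained with plain backpropagation on XOR.
// A stand-in sketch for Neuroph's MultiLayerPerceptron + BackPropagation sample;
// all hyperparameters here are illustrative assumptions.
public class XorBackprop {
    static final int H = 3;                  // hidden neurons (assumed)
    static double[][] w1 = new double[H][3]; // hidden weights, bias in slot 2
    static double[] w2 = new double[H + 1];  // output weights, bias in slot H

    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    // Forward pass; fills hOut with hidden activations and returns the output.
    static double forward(double a, double b, double[] hOut) {
        for (int h = 0; h < H; h++)
            hOut[h] = sigmoid(w1[h][0] * a + w1[h][1] * b + w1[h][2]);
        double net = w2[H];
        for (int h = 0; h < H; h++) net += w2[h] * hOut[h];
        return sigmoid(net);
    }

    static double meanSquaredError(double[][] in, double[] target) {
        double[] hOut = new double[H];
        double err = 0;
        for (int p = 0; p < in.length; p++) {
            double d = target[p] - forward(in[p][0], in[p][1], hOut);
            err += d * d;
        }
        return err / in.length;
    }

    // Trains the net and returns {errorBeforeTraining, errorAfterTraining}.
    static double[] train() {
        double[][] in = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        double[] target = {0, 1, 1, 0};
        Random rnd = new Random(42);
        for (int h = 0; h < H; h++)
            for (int j = 0; j < 3; j++) w1[h][j] = rnd.nextDouble() - 0.5;
        for (int h = 0; h <= H; h++) w2[h] = rnd.nextDouble() - 0.5;

        double before = meanSquaredError(in, target);
        double lr = 0.5;
        double[] hOut = new double[H];
        for (int epoch = 0; epoch < 20000; epoch++) {
            for (int p = 0; p < in.length; p++) {
                double out = forward(in[p][0], in[p][1], hOut);
                // Output delta, then per-hidden-neuron delta and weight updates.
                double dOut = (target[p] - out) * out * (1 - out);
                for (int h = 0; h < H; h++) {
                    double dHid = dOut * w2[h] * hOut[h] * (1 - hOut[h]);
                    w2[h] += lr * dOut * hOut[h];
                    w1[h][0] += lr * dHid * in[p][0];
                    w1[h][1] += lr * dHid * in[p][1];
                    w1[h][2] += lr * dHid;
                }
                w2[H] += lr * dOut;
            }
        }
        return new double[]{before, meanSquaredError(in, target)};
    }

    public static void main(String[] args) {
        double[] err = train();
        System.out.println("MSE before: " + err[0] + ", after: " + err[1]);
    }
}
```

In Neuroph itself, the same loop is hidden behind the library: you build the network, add the four XOR rows to a DataSet, and call the network’s learn method instead of writing the weight updates by hand.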
May 9th, 2017
- Today, I began working on my neural network double moon project. I was able to reuse most of the code from the last project, including the data generation and the graphics display using StdDraw.
- Some minor tweaks I had to make were changing the data generation to fit neuroph’s DataSet class format. Otherwise, this process was relatively smooth.
- By the end of class, I was able to generate the double moons, display them on the screen, and train a MultiLayerPerceptron to classify points.
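The data generation described above can be sketched as follows. This is a hypothetical double-moon generator, not the project’s actual code: the radius, width, gap distance, and the `{x, y, label}` row layout are all assumptions. In the project, each row would then be added to Neuroph’s DataSet (via addRow) before training.

```java
import java.util.Random;

// Hypothetical double-moon generator: each row is {x, y, classLabel}.
// Radius/width/distance parameters and the layout are illustrative assumptions.
public class DoubleMoon {
    public static double[][] generate(int nPerMoon, double radius, double width,
                                      double distance, long seed) {
        Random rnd = new Random(seed);
        double[][] rows = new double[2 * nPerMoon][3];
        for (int i = 0; i < nPerMoon; i++) {
            // Upper moon: random angle in [0, pi], label 0.
            double angle = Math.PI * rnd.nextDouble();
            double r = radius - width / 2 + width * rnd.nextDouble();
            rows[i][0] = r * Math.cos(angle);
            rows[i][1] = r * Math.sin(angle);
            rows[i][2] = 0;
            // Lower moon: mirrored, shifted right by radius and down by distance, label 1.
            double angle2 = Math.PI * rnd.nextDouble();
            double r2 = radius - width / 2 + width * rnd.nextDouble();
            rows[nPerMoon + i][0] = r2 * Math.cos(angle2) + radius;
            rows[nPerMoon + i][1] = -r2 * Math.sin(angle2) - distance;
            rows[nPerMoon + i][2] = 1;
        }
        return rows;
    }

    public static void main(String[] args) {
        double[][] data = generate(100, 1.0, 0.4, 0.2, 7);
        System.out.println(data.length + " points generated");
    }
}
```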
May 11th, 2017
- I spent this class working on the display for the separation lines. Because a perceptron has only one separation line while a neural network can have multiple, I had to rewrite this part of the code.
- To do this, I used the weights of each neuron in the hidden layer of my MultiLayerPerceptron to draw the lines.
- I also continued tweaking the learning rate and momentum of the learning rule to make training faster and reduce the final error.
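The weights-to-lines step above can be sketched like this: each hidden neuron with input weights (wx, wy) and bias b has a decision boundary where wx·x + wy·y + b = 0, which rearranges to y = -(wx·x + b)/wy. This is a minimal sketch assuming wy is nonzero; the helper name is my own, and in the project the resulting endpoints would be passed to something like StdDraw.line.

```java
// For a hidden neuron with input weights (wx, wy) and bias b, the decision
// boundary wx*x + wy*y + b = 0 is the line y = (-wx/wy)*x + (-b/wy).
public class BoundaryLine {
    // Returns {slope, intercept} of the neuron's boundary; assumes wy != 0.
    public static double[] line(double wx, double wy, double b) {
        return new double[]{ -wx / wy, -b / wy };
    }

    public static void main(String[] args) {
        // Example: x + 2y - 1 = 0  rearranges to  y = -0.5x + 0.5
        double[] l = line(1.0, 2.0, -1.0);
        System.out.println("y = " + l[0] + "x + " + l[1]);
    }
}
```

Drawing one such line per hidden neuron shows the pieces the network combines, which is why a multi-layer network produces several lines where a single perceptron produces one.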