The GNNV Project

Using GNNV


Please note: this document is very incomplete, and really deserves more attention than I have to give it right now; Jay and/or Becky: maybe you guys could work on it some more?...

Compiling the source

(if you've downloaded a statically-linked binary, you can skip this section)


Once you have downloaded the source code tarball, gunzip and untar it with either:

tar xvzf gnnv-0.5.1.tar.gz

or:

gunzip gnnv-0.5.1.tar.gz
tar xvf gnnv-0.5.1.tar

depending on your version of tar.

You now have the source tree rooted at gnnv/, with subdirectories bin/, include/, and src/ (there are also CVS/ directories, but those shouldn't concern you). Change into the src/ directory and type make, editing the Makefile first, if necessary, to conform to your system. Assuming you have gtk+ installed, make should take care of the compilation; gtk+ is required to compile the GNNV source. If you don't want to bother with gtk+, you can download the statically linked binary instead, but currently only a Solaris binary is available from the Shelley Research Group; however, porting to any other Unix-like platform should be rather trivial.
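For example, assuming you unpacked the tarball in your current directory, the whole build comes down to:

cd gnnv/src
make

and the newly compiled gnnv executable should end up in gnnv/bin/.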

Getting a sample dataset

You will also need to download and unpack the tarball facedata.tar.gz; this is a collection of pgm files which are distributed with Dr. Jeff Shufelt's neural network code (see http://www.cs.cmu.edu/afs/cs.cmu.edu/user/mitchell/ftp/faces.html for more information). You'll want to move the downloaded file to the same directory where your newly compiled executable resides (either gnnv/bin/ in the source code distribution, or just gnnv-static/ in the statically linked distribution). Unpack the tarball and you should be ready to run GNNV.
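For example, assuming you built from source and downloaded facedata.tar.gz into gnnv/bin/, unpacking works just like before:

cd gnnv/bin
tar xvzf facedata.tar.gz

(or use the two-step gunzip / tar xvf variant, depending on your version of tar).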

Running GNNV

Run the gnnv executable file in the gnnv/bin/ directory of the source distribution. Please note: we currently have some colormap issues which prevent gnnv from running concurrently with Netscape. You may want to either print this page, or view it with a web browser that is friendlier about sharing color palettes.
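From the top of the source tree, that is simply:

cd gnnv/bin
./gnnv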

The large black area on the left side of the GNNV window is the network viewer -- when gnnv is working with a network, the user can see a portion of the network here. The white area in the upper right part of the window is the pixelgrid; it is a 30x32 grid which corresponds to the input nodes of the network. The colorbars below the pixelgrid are the legend (they are currently missing labels).

The File drop-down menu at the top of the window provides you with the usual file-handling utilities. Select "Create New Network..." The dialog box which comes up will allow you to define the dimensions of the network and the attribute which you want to train the network to recognize (currently these values are hard-coded at 960 x 4 x 1 and eyes: sunglasses). Select "Ok." Now the network viewer shows the new network which has been created. The nodes are all black because they are initialized to 0.0 (node values may vary between 0.0 and 1.0); the edges connecting the nodes are randomly initialized to values between -1.0 and 1.0. You can use the selection bar on the pixelgrid to select which input nodes are displayed in the network viewer (and consequently which connecting edges are displayed).

Under the Network drop-down menu, select "Test Network." You will be prompted to select an image list on which to test the network. From the file selection dialog box, select the file sample-images.list (image lists are just text files giving the relative paths to the image files) and click "Ok". A series of images will flash on the pixelgrid -- these are the images on which the network has tested itself. Statistics on how well the network performed will be printed to stdout; an untrained network will average about 50% correct.
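For reference, an image list is nothing fancier than one relative path per line. A made-up three-line example (the file names here are purely illustrative, not actual files from facedata.tar.gz) might look like:

faces/face01.pgm
faces/face02.pgm
faces/face03.pgm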

Now select "Train Network" from the Network menu, and follow the same procedure as for the testing described above. During training, GNNV will pass through the list of image 50 times (epochs); each time evaluating how well it was able to identify each image. A wrong answer will result in the network backpropogating the error back over the connecting edges, adjusting the values of the edges appropriately. The last number in the printout for each epoch is the number of images correctly identified. You should see that number increase and approach (if not meet) the number of images in the image list...


Obviously there are many things which still need to be implemented in GNNV: actual user control of what the network should be trained to recognize; a more user-readable display of statistics from training & testing (printed to a window rather than to stdout); a more obvious correlation between the pixelgrid and the input nodes which the user sees displayed in the network viewer (perhaps labeling of nodes?); multi-threading of the testing and training so that the user can manipulate the pixelgrid while tests or training runs are in progress; a friendlier tutorial; user-definable network structures; etc...

Yet GNNV promises to be a worthwhile product for both pedagogical uses and more sophisticated experiments. Hopefully you can imagine, as the Shelley Research Group enthusiastically does, that in the not-too-distant future GNNV will be capable of demonstrating many different uses of neural networks. Please check the GNNV Project page periodically for updated versions and further information.


Comments regarding GNNV are very welcome and can be sent to: shelley@sun.iwu.edu

