For the sentence and story parsers and the cue former, the input layer is at the top, the (possible) previous hidden layer and the hidden layer are in the middle, and the output layer and the target pattern are at the bottom. The order is reversed for the answer producer and the story and sentence generators. The word representations at the input and output assemblies are labeled. Each label indicates the word in the semantic lexicon that is closest to the current pattern. If this word is not the same as the target word (i.e. the output is wrong), the target word is given in parentheses, e.g. "*boy(girl)*". The symbol "_" indicates the blank, or all-0, representation; this symbol is shown only when the word is incorrect. If a label does not fit in its box, it is truncated at the right.
The labels on the lexical and semantic map units indicate the maximally responding units for each word in the lexicon. The label at the top left of the window indicates the maximally responding unit, and also whether this map is currently used as the input map or the associative map.
On the episodic memory display, the top-level map is shown at the top right, the middle-level maps around it in the top-right quadrant, and the bottom-level maps elsewhere in the window, corresponding to their positions in the middle-level maps (the top- and middle-level maps are automatically placed over those bottom-level maps that win the fewest input items). The labels indicate the images of the different scripts, tracks and role bindings in the entire 96-story test set. The positive lateral weights that store the traces are shown as lines pointing toward the destination unit of the connection; the length and width of each line indicate the strength of the connection. At the top left of this window, the labels of the maximally responding units at the three levels are shown, together with a letter indicating whether this is a stored (S) or retrieved (R) representation.
"Run": click here and DISCERN will start a simulation run, reading input from the default input file. While the simulation is running, the "Run" button changes into a "Stop" button:
"Stop": you can interrupt the run at any time by clicking on the "Stop" button, which then changes back into the "Run" button. Click "Run" again and the simulation continues.
"Step" is a toggle switch; when on, it causes DISCERN to pause after every major propagation in the network. Click "Run" to continue.
"Clear" interrupts the currently running simulation and clears the network activations. After hitting "Run", the simulation restarts from the beginning of the current input file.
"Quit" terminates the program.
The area to the right of the "Step" button is a command window (see list of commands below). It comes up with "file input-example" as the default command (indicating that the name of the default input file is "input-example"). Anything you type into the DISCERN display will go to the command window. You can edit the text with standard emacs-style commands. Hitting "Return" will send the command to DISCERN.
The display interacts with the X system in the usual manner: you can iconize the display, resize it (unfortunately, the fonts are not resizable), change the default parameters, etc.
"list-params"
Lists the current weight and input file names and various parameters.
"init-stats"
Initializes the performance statistics.
"print-stats"
Prints out performance statistics collected since the last "init-stats".
For each module, separately for paraphrasing and question answering, the
percentage of correctly identifiable words (out of all words), the
percentage of correctly identifiable instances, the percentage of output
units within 0.15 of the correct value, and the average error per output
unit are printed in the output window.
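As an illustration of the last two statistics, here is a minimal Python sketch of the per-unit measures. This is a hypothetical helper written for this manual; the names, data layout, and exact bookkeeping in the actual program may differ.

```python
def output_stats(outputs, targets, withinerr=0.15):
    """Return (percentage of output units within `withinerr` of the
    correct value, average absolute error per output unit).
    Hypothetical helper: outputs and targets are assumed to be lists
    of equal-length float vectors (one vector per output assembly)."""
    within = 0        # units close enough to the target
    total_err = 0.0   # sum of absolute errors
    n = 0             # total number of output units
    for out, tgt in zip(outputs, targets):
        for o, t in zip(out, tgt):
            err = abs(o - t)
            total_err += err
            if err <= withinerr:
                within += 1
            n += 1
    return 100.0 * within / n, total_err / n
```

The "withinerr" command described later in this section changes the 0.15 tolerance used by this statistic.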
"clear-networks"
Clears the networks (but does not interrupt a possibly ongoing simulation).
"quit"
Terminates the program.
"stop"
(meaningful only in an input file) Causes the simulation to stop. Click
"Run" (or hit return if the display is not on) to continue.
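An input file is thus a sequence of these commands, one per line. A minimal hypothetical example using only commands documented in this section (the actual demo input files may contain additional directives not described here):

```
withlex 1
withhfm 1
print_mistakes 1
stop
```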
"withhfm <1/0>"
Whether to include the episodic memory in the simulation runs, or to use
the output of the story parser directly as input to the story generator
and the answer producer.
"withlex <1/0>"
Whether to include the lexicon in the simulation runs, or to use the
semantic representations directly as input and output.
"delay <int>"
"babbling <1/0>"
When on, detailed log output will be printed in the standard output.
"print_mistakes <1/0>"
When on, erroneous words (together with the correct word) will
be printed in the standard output even when babbling is off.
"log_lexicon <1/0>"
When on, the propagation in the lexicon (lexical <-> semantic
representations) will be logged in the standard output (if babbling is
on). It is easier to read the output if log_lexicon is off.
"ignore_stops <1/0>"
Do not stop when the "stop" command is encountered in an input file
(useful for collecting statistics).
"topsearch <float>"
The search threshold for script level of the episodic memory.
"midsearch <float>"
The search threshold for track level of the episodic memory.
"withinerr <float>"
The tolerance for the "within" statistic: output units within this
distance (e.g. 0.15) of the correct value are counted as correct.
"tsettle <positive integer>"
How many settling iterations in memory retrieval.
"epsilon <positive float>"
If the activity of trace map unit A is more than epsilon larger than
that of unit B, a positive connection from B to A is formed.
"aliveact <positive float>"
If the response is oscillating and the lower value is less than
aliveact, consider it a failed retrieval.
"minact <float>"
Lower threshold of the piecewise linear sigmoid approximation.
"maxact <float>"
Upper threshold of the piecewise linear sigmoid approximation.
"gammaexc <positive float>"
Magnitude of the excitatory lateral weight in the trace map (\gamma_E).
"gammainh <negative float>"
Magnitude of the inhibitory lateral weight in the trace map (\gamma_I).
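To make the roles of minact, maxact, epsilon and gammaexc concrete, here is a small Python sketch of one plausible reading of these parameters. These are assumed semantics written for this manual only; the actual implementation may differ in details such as scaling or strict vs. non-strict comparisons.

```python
def piecewise_sigmoid(x, minact=0.0, maxact=1.0):
    """Piecewise linear approximation of the sigmoid: 0 below minact,
    1 above maxact, and linear in between (assumed form)."""
    if x <= minact:
        return 0.0
    if x >= maxact:
        return 1.0
    return (x - minact) / (maxact - minact)

def trace_weight(act_a, act_b, epsilon, gammaexc):
    """If unit A responds more than epsilon more strongly than unit B,
    store an excitatory lateral connection of magnitude gammaexc from
    B to A; otherwise no trace connection is formed (assumed rule)."""
    return gammaexc if act_a - act_b > epsilon else 0.0
```

On this reading, gammainh would play the corresponding role for the inhibitory lateral connections of the trace map.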
This software can be copied, modified and distributed freely for educational and research purposes, provided that this notice is included in the code and the author is acknowledged in any materials and reports that result from its use. It may not be used for commercial purposes without express permission from the author.
The software is provided as is; however, we will do our best to maintain it and accommodate suggestions. If you want to be notified of future releases of the software, or if you have questions, comments, bug reports or suggestions, send email to discern@cs.utexas.edu. If you want to get more involved in building NLP systems, check out the rest of DISCERN and other software available from the UTCS Neural Networks research group.
Special thanks to Jimmy Jusuf for help in putting together the DISCERN demo and making it into a portable package under X11.
risto@cs.utexas.edu Last update: 1.5 1999/03/05 08:07:59 jbednar