V-to-mu_Demo

Building the Java app

  1. Clone the project to a local repo.
  2. The cloned project folder (by default named V-to-mu_Demo) will have a subfolder, nbproject. The presence of that subfolder lets you open the folder as a project in NetBeans (other IDEs likely have an analogous mechanism).
  3. Start NetBeans and click the button to open a project.
  4. Navigate to the V-to-mu_Demo folder in the "Open Project" dialog and open the project.
  5. If you have the latest NetBeans, or more specifically, if your JDK is Java 11 or higher, you should be able to compile and run the app.
  6. If not, right-click the V-to-mu_Demo project (in the NetBeans Projects window) and select "Properties", which opens the Properties dialog.
  7. At the bottom of the dialog, click the "Src/Binaries" pulldown button and select the highest Java source level you have available. The app should run with Java 8 or higher.

In NetBeans, when you build, you might see that there are problems with the build, even though the app runs. Specifically, it might be that two libraries are specified, which you can see if you open the Properties dialog and click on "Libraries". You should be able to just remove the two libraries, "AbsoluteLayout.jar" and "swing-layout-1.0.4.jar". I thought these libraries needed to be in the distribution, but apparently not. I'll remove this comment from the Readme when I resolve this either way. But for now, just removing the two libraries should work for you.

Running the App

When you run the app, a window opens showing four main panels. Many of the controls and labels have tooltips which explain their operation.
You can also click the "Instructions" toolbar button to open a window with instructions. At the most general level, the way you use the app is as follows.

  1. When the app opens there will be a default number, Q, of WTA competitive modules (CMs) in the Mac panel (lower right), and a default number, K, of units per CM. Current defaults are Q=8 and K=4. The lower left panel shows a larger view of the first (leftmost, highlighted in yellow) CM in the Mac panel. It is shown by itself just to make what's going on in a single CM clearer, especially if a lot of units have been added. Whichever CM you click on in the lower right panel becomes highlighted, and the lower left panel immediately updates to show that CM's details.

  2. Use the K spinner to adjust the number of units per CM and the Q spinner to adjust the number of CMs.

  3. You will see that there is an initial distribution of V values for the K units in each of the Q CMs. There is a max-V unit in each CM (black bars), and those Q units all have the same V. That V value is set by the "Max V in each CM" slider. The V values of the rest of the units in each CM, i.e., the non-max-V units, are generated by drawing uniformly at random from the V range specified by the two sliders at top right, "Min Crosstalk Val" and "Max Crosstalk Val". In the V charts in the two lower panels and in the V-to-μ graph at upper left, the red dashed line indicates the max crosstalk value and the cyan dashed line indicates the min crosstalk value. If you play with the "Max V" slider and the two crosstalk sliders, you can see the effects on the V distributions in the CMs, the consequences for the final probabilities (ρ values) of winning (in particular, of the max-V unit in each CM), and ultimately, the effect on the expected accuracy of recall.

  4. The expected accuracy of recall (green box) can also be interpreted as the expected intersection of the winning code (the set of Q units chosen from the ρ distributions) with the code of the most closely matching previously stored item. Let me explain. If, e.g., you set the max-V slider to 0.8, you are implicitly presenting an input (which you don't actually see graphically depicted) that is 80% similar to the closest matching stored input (which you also do not see graphically depicted), i.e., corresponding to a G value of 0.8. In fact, the "Max V" slider and "Global Familiarity (G)" slider are tied: recall that G is defined as the average of the max V's across the Q CMs. So, in this case, where G < 1, you are effectively presenting a novel input to the coding field. There are then two possible interpretations of the value in the green box.

(A) We can consider the current input, which is only 80% similar to the closest matching stored input, to in fact be an instance of that stored input, just a noisy (or somehow transformed) version of it. In this case, we are effectively considering the system to be in retrieval mode, and the goal to be to reactivate the code of that closest matching stored input perfectly, i.e., to activate all Q units comprising that code. In this case, i.e., where the model effectively "knows" that it's in retrieval mode, the optimal strategy is in fact to simply pick the max-V unit in each CM. And in this case, the green box is correctly interpreted as expected (recall) accuracy.

(B) However, in the general case, where the model is operating autonomously and cannot know whether the current input is truly new or just a degraded/noisy instance of a known input, i.e., all it knows is the G value, simply picking the max-V unit in each CM is no longer appropriate. Think about it. Even if the current input (again, only implicitly defined by the existence of a set of units with the max V value) were only, say, 10% similar to the closest matching stored input, the Q units comprising the code of that stored input would all win in their respective CMs (assuming the crosstalk max was lower than 0.1). In that case, we'd be assigning to this new input, which is only 10% similar to the closest matching stored input, the exact same code that we assigned to that closest matching stored input. The model essentially loses the ability to preserve similarity from input space to code space, which is the essential property underlying intelligence (as well as search in general). So, the whole point of transforming the V values to the ρ values (via the intermediate μ values) is to achieve that similarity preservation.

To flesh out case B a little more: suppose we set max V to 0.5. We're implicitly presenting an input that is 50% similar to the closest matching stored input. In this case, perhaps we want the winning code to have an approximately 50% intersection with the code of the closest matching stored input. The way to achieve that is to create ρ distributions in the Q CMs whose statistics are such that we expect the unit with the max ρ (and indirectly the max V) to be drawn in about 50% of the CMs. Similarly, if we set the max-V slider to 0.8, then perhaps we want the winning code to intersect with the code of the closest matching stored input in about 80% of the CMs, etc.

So, that is why, in the general autonomous case, we cannot simply pick the max-V units as winners, but rather must transform the V distributions into ρ distributions to achieve the desired expected intersection with the code of the closest matching stored input. Thus, in this case, i.e., where G < 1, the value in the green box is appropriately interpreted as the expected size of the intersection with the code of the closest matching stored input. (A minimal code sketch of this computation appears just after this list.)

  5. When you click the "Generate New Sample" button, you generate a new pattern of V inputs in all Q CMs. In each CM, one randomly chosen unit will have a V value equal to the current setting of the "Max V" slider (which, again, is tied to the "Global Familiarity" slider). And again, in each CM, the other K-1 units will receive a random V value chosen from the range determined by the current settings of the Min and Max Crosstalk sliders. Note that this crosstalk simulates the effect of having stored some number of inputs. The app doesn't actually store those inputs explicitly; this is an abstract (approximate) way of simulating some number of prior inputs stored in superposition in the coding field.

  6. When G is near 1 (which must mean there is at least one unit in each CM with a V near 1, hence the tying of the two sliders), the input is highly familiar, and therefore we should want the max-V cell to win in all (or at least most) of the CMs, which would correspond to activating the code of the familiar (i.e., previously experienced) input. You can see how playing with the various sliders controlling the transform affects the expected accuracy, i.e., the expected fraction of CMs in which the max-V cell wins. On the other hand, when G is near 0 [which means all units (in each CM) have near-zero V values], the input is highly unfamiliar, in which case we should want to assign a highly unique code to the input. Thus, low G causes the V-to-μ transform to flatten, compressing the V values toward the same low value, yielding a near-uniform ρ distribution in each CM, which in turn leads to the minimum (chance-level) expected intersection of the chosen code with any previously stored codes.
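To make steps 3 to 5 concrete, here is a minimal, self-contained Java sketch of the quantities the app displays. The class and variable names are hypothetical, and the toRho() transform (a power law whose exponent grows with G) is only an illustrative stand-in for the app's actual V-to-μ-to-ρ transform; it reproduces the qualitative behavior described above (high G sharpens the ρ distributions, low G flattens them toward uniform), not the app's exact numbers.

```java
import java.util.Random;

// Illustrative sketch of the quantities described in steps 3-5. Names are
// hypothetical; toRho() is a stand-in for the app's V-to-mu-to-rho transform.
public class ExpectedIntersectionSketch {
    public static void main(String[] args) {
        int Q = 8, K = 4;                       // app defaults
        double maxV = 0.8;                      // "Max V in each CM" slider; equals G here
        double minXtalk = 0.1, maxXtalk = 0.4;  // crosstalk sliders
        Random rng = new Random();

        // Generate a sample, as the "Generate New Sample" button does:
        // one randomly placed max-V unit per CM, crosstalk V's for the rest.
        double[][] V = new double[Q][K];
        int[] maxIdx = new int[Q];
        for (int j = 0; j < Q; j++) {
            maxIdx[j] = rng.nextInt(K);
            for (int k = 0; k < K; k++) {
                V[j][k] = (k == maxIdx[j])
                        ? maxV
                        : minXtalk + rng.nextDouble() * (maxXtalk - minXtalk);
            }
        }

        // Green-box value: expected intersection with the code of the closest
        // matching stored input = average, over the Q CMs, of the probability
        // that the max-V unit wins its CM.
        double expected = 0.0;
        for (int j = 0; j < Q; j++) {
            expected += toRho(V[j], maxV)[maxIdx[j]];
        }
        System.out.printf("Expected intersection fraction: %.3f%n", expected / Q);
    }

    // Stand-in transform: an exponent that grows with G sharpens the rho
    // distribution when G is high and flattens it toward uniform when G is
    // low, which is the qualitative behavior steps 4 and 6 describe.
    static double[] toRho(double[] v, double G) {
        double eta = 10.0 * G;   // illustrative gain, not the app's value
        double[] rho = new double[v.length];
        double sum = 0.0;
        for (int k = 0; k < v.length; k++) {
            rho[k] = Math.pow(v[k], eta);
            sum += rho[k];
        }
        for (int k = 0; k < v.length; k++) rho[k] /= sum;
        return rho;
    }
}
```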

This small Java Swing app demonstrates the core principle of Sparsey's algorithm for approximately preserving the similarity of inputs in the similarity of their codes. In this case, the codes are modular sparse distributed codes (MSDCs), which are sparse binary codes, and the code similarity metric is just intersection size, since all codes are constrained to have the same weight (i.e., the same number of 1's). The MSDC code is as follows. The coding field (CF) consists of Q WTA modules, each with K binary units, and a code is just a choice of one winner in each of the Q modules. Thus all codes have weight Q.
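Since a code is just one winner per module, the intersection-size metric reduces to counting the modules in which two codes agree. A minimal sketch, assuming codes are represented as arrays of winner indices (a representation chosen here for illustration, not taken from the app's source):

```java
final class CodeIntersection {
    // Each code is one winner index (0..K-1) per module; similarity is
    // the number of modules whose winners agree.
    static int intersectionSize(int[] codeA, int[] codeB) {
        int count = 0;
        for (int j = 0; j < codeA.length; j++) {
            if (codeA[j] == codeB[j]) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        int[] a = {0, 2, 1, 3, 0, 1, 2, 2};   // Q = 8 winner indices
        int[] b = {0, 2, 3, 3, 1, 1, 2, 0};
        System.out.println(intersectionSize(a, b)); // prints 5
    }
}
```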

The core principle is extremely simple. All you have to do is add noise proportional to an input's novelty (inversely proportional to G) into the process of choosing its code.

Because of the use of MSDCs, an input's novelty, or to be precise, its inverse, which I call "familiarity" and denote G, can be computed extremely quickly, in fact, with constant time complexity. G is simply the average of the max V values in the Q CMs. Since the architecture is fixed for the life of the system, the number of steps needed to compute G remains constant as additional inputs are stored.
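Expressed as code (assuming the V values are held in a Q x K array), the constant-time claim is easy to see: the loop bounds depend only on Q and K, which are fixed for the life of the system, not on the number of stored inputs.

```java
final class Familiarity {
    // G = average over the Q modules (rows) of the max V in each module.
    static double G(double[][] V) {
        double sum = 0.0;
        for (double[] cm : V) {
            double max = cm[0];
            for (double v : cm) max = Math.max(max, v);
            sum += max;
        }
        return sum / V.length;
    }

    public static void main(String[] args) {
        double[][] V = { {0.1, 0.8, 0.2}, {0.7, 0.3, 0.1} }; // Q=2, K=3
        System.out.println(G(V)); // (0.8 + 0.7) / 2 = 0.75
    }
}
```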

A unit's V value is just a normalized version of its input summation. Again, since the architecture is fixed, the number of steps needed to compute a unit's V value remains constant for the life of the system, as does the number of steps needed to compute the V values of all Q x K units comprising the CF.
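The exact normalization isn't specified here, so the following sketch assumes the simplest scheme consistent with the description: dividing a unit's raw input summation by the maximum possible summation, which bounds V to [0, 1].

```java
final class VValue {
    // Hypothetical normalization (this README does not specify the app's
    // exact scheme): raw input summation over the maximum possible summation.
    static double v(double rawSum, double maxPossibleSum) {
        return rawSum / maxPossibleSum;
    }

    public static void main(String[] args) {
        System.out.println(v(4.0, 5.0)); // 0.8, e.g., 4 of 5 active afferents matched
    }
}
```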

Computing the amount (power) of the noise to be added to the process of choosing winners also takes a constant number of steps, and in fact can easily be pre-computed and stored as a table. Actually adding a noise sample to each of the Q x K units' V values also has constant time complexity.

The final selection of the winner in a module is done by transforming the units' V values into probability (ρ) values that reflect the noise and making a random draw from the ρ distribution. This also takes a fixed number of steps (proportional to the log of the number of units, K, in a module).
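One standard way to make that draw in a number of steps logarithmic in K is inverse-transform sampling: accumulate the ρ values into a cumulative array, then binary-search it against a uniform random number. This is a generic sketch of that technique, not necessarily the app's exact implementation.

```java
import java.util.Random;

final class WinnerDraw {
    // Inverse-transform sampling: build the cumulative sums of rho, then
    // binary-search for the first cumulative value >= a uniform draw.
    static int drawWinner(double[] rho, Random rng) {
        double[] cum = new double[rho.length];
        double running = 0.0;
        for (int k = 0; k < rho.length; k++) {
            running += rho[k];
            cum[k] = running;
        }
        double u = rng.nextDouble() * running; // scaling tolerates rounding error
        int lo = 0, hi = cum.length - 1;
        while (lo < hi) {                      // binary search: O(log K)
            int mid = (lo + hi) >>> 1;
            if (cum[mid] < u) lo = mid + 1; else hi = mid;
        }
        return lo;
    }

    public static void main(String[] args) {
        double[] rho = {0.1, 0.2, 0.6, 0.1};  // one CM's win probabilities
        System.out.println(drawWinner(rho, new Random())); // usually prints 2
    }
}
```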
