jACT-R Manual

jACT-R

Dependencies

At the most basic level, jACT-R only requires a Java virtual machine supporting version 1.8 or later. The system utilizes many open source projects such as Apache Commons, OSGi, Mozilla's Rhino, etc. To manage all of these dependencies, jACT-R also utilizes Eclipse, which is both an application platform and an IDE. Not only is the jACT-R IDE built on top of Eclipse, but Eclipse is also used to manage plugins, modules, other dependencies, and launch behavior.

Do you need to use Eclipse to run jACT-R? No. You can run it from the command line, but currently only Eclipse-based binary downloads are available, and the build system lives within Eclipse. If you want a command-line-only setup, it is usually easier to install from Eclipse and manually extract the jar files.

So what do you need to download and install before actually installing jACT-R?

  1. Java 1.8+
  2. Eclipse IDE (for RCP/Plug-in development)

Once these have been installed, you can proceed to the jACT-R installation.

Installation

Assuming you've met all the dependencies, download the latest version of Eclipse for your platform. When choosing a version, select the RCP/RAP developer version, as it provides most of the dependencies in one bundle. 

Start up Eclipse and open the Preferences. Select Install/Update - Available Software Sites. Import the provided listing of dependencies (changing its extension to '.xml'). This tells the update manager where to look for things like Rhino (JavaScript support) and GEF/Zest (for the IDE tools).

Now open Help - Eclipse Marketplace. Search for 'nebula' and select the Nebula Stable Widgets, then Install More. Search for 'jactr' and select all the v2 options (Core, IDE, CommonReality). The non-v2 versions remain available only for legacy support; do not select them. Select Install Now, and you should be good to go.

A note on versions: if you open your update manager preferences, you will see that the update sites for jACT-R look something like jact-r.org/update2/release/org.jactr.update. Releases will be announced and placed in that path. Developer versions, updated more frequently but without announcements, are available if you replace 'release' with 'dev'.

Quick Start

Do you want to read all the instructions? Well, I don't want to write them just yet. How about a quick start instead?

Importing Existing Projects

[updated 08/09/15]

To import a zipped tutorial such as org.jactr.tutorial.unit1.zip, choose Import ... from the File menu of Eclipse. In the Import dialog, keep the pre-selected entry Existing Projects into Workspace and click Next >. Click Select archive file and Browse ... to select the downloaded zip archive. Click Finish to import the project into the workspace.

If you came here from a novice tutorial, you may now proceed with the tutorial. You may also continue to get more info on exporting and importing jACT-R projects.

Screencasts

The following screencasts cover how to export and import both zipped and installed jACT-R projects. They also introduce run configurations and the creation of custom modules.

Importing/Exporting v1 (19:33 min)
Importing/Exporting v2 (04:50 min)

Note: The tutorials referenced in these (old) videos are not installed by default. You can grab them and similar examples directly from GitHub.

Note: The exporter trims out empty folders. You will need to re-add the empty java folder at the root of the project; otherwise you will receive an error for the project, though it is not critical.

Run Configurations

Eclipse uses run configurations to manage and store the options and parameters needed to execute programs, including runs of jACT-R models. This screencast covers the basics of configuring a particular model run, as well as importing models.

Iterative Runs

The following screencast illustrates how one can run batch model executions. This is useful for data collection as well as parameter space searches. As with the run configurations, listeners can be attached in order to influence and record the model behavior.

Remote execution

Note: The below references deprecated tools. A new system for farming remote model runs is in the works. 

The remote execution tools permit the execution of iterative runs over the network. They can be run both locally and remotely, with auto-start options. The screencast below illustrates how to use this feature, which must first be installed.

Best Practices

jACT-R modeling and development will usually go smoothest if you follow some of these best practices. No one size will fit all, but these are some basic foundations that we've found to be the most useful.

 

General

  • Name your projects using the Java reverse-URL package convention (e.g., org.jactr.core). You can provide a human-readable name later, but use this convention for the top level. It makes tracking origin much easier and has deeper JVM/OSGi consequences.
  • Do not copy projects to use as templates. They carry a large amount of hidden metadata that can become confused.
  • Separate your experiments, modules, extensions, and models into separate projects. This helps readability and testing, and makes reuse dramatically easier.

 

Structure

Within jACT-R there are clear functional distinctions between the various elements. At the highest level is the model itself. Contained within the model are both modules and extensions. Modules contribute functionality to the model in a theoretically justified manner, such as the visual, motor, or declarative modules. Extensions are atheoretic contributions that are often used for integration or computational purposes. Both modules and extensions are embedded within the model file and can contribute their own model elements (e.g., chunks, productions, buffers).

There are also instruments, which are elements that attach to models but do not affect their execution. Instruments include loggers, tracers, recorders, and probes. Instruments are contributed and configured via the run configuration.

Finally, there are the sensors, which allow you to interface with simulation or real-time systems. Sensors provide an abstraction (via CommonReality) over percepts and effectors. Using this abstraction opens the door to other architectures interfacing with the same systems (as opposed to coding the interface directly against jACT-R).

Tutorial

These tutorials are meant to augment the standard tutorial content from the canonical Lisp implementation.

The tutorials have been moved here.

Chunks

Chunks are the declarative glue that holds a model together. They are the explicitly inspectable knowledge representation (as opposed to productions). At the architectural level, the declarative module is responsible for their creation, indexing, and management. At the model level, they are created when a new representation is added to a specific buffer and encoded on removal (usually).

Copying

Copying a chunk is frequently needed by buffers that copy on insertion. This is delegated to the IDeclarativeModule. Note: you cannot copy chunks from one model into another; that would be a theoretical brain transplant.

 IModel model = ...
 IDeclarativeModule decM = model.getDeclarativeModule();
 IChunk toCopy = ...
 IChunk copied = decM.copyChunk(toCopy).get();

Creating

  IModel model = ...
  IDeclarativeModule decM = model.getDeclarativeModule();
  IChunkType chunkType = ...
  String chunkName = ...
  Future<IChunk> result = decM.createChunk(chunkType, chunkName);
  IChunk newChunk = result.get();

The IDeclarativeModule explicitly supports asynchronous operation, which is why all of its methods return Futures rather than the relevant type directly. If you don't care about asynchrony, just call the Future's get() method. Otherwise, separate the request from the harvesting to give other threads a chance.
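The same request-then-harvest split can be illustrated with plain java.util.concurrent (an analogue of the pattern, not the jACT-R API itself):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Plain-Java analogue of the request/harvest split: issue the request
// first, do other work, and harvest the Future only when needed.
public class AsyncPattern {
  public static String harvestLater(ExecutorService executor) throws Exception {
    // request: returns immediately with a Future
    Future<String> result = executor.submit(() -> "chunk-created");
    // ... other work can proceed on this thread while the task runs ...
    // harvest: block only when the value is actually needed
    return result.get();
  }

  public static void main(String[] args) throws Exception {
    ExecutorService executor = Executors.newSingleThreadExecutor();
    System.out.println(harvestLater(executor)); // prints "chunk-created"
    executor.shutdown();
  }
}
```

The jACT-R futures behave the same way: submitting the request does not block, and only get() forces synchronization.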

Encoding

 IModel model = ...
 IChunk chunkToEncode = ...
 IDeclarativeModule decM = model.getDeclarativeModule();
 Future<IChunk> result = decM.addChunk(chunkToEncode);
 
 IChunk encoded = result.get();
 
/*
 Merging might mean that the returned chunk is not the exact same reference,
 however, both IChunk objects will point to the same content
*/

 boolean alwaysTrue = encoded.equalsSymbolic(chunkToEncode);

 boolean trueIfNotMerged = encoded == chunkToEncode;

Modifying

Chunks, once encoded, cannot be modified, only copied (IDeclarativeModule.copyChunk()). If a chunk has not yet been encoded, there are two considerations. If you have just created the chunk and no other buffers or modules can have access to it, you can merely manipulate it directly:

 IChunk chunk = ...
 ISymbolicChunk sc = chunk.getSymbolicChunk();
 
 for(ISlot slot : sc.getSlots())
  ((IMutableSlot) slot).setValue(...);

//the same applies for parameters

However, once the chunk is accessible in the system (added to a buffer, referenced by another chunk, etc), you need to be concerned about other threads accessing it. If the chunk is in a buffer, the easiest way to do this is with ChunkUtilities and its associated IChunkModifier interface. The utility class handles the locking and execution time for you.

  IModel model = ...;
  IActivationBuffer goalBuffer = model.getActivationBuffer(IActivationBuffer.GOAL);
  ChunkUtilities.manipulateChunkLater(goalBuffer, new IChunkModifier(){
    public void modify(IChunk chunk, IActivationBuffer buffer)
    {
      ISymbolicChunk sc = chunk.getSymbolicChunk();
      for (ISlot slot : sc.getSlots())
        ((IMutableSlot) slot).setValue(...);
    }
  });

Otherwise, you should use the chunk's write lock to ensure no contention.

 IChunk chunk = ...
 Lock lock = chunk.getWriteLock();
 // acquire before try: if lock() failed inside try, finally would
 // attempt to unlock a lock this thread never held
 lock.lock();
 try
 {
   ISymbolicChunk sc = chunk.getSymbolicChunk();
   for (ISlot slot : sc.getSlots())
     ((IMutableSlot) slot).setValue(...);
 }
 finally
 {
   lock.unlock();
 }

Modules

Theoretical functionality in jACT-R is defined and extended through the use of modules. As modules are developed and refined, their specification and details will be added here.

Declarative

The declarative module interface defines the methods necessary to asynchronously create and encode chunktypes and chunks. It also includes methods for disposing of temporary chunks (or cleaning up chunks during model disposal), and both partial and exact search methods.

An abstract implementation is provided that handles all of the basics (including event firing), requiring only implementations of the actual create, add, copy, dispose, and search methods.

There are specific extension interfaces for theory revisions four and five.

The default implementation conforms to theory revision six.

AbstractDeclarativeModule

The AbstractDeclarativeModule provides most of the nitty-gritty implementation details that are required for the IDeclarativeModule. However, it leaves the theoretically relevant components (creating, encoding, searching of chunks/types) to the client.

DefaultDeclarativeModule

The DefaultDeclarativeModule is the general starting point for any theory specific implementation. It is thread safe, provides all the required functionality, and also provides access to the three most commonly required extension points. Specifically, you can provide it with:

  • IDeclarativeNamer to specify the names of chunks/types on encoding
  • IDeclarativeInstantiator to instantiate the specific class of chunk/types required by the module (defaults to DefaultChunk5 and DefaultChunkType5)
  • IDeclarativeConfigurator to configure the instantiated chunks/chunk types (useful for adding non-default parameters, etc.)

Version 6

The default declarative module for ACT-R 6. It carries some remnants from ACT-R 5 (namely support for similarities). This module uses the IDeclarativeConfigurator to set the various activation equations used by the chunks (i.e., IBaseLevelActivationEquation, ISpreadingActivationEquation, IRandomActivationEquation).

Parameters

  • EnablePartialMatching : default false.
  • ActivationNoise : default 0. Note: a random module must be installed for noise to be applied.
  • PermanentActivationNoise : default 0. See note above.
  • BaseLevelConstant : default 0.
  • MismatchPenalty : default 1.
  • MaximumSimilarity: default 0
  • MaximumDifference : default -1.

Associative Linkage

Associations in jACT-R are handled by the IAssociativeLinkageSystem, which is managed by the declarative module (often pulling the linkage system from a declarative learning module).

The linkage system provides methods for creating links, accessing their respective equations (for updating values) and a parameter handler to transform strings into lists of associative links. In conjunction with their declarative learning module, they will often install event listeners in order to install and update the links correctly.

Declarative Learning

Declarative learning is generally accomplished by an IDeclarativeLearningModule. Since the learning theories evolve faster than the rest of the architecture, there are significant differences between the various implementations and their expectations.

Version 4

DefaultDeclarativeLearningModule4 provides the ACT-R 4 style of base-level and associative learning. It expects that the chunks created by the installed declarative module support the ISubsymbolicChunk4 interface, and that the associative linkage system is the DefaultAssociativeLinkageSystem (which creates Link4 associative links).

Note: Theoretically outdated. Limited support available.

Parameters

  • BaseLevelLearningRate : Decay rate of base level activation (values: numeric>0, NaN (off). default: 0.5)
  • AssociativeLearningRate : Carry over from prior estimates to the current (values: numeric >=0. default 1).
  • OptimizedLearning : Number of reference times to retain in the calculations (values: 0-100. default: 0) Note: 0 implies no optimization, that is, all references are retained.
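The parameters above govern the published ACT-R base-level learning equations. As a sketch of the standard theory (not jACT-R's internal classes):

```java
// Standard ACT-R base-level learning equations, where d is the
// BaseLevelLearningRate (decay). This is a sketch of the published
// theory, not jACT-R's actual implementation.
public class BaseLevel {
  // exact form, used when OptimizedLearning is 0 (all references retained):
  // B = ln( sum_j t_j^(-d) ), with t_j the age of reference j in seconds
  public static double exact(double[] agesSeconds, double d) {
    double sum = 0.0;
    for (double t : agesSeconds)
      sum += Math.pow(t, -d);
    return Math.log(sum);
  }

  // optimized approximation: B = ln(n / (1 - d)) - d * ln(L),
  // with n the number of references and L the chunk's lifetime in seconds
  public static double optimized(int n, double lifetimeSeconds, double d) {
    return Math.log(n / (1.0 - d)) - d * Math.log(lifetimeSeconds);
  }
}
```

For a single reference of age 4s with d = 0.5, both forms agree: ln(4^-0.5) = -ln 2, and ln(1/0.5) - 0.5 ln 4 = 0 for n = 1, L = 4.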

Version 6

DefaultDeclarativeLearningModule6 extends the version 4 module, replacing the associative linkage system and providing an additional parameter, MaximumAssociativeStrength.

Parameters

  • BaseLevelLearningRate : Decay rate of base level activation (values: numeric>0, NaN (off). default: 0.5)
  • MaximumAssociativeStrength : Smax (values: numeric >=0. default 1).
  • OptimizedLearning : Number of reference times to retain in the calculations (values: 0-100. default: 0) Note: 0 implies no optimization, that is, all references are retained.

Imaginal

The imaginal system is used to store intermediate problem state representations. It is typically used in conjunction with the goal module, which retains the goal state.

The imaginal system is time dependent (unlike the goal module), so all imaginal buffer operations take some (configurable) amount of time.

Version 6

DefaultImaginalModule6 is the default implementation of the imaginal system.
It provides the imaginal buffer, and has parameters to manipulate the amount of time add and modify requests take.

Parameters

  • AddDelayTime : How much time does it take to add a new representation to the imaginal buffer (values: numeric>=0. default:0.2)
  • ModifyDelayTime : How much time does a modification take (values: numeric>=0. default:0.2)
  • RandomizeDelaysEnabled : are the delays stochastic? (values:true/false. default: false) (Note: requires random module to be installed)

Intentional (goal) module

The DefaultGoalModule6 provides a simple immediate buffer and little else.

Perceptual Modules

Aural Module

(exists, documentation pending)

Motor module

DefaultMotorModule6 provides the functionality of the manual module in the lisp, but generalizes it over all possible muscles.

(exists, documentation pending)

Visual Module

DefaultVisualModule6 provides most of the visual functionality found within the lisp equivalent. It provides a fully extensible system for adding new feature maps (for visual searches), visual encoders (for converting percepts into chunks), and filters (for search time prioritizing). The time it takes to find or encode visual objects is also configurable.

Those wishing to add new functionality to the system will typically implement the following delegate interfaces or their abstract implementations:

  • IPerceptualEncoder or AbstractVisualEncode : Provides methods to determine whether or not a particular percept (IAfferentObject) is relevant. If so, it is responsible for creating the chunk, updating it, and marking it as invalid if it changes too much.
  • IFeatureMap, AbstractVisualFeatureMap, or AbstractSortedVisualFeatureMap : Provides a new feature map to be searched by the visual system. The class is responsible for pulling features out of a percept, indexing them, and providing search results.
  • IIndexFilter, or AbstractVisualLocationIndexFilter : Responsible for normalizing requests (i.e., resolving module dependent values like nearest=current), defining a weight (relative to other filters) for prioritizing, and filtering out other visual-locations.

To add any of these, merely provide the full class name of the implementor as a parameter name, and set its value to true. Setting the value to false will remove the named class.
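For example, registering a custom encoder might look like the following in the visual module's parameters block (the class name org.example.MyVisualEncoder is purely illustrative, not part of jACT-R):

```xml
<parameters>
  <!-- org.example.MyVisualEncoder is a hypothetical IPerceptualEncoder
       implementor; the full class name is the parameter name -->
  <parameter name="org.example.MyVisualEncoder" value="true" />
</parameters>
```

Setting the value to false would remove that class again.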

Significant Divergences

(from the lisp)

  • visual-location.screen-x/-y are in visual degrees, not pixels.
  • origin of the visual field is in the center, increasing up and to the right (as opposed to the screen based upper-left corner, increasing down and right)
  • object tracking is automatic, so long as the percept does not exceed movement tolerances.
  • visual errors will set the visual buffer's error slot to provide more detailed information as to why the failure occurred. (error-nothing-matches, error-nothing-available, error-changed-too-much, error-no-longer-available, error-invalid-index)

Parameters

  • VisualEncodingTimeEquationClass: class name of implementor of IVisualEncodingTimeEquation. (default: DefaultEncodingTimeEquation).
  • VisualSearchTimeEquationClass: class name of implementor of IVisualSearchTimeEquation. (default: DefaultSearchTimeEquation)
  • EnableVisualBufferStuff: should new visual locations be stuffed into the buffer w/o a search. (default: false).
  • VisualFieldWidth: width in degrees of the visual field (default: 140)
  • VisualFieldHorizontalResolution: How many visual degrees separate unique visual-locations (default : 1)
  • VisualFieldHeight: height in degrees of the visual field (default: 90?)
  • VisualFieldVerticalResolution: How many visual degrees separate unique visual-locations (default : 1)
  • NumberOfFINSTS: how many finsts are available (default: 3)
  • FINSTDurationTime: how long does a finst last (default: 4)
  • NewFINSTOnsetDurationTime: how long does a new finst remain marked as new (default: ?)
  • MovementTolerance: How many visual angles must a percept move in a single interval before it has moved too much. (default: ?)
  • EnableStrictSynchronization: force the architecture to process all incoming perceptual messages before attempting to do a search or encoding. (default: false)

Vocal Module

(exists, documentation pending)

Procedural

The procedural module is the second primary module that all models must have (the other being declarative). It is responsible for managing all productions in the model.

Version 6

DefaultProceduralModule6 provides the primary procedural functionality. The most common points of extension are in conflict set assembly, production instantiation and selection. Each has its own delegate interface (IConflictSetAssembler, IProductionSelector, IProductionInstantiator). This implementation is also responsible for enforcing strict harvesting. For any buffer that has strict harvesting enabled (IActivationBuffer.isStrictHarvestingEnabled()), the module checks to see if the production matching on that buffer also includes a remove. If not, one is added automatically.

The default production selector will choose the production with the highest 'ExpectedUtility', if that is not defined (i.e. no procedural learning module is installed or enabled), it will use 'Utility'.

Parameters

  • NumberOfProductionsFired : How many productions have fired up to this point.
  • ExpectedUtilityNoise : Noise parameter for utility (values: number>=0. default : 0).
  • DefaultProductionFiringTime : How much time a production takes to fire (values: numeric>0. default: 0.05)

Procedural Learning

Procedural learning has undergone much evolution since the early days of ACT-R.

Version 6

DefaultProceduralLearningModule6 provides access to production compilation and utility learning. Utility learning utilizes the IExpectedUtilityEquation; production compilation delegates to the IProductionCompiler. The module is responsible for propagating rewards backward in time. Utility learning is fully functional, and can even support the learning of utility noise (UtilityNoiseLearningExtension). The 'Reward' parameter determines how a production participates in utility learning. A numeric value will result in that production starting a chain of rewards propagating backward after it fires. A value of 'default' or NaN will mark the production as participating in utility learning, but not starting it. 'skip' or -Infinity will result in the production being skipped, while utility learning continues to propagate. Finally, you can mark a production with 'stop' or Infinity to halt reward processing at that production (giving no credit to it or prior productions).

Production compilation, while implemented, is based directly on the lisp version and is not as extensible as desired. The long term goal is to get it using the ICompilableContext which describes buffers in terms of their critical properties for production compilation.

This implementation also has the ability to reward productions selectively based upon which buffers they act upon. Obviously, we want to reward productions that act on the goal and imaginal buffers, but there exists a class of productions for which rewards are less relevant. Goal-free or reflexive productions are often used for basic model behavior that exists below the intentional level. By limiting the productions to be rewarded to those that access the buffers named in the "IncludeBuffers" parameter, you can exclude some productions from rewards without affecting the rest of the reward chain.

Parameters

  • ExpectedUtilityEquation : class name for the implementor of IExpectedUtilityEquation (default : DefaultExpectedUtilityEquation)
  • ProductionCompiler : class name for the implementor of IProductionCompiler (default: DefaultProductionCompiler6)
  • EnableProductionCompilation : should compilation be used (values: true/false. default: false)
  • OptimizedLearning : number of time references (for the production) to be retained. 0 means retain them all. (default: 0)
  • ParameterLearningRate : Discount applied to utility learning (values: numeric >0, NaN (off) default: NaN)
  • IncludeBuffers : a list of buffer names (comma-separated) that productions must match/manipulate in order to be considered for rewarding. (default: "goal, imaginal, retrieval")

Random

IRandomModule provides a centralized location for the generation of random numbers. Any module that relies upon randomness should check for the existence of a random module (e.g. (IRandomModule) model.getModule(IRandomModule.class)) and use the returned instance. If no random module is installed, random behavior should not be enabled. This allows one to easily enable/disable random behavior. By centralizing the random generation, you can more easily guarantee repeatable behavior for a given random seed.
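The check-then-fallback idiom can be sketched in plain Java (a self-contained analogue using java.util.Random in place of the actual IRandomModule lookup):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// Self-contained analogue of the jACT-R idiom (not the actual API): check
// whether a randomness provider is installed and fall back to deterministic
// behavior when it is not.
public class NoiseCheck {
  private final Map<Class<?>, Object> modules = new HashMap<>();

  public void install(Class<?> type, Object module) {
    modules.put(type, module);
  }

  // mirrors (IRandomModule) model.getModule(IRandomModule.class),
  // which returns null when no random module is installed
  public double noisyValue(double base) {
    Random random = (Random) modules.get(Random.class);
    if (random == null) return base;           // no module: noise disabled
    return base + random.nextGaussian() * 0.1; // module present: add noise
  }
}
```

Because the noise path is only taken when the module is present, removing the module from the configuration disables all stochastic behavior at once.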

There is a single default implementation.

Default

DefaultRandomModule

Parameters

  • RandomSeed : default automatically generated.
  • TimeRandomizer : default 3. Sets the randomizer value used for the equal-distribution time randomization (time * ((randomizer-1)/randomizer))
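The stated expression gives the lower bound of the randomized time. A sketch, assuming the canonical ACT-R behavior of drawing uniformly up to a symmetric upper bound t*((n+1)/n) (that upper bound is an assumption here, not stated above):

```java
import java.util.Random;

// Sketch of equal-distribution time randomization with TimeRandomizer n.
// The document's expression t * ((n - 1) / n) is the lower bound; the
// symmetric upper bound t * ((n + 1) / n) is assumed from canonical ACT-R.
public class TimeRandom {
  public static double lowerBound(double t, int n) { return t * (n - 1) / n; }
  public static double upperBound(double t, int n) { return t * (n + 1) / n; }

  public static double randomize(double t, int n, Random r) {
    double lo = lowerBound(t, n), hi = upperBound(t, n);
    return lo + r.nextDouble() * (hi - lo);
  }
}
```

With the default n = 3, a nominal 0.3s duration randomizes to somewhere between 0.2s and 0.4s.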

Retrieval

The basic retrieval module template. All retrieval module implementations must support the RetrievalThreshold parameter, which defines the minimum activation value that a chunk must have to be accessible.

This module class also provides access to the model-wide retrieval time equation, as well as an asynchronous retrieval method. Note: the retrieval method need not actually be asynchronous, but support is provided at this level since retrofitting is such a pain.

There is currently a single default implementation of this module.

Version 6

The default implementation provides access to the basic retrieval functionality, as well as exposes the parameters defined in 4.0 (LatencyFactor,LatencyExponent) and permits partial matching (if the declarative module's PartialMatchingEnabled is true).
Since it can be so useful for those that aren't completely concerned with retrieval behavior, the module also permits the use of indexed retrievals. If you already have a reference to a specific chunk (say as a slot value), you can insert it directly into the retrieval buffer to access it on the next cycle. To enable this, set the EnableIndexedRetrievals to true. Otherwise, +retrieval> =chunkRef will merely create a pattern based on that chunk and search for it, as is the canonical behavior.
Unlike most other buffers, the retrieval buffer provided by this module does not copy the chunk on insertion. It will not, however, permit modifications: (=retrieval> slot newValue) will always fail. This was done to prevent unnecessary copying. Reference counts/access times are still consistent.
Actual retrievals are merely delegated to the declarative module. If the declarative module is asynchronous, so too shall the retrievals.
The module now supports parameter overrides when specifying a retrieval. These overrides hold for the duration of that single retrieval only. See below for options.

Parameters

  • RetrievalThreshold : default 0. -Infinity to disable entirely. If -Infinity is used, 0 will be used for the retrieval time equation so that failed retrievals don't take infinite time to complete.
  • LatencyFactor : default 1.
  • LatencyExponent: default 1. (unpublished parameter).
  • EnableIndexedRetrievals : default false. If true, a direct reference to the chunk is immediately retrieved.
  • FINSTDurationTime: default 3 seconds.
  • NumberOfFINSTs: default 4 chunks.
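LatencyFactor and LatencyExponent feed the canonical ACT-R retrieval latency equation, time = F * e^(-f * A). As a sketch of that published equation (not jACT-R's retrieval time equation class):

```java
// Canonical ACT-R retrieval latency equation parameterized by
// LatencyFactor (F) and LatencyExponent (f): time = F * e^(-f * A),
// where A is the activation of the retrieved chunk (or the threshold,
// for a failed retrieval).
public class RetrievalTime {
  public static double latency(double F, double f, double activation) {
    return F * Math.exp(-f * activation);
  }
}
```

With the defaults F = 1 and f = 1, a chunk at activation 0 takes 1 second to retrieve, and higher activation shortens the latency exponentially. This is also why a RetrievalThreshold of -Infinity is replaced by 0 in the time calculation: otherwise failed retrievals would take infinite time.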

Meta-slots

  • :recently-retrieved : Test for whether this chunk has been recently retrieved (values: true/false). Can be used at request (RHS) or matching (LHS).
  • :retrievalThreshold : Temporarily use the specified retrieval threshold (any number, NaN, (-)Infinity)
  • :partialMatch : Temporarily enable/disable partial matching (true/false)
  • :indexedRetrieval : Temporarily enable/disable indexed retrievals (true/false)
  • :accessibility : determine which activation value to use for the retrievability of a chunk ("summation", "base level", "spread", "max", "min", default : "summation")
  • :retrievalTime : determine which activation value to use for retrieval time calc (same as :accessibility)

Resetting declarative finsts

The retrieval module only resets declarative finsts with a clear request. 

In Lisp parlance:

 +retrieval>
   isa clear
   all true

Extensions

Extensions provide non-theoretic functionality to models. They may contribute chunks, types, and productions - but do not normally contribute buffers. Below is a listing of some of the existing extensions, what they do and where they can be found.

Performance extensions

These extensions are used to tweak the performance of model runs.

Optimized conflict resolution

DefaultProceduralModule6 has support for pluggable components that influence conflict set generation, production instantiation, and instantiation selection. This enables one to optimize some of the most expensive parts of cycle execution without regard to the underlying storage mechanism. By default these methods are fairly stupid. The selector just takes the first one (assuming they are sorted by utility). The instantiator just attempts a blind instantiation of all possibilities given the current buffer contents. The conflict set generator simply grabs all the potentially relevant productions (based on the chunktypes in buffers).

Given that performance profiling has shown that instantiation is actually the most expensive phase (given that so many productions will fail), it was an obvious target for some optimizations. The org.jactr.extensions.cached.procedural.CachedProductionSystem tracks and caches instantiation failures. When the condition that resulted in the failure changes, the cache is invalidated, and the next time the production is in the conflict set, it will attempt to instantiate it. If at instantiation, there is a cached failure for that production, it is returned instead. All of this was done by just providing a new IProductionInstantiator.

How much of a performance improvement can you get? Well, that depends on the nature of your model, but a ballpark figure is anywhere between a 35-90% improvement in real cycle time. However, if your model has very few productions or is retrieval bound (i.e., you go from retrieval to retrieval more than through other productions), you'll be on the lower end. We have yet to see a circumstance where the overhead results in slower runs.

How can you tell? Turn on profiling.

Poor-man's Profiling

jACT-R has some basic performance profiling built in. To enable it, add -Djactr.profiling=true to the VM arguments in your run configuration. Be sure to remove all instruments and IDE tracers when running. When the model finishes running, you'll see a print out like this:

 

 Total actual processing cycles       13057
 Simulated processing cycles          13057
 Total actual time                    13.460383s
 Simulate time                        499.0947763613296s
 Average sleep time (wait for clock)  0.26255740215976103ms
 Average time processing events       0.07167572949375814ms
 Average production cycle time        0.7668560925174236ms
 Average production time + waits      1.0308940032166656ms
 Realtime factor                      37.07879459011899 X
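The realtime factor in this printout is simply simulated time divided by actual wall-clock time:

```java
// Realtime factor = simulated seconds / actual wall-clock seconds,
// matching the figures reported by the profiler printout.
public class RealtimeFactor {
  public static double factor(double simulatedSeconds, double actualSeconds) {
    return simulatedSeconds / actualSeconds;
  }
}
```

For the run above, 499.0947763613296 / 13.460383 gives the reported factor of roughly 37.08.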

If you enable the CachedProductionSystem (more on that in a bit), you can expect something like this:

 

 Total actual processing cycles       13163
 Simulated processing cycles          13163
 Total actual time                    10.886717s
 Simulate time                        503.54204421495893s
 Average sleep time (wait for clock)  0.312651371267948ms
 Average time processing events       0.06956552457646432ms
 Average production cycle time        0.5131492061080302ms
 Average production time + waits      0.8270695889994683ms
 Realtime factor                      46.25288268400463 X

Of particular interest is the change in average production cycle time. This includes conflict set assembly, instantiation, selection, and the posting of new events. The improvement here was only 35%, but that is because this particular model has many competing productions (on average each cycle has 3); the fewer competing productions you have, the bigger your performance improvement.

Using the Extension

The CachedProductionSystem is included in the default org.jactr.tools bundle. To enable it, include the following in your extensions block (which is after the modules block):

      
    <extensions>
      ...
      <extension class="org.jactr.extensions.cached.procedural.CachedProductionSystem">
        <parameters>
          <parameter name="EnableCaching" value="true" />
          <parameter name="ValidateInstantiations" value="false" />
        </parameters>
      </extension>
      ...
    </extensions>

Caveat:

When you first use it, set ValidateInstantiations to true. This will perform the normal caching operations and still attempt to instantiate the production. If there is a discrepancy, an error will be logged. This is just to verify that it does work for all cases, until a more formal test can be devised (at which time this code will be rolled into the main distribution).

Instruments

Instruments are classes that are intended to inspect running models in some way. They are to be passive, in that they should have no influence on the model's behavior. Typically they are used for logging, tracking, or sending data to file.

Default Model Logger

This instrument intercepts the log messages and sends them to a named file. After the file reaches a maximum size, it is backed up. The number of backups can be configured. By default, this instrument attaches to all the models in the runtime. When configured programmatically, you can specify which log streams go to which file. In the IDE, all the streams go to a single file.

Parameters

  • MaxFileSize: maximum size in megabytes before the file is backed up and a new file is created.
  • NumberOfBackups: the number of backups to retain.
  • all: the filename to send all the streams to.
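Configuration follows the same parameters pattern used elsewhere in the environment files. A sketch (the instrument wrapper element and class name here are illustrative assumptions; check your installation for the actual ones):

```xml
<!-- illustrative sketch only: the wrapper element and class name are assumptions -->
<instrument class="org.jactr.tools.logging.DefaultModelLogger">
  <parameters>
    <parameter name="MaxFileSize" value="10" />       <!-- megabytes per file -->
    <parameter name="NumberOfBackups" value="3" />
    <parameter name="all" value="model-trace.log" />  <!-- all streams to one file -->
  </parameters>
</instrument>
```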

Model Recorder

Instrument that records the runtime's models at the start and stop of execution. The format of the saved models can be configured to use any of the installed syntaxes (e.g., jactr, lisp). The location of the files can also be specified.

Parameters

  • SaveAsExtension : extension of the code generator to use. (default: jactr)
  • StartDirectory : local working directory path to save the start state models to. Will be created if missing. (default: start/)
  • StopDirectory : local working directory path to save the stop models to. Will be created if missing. (default: stop/)
  • TrimModuleContributions : Don't include the injected content from modules and extensions in the generated code. (default: true).
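As with the other instruments, configuration is just a parameters block. A sketch, assuming an instrument entry analogous to the extensions block (wrapper element and class name are assumptions):

```xml
<!-- illustrative sketch: wrapper element and class name are assumptions -->
<instrument class="org.jactr.tools.recorder.ModelRecorder">
  <parameters>
    <parameter name="SaveAsExtension" value="jactr" />
    <parameter name="StartDirectory" value="start/" />
    <parameter name="StopDirectory" value="stop/" />
    <parameter name="TrimModuleContributions" value="true" />
  </parameters>
</instrument>
```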

Runtime Throttler

Whether you are tracing or trying to be nice to other programs, sometimes models just run too fast. The runtime throttler allows you to specify a minimum real cycle time for the models. If the model completes a cycle too quickly, the model thread is put to sleep for the remainder of the time. The default value of MinimumCycleRealTime is 0.05 s, i.e. real time (matching the default 50 ms production cycle).
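For example, to cap the model at real time (the class is org.jactr.tools.throttle.RuntimeThrottler, mentioned later in this manual; the instrument wrapper element shown is an assumption):

```xml
<instrument class="org.jactr.tools.throttle.RuntimeThrottler">
  <parameters>
    <!-- minimum real seconds per model cycle -->
    <parameter name="MinimumCycleRealTime" value="0.05" />
  </parameters>
</instrument>
```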

XML Model Logger

An alternative logger that routes messages to an XML file. This logger attaches to all the models in the runtime.

Parameters

  • FileName : the xml file to save to.
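A minimal configuration sketch (wrapper element and class name are assumptions):

```xml
<!-- illustrative sketch: wrapper element and class name are assumptions -->
<instrument class="org.jactr.tools.logging.XMLModelLogger">
  <parameters>
    <parameter name="FileName" value="model-trace.xml" />
  </parameters>
</instrument>
```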

CommonReality

What are CommonReality, Sensors, and Agents?

Cognitive modeling is increasingly focused on embodiment, not only to improve model fidelity, but because of the additional affordances and cognitive short-cuts it provides. In ACT-R this is accomplished via the perceptual/motor modules (i.e. visual, aural, etc.). But that is perception on the cognitive side; we still need a means of accessing the sensation itself. Whereas the Lisp implementation uses devices, jACT-R takes a step further back.

CommonReality

CommonReality is a simulation management system. It was built to provide a general mechanism to manage agents (i.e. models, actual humans, AI systems) and sensors (i.e. simulation systems), their communication, time management, and control. It was designed with the intent that all these various pieces could be running anywhere, on any hardware. Network latency and the added layer of abstraction do limit simulation speed, but I believe the added flexibility is worth it.

At the heart of CommonReality is a system that acts as a central broker. It controls the clock (which can be managed in a configurable manner), control flow and state changes, and communication among participants (though point-to-point communication is also possible).

Agents

Agents are the cognitive entities in a simulation. They are the consumers of afferent objects, aka sensations, generated for them by sensors. Agents can also make use of a combination of efferent objects and their commands, which permit the agent to execute actions in the simulation (or reality). The agent handles the communication with CommonReality and the translation of afferent and efferent objects for the cognitive model, in this case jACT-R.

The structure of an afferent object is basically just a unique identifier and a series of properties. The set of properties is modality specific. There are recommended properties for the visual and aural modalities. Sensors are free to implement the recommended properties, but may supply additional perceptual information. As the system grows and evolves, a discovery mechanism will be developed, but for now it requires coordination on both the sensor and agent sides.
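The idea can be sketched as a minimal data structure. This is an illustration only, not the actual CommonReality API; the class, method, and property names are invented:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of an afferent object: a unique identifier plus
// a bag of modality-specific properties. Not the actual CommonReality API.
public class AfferentObjectSketch {
    private final String identifier;
    private final Map<String, Object> properties = new HashMap<>();

    public AfferentObjectSketch(String identifier) {
        this.identifier = identifier;
    }

    public String getIdentifier() { return identifier; }

    public void setProperty(String name, Object value) {
        properties.put(name, value);
    }

    public Object getProperty(String name) {
        return properties.get(name);
    }

    public static void main(String[] args) {
        // a sensor might publish a visual percept like this (property names invented)
        AfferentObjectSketch percept = new AfferentObjectSketch("visual-7");
        percept.setProperty("visual.type", "text");
        percept.setProperty("visual.token", "hello");
        System.out.println(percept.getIdentifier() + " " + percept.getProperty("visual.token"));
    }
}
```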

Sensors

Sensors provide a perspective on some element(s) of a simulation to the agents it is aware of. Most sensors will provide information to all agents, but this is not strictly required. For any sensation, the sensor will request new afferent objects to represent the sensation to each of the agents. It will then update those objects and communicate those changes. When the sensation is no longer available, it will be removed. Some sensors may provide efferent objects, and templates to create proper commands to control them. There are a handful of sensors already available, and hopefully with collaborative effort there will be many more.

The XMLSensor merely translates a configuration file into raw afferent objects. It does so based on timed or programmatic events. Because it can provide any afferent object, it is ideal for simplified testing. However, more complex situations require other options.

DefaultAuralSensor provides a rough skeleton for programmatically providing auditory information. This is useful as a starting point for other sensors. DefaultSpeechSensor serves a similar purpose.

The DefaultKeyboardSensor provides hands and fingers to models that can then control the actual keyboard (by using java.awt.Robot). Mouse support is on the to-do list.

Finally, there is an interface for Player/Stage that is continually evolving.

A framework exists for aiding in the development of more complex motor control sensors. Best practices for developing sensors are still evolving, but the provided sensors will get you started.

A tutorial is available covering development with CommonReality. It can be found in the tutorials section.

Divergences

Here I list the general differences between the canonical and Java implementations of ACT-R. Since jACT-R has been coded largely in isolation from the canonical code, there are many differences in design decisions.

I am fully committed to theoretical compatibility, but not implementation. In other words, equations and key behaviors are the primary goal (i.e. activation equations and chunk immutability), not consistency in trivial areas (i.e. exact chunk structures, slot names, or visual search priorities). But, I will endeavor to ensure minimal hassle when migrating models from one version to another (i.e. automatic translation/transformation).

Productions

Overwrite action

Canonical ACT-R supports an overwrite buffer operation that performs an end-run around the managing module and also prevents the encoding of previous contents. =goal> =chunk will replace the contents of the goal buffer with =chunk without encoding the current contents.

jACT-R does not directly support this operation. You can set, add, remove, modify or request, and that's it. You cannot circumvent the encoding. The best you can do is use the set operation, which will move a chunk from one buffer to another (in 0 time), but the prior contents will still be encoded. The Lisp parser will transform overwrites into sets automatically and notify you that this has been done.

Explicit state reset

In canonical ACT-R, if a retrieval fails, the only way to reset the error state is to complete a successful retrieval; there is no mechanism to explicitly reset it. While some modules do have one (+visual> isa clear), it is not standard. jACT-R solves this with an explicit remove operation (-retrieval>): if there is nothing in the buffer, the remove resets its state flags.
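In production syntax, a sketch of recovering from a retrieval error (the production name is illustrative):

```
(p recover-from-retrieval-error
   ?retrieval>
      state error
==>
   -retrieval>        ; buffer is empty after the failure, so this resets its state
)
```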

Retrieval module

Indexed Retrievals

The retrieval module does permit indexed (i.e. immediate) retrievals of chunks you already have references to. However, this must be explicitly enabled with the EnableIndexedRetrievals parameter.
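Enabling it is a matter of setting the module parameter; this sketch assumes the usual parameters-block pattern:

```xml
<!-- inside the retrieval/declarative module's parameters block -->
<parameter name="EnableIndexedRetrievals" value="true" />
```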

Asynchrony

Retrievals can also be performed asynchronously if the declarative module is asynchronous.

Declarative FINSTS

Declarative FINSTs are controlled by two parameters: FINSTDurationTime (3 seconds) and NumberOfFINSTs (4 chunks). :recently-retrieved (null, false, true) can be used in retrieval requests.
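For example, a request that excludes recently retrieved chunks (the chunk-type 'fact' is illustrative):

```
+retrieval>
   isa fact
   :recently-retrieved false   ; only consider chunks not retrieved within FINSTDurationTime
```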

Partial Matching

The canonical ACT-R partial matching allows completely mismatching chunks to be retrieved, assuming they have the highest activation. jACT-R requires at least one feature in the retrieval spec to match. This is a functional, not theoretical, design decision: it matches modelers' expectations better and is also more performant (since it does not require testing against all of declarative memory).

Visual System

Coordinate System

Quick, draw a two-dimensional graph. Did you put the x-axis on the bottom, increasing to the right, with the y-axis on the left, increasing upwards? You sure didn't draw it with the y-axis decreasing downwards. Now draw a graph with a natural center point. Yeah, you get the picture.

Canonical Lisp uses a computer-screen-based coordinate system: +x to the right, +y down. While the slot names are the same (for now), the coordinates in jACT-R are retinotopic visual angles (degrees): 0,0 is at the center, +x to the right, +y up.
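As an illustration of the difference, here is a sketch converting screen-style pixel coordinates (origin top-left, +y down) to centered visual-angle degrees (+y up). The resolution and degrees-per-pixel constants are arbitrary assumptions for the example, not jACT-R defaults:

```java
// Sketch: screen pixels (origin top-left, +y down) to centered
// retinotopic degrees (origin at center, +y up). All constants are
// illustrative assumptions, not jACT-R defaults.
public class RetinotopicSketch {

    static final int SCREEN_WIDTH_PX = 1400;      // assumed display resolution
    static final int SCREEN_HEIGHT_PX = 900;
    static final double DEGREES_PER_PIXEL = 0.1;  // assumed visual-angle scaling

    /** Map a pixel position to centered visual-angle degrees. */
    static double[] toDegrees(int px, int py) {
        double x = (px - SCREEN_WIDTH_PX / 2.0) * DEGREES_PER_PIXEL;
        // flip the vertical axis: screen +y is down, retinotopic +y is up
        double y = (SCREEN_HEIGHT_PX / 2.0 - py) * DEGREES_PER_PIXEL;
        return new double[] { x, y };
    }

    public static void main(String[] args) {
        double[] center = toDegrees(700, 450);   // screen center maps to 0,0
        double[] topLeft = toDegrees(0, 0);      // top-left maps to negative x, positive y
        System.out.println(center[0] + "," + center[1]);
        System.out.println(topLeft[0] + "," + topLeft[1]);
    }
}
```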

Recycled Locations

From an informal survey of modelers, I found that almost none of us use the visual-location chunks for anything other than to encode some object. Because of this limited use and the fact that during the lifetime of a model there can be thousands of these chunks created, jACT-R actually recycles visual-locations.

There is at most one visual-location for each possible retinotopic position (as defined by the visual field size and resolution). Its x and y locations are fixed, but the remaining slot values are mutable by the system. The mutable slot values have a relatively limited lifespan. They are valid only after a visual search until the next visual search is started. In this way the explosive growth of visual locations is eliminated, which is particularly important when operating for long periods in a truly embodied environment.

FINSTS

Canonical ACT-R marks FINSTs at the visual-location level. This makes sense assuming that the visual scene is limited. However, if you have multiple objects at the same location (due to visual resolution issues or overlap), you can easily miss objects, and you can possibly end up in a situation where some objects can never actually be encoded.

To prevent this, jACT-R assigns FINSTS at the object level. In simple scenes, the behavior is exactly the same. In more complex scenes (with overlapping objects), this allows the visual system to differentiate between attended and unattended objects at the same location. The semantics are exactly the same for making a visual-location request.

The only incompatibility this introduces is with the visual-location buffer query. Canonical ACT-R supports ?visual-location> attended {t|new|nil}. jACT-R does not support this query at all. I'm looking at implementing it in the future, but it will be based on the attended status of all the objects at that location.

If you require this query, contact me and I'll see about bumping up its priority.

Movement Tolerances

Canonical ACT-R uses movement tolerance to allow the model to encode a visual object after it has moved from the location returned from the visual search. jACT-R uses this tolerance to also deal with object tracking. If the object is moving too fast, it will exceed the tolerance. When this occurs, the track is lost, the visual buffer state is set to error, but the chunk is maintained in the buffer. (I'm still trying to figure out whether it should be removed from the buffer or not)

Object tracking

So if I've attended to an object why do I have to explicitly tell the visual system to follow it? When something you're looking at moves, it's hard not to follow it. In jACT-R all attended objects are automatically tracked. The only benefit of using the object tracking mechanism is that it stuffs the updated visual location of the object into the visual-location buffer.

If the parameter EnableStickyAttention is true, attended objects will remain attended as long as the object is in the visual field. No amount of movement will shake it.

Asynchrony

All perceptual/motor modules are already asynchronous.

Motor System

There is no manual system, just a global motor system. The motor system is a drop-in replacement with the same basic functionality, plus a few bells and whistles.

Muscle-level parallelism

If EnableMuscleLevelParallelism is true, motor programs can be prepared and executed in parallel so long as they don't overlap in the muscle groups used.
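Enabling it is a single module parameter; placement is assumed to follow the usual parameters-block pattern:

```xml
<!-- inside the motor module's parameters block -->
<parameter name="EnableMuscleLevelParallelism" value="true" />
```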

Compound commands

The motor buffer can also contain chunks derived from compound-motor-command. Compound motor commands allow you to execute primitive motor commands in a coordinated manner, even though they are not actual motor commands.

Preparation

Canonical ACT-R provides a separate chunk-type to specify a motor command to be prepared in advance and then executed later. The problem with this mechanism is that it locks you into a limited set of predefined movements (specifically keyboard and mouse movements), preventing modelers from contributing their own movement types. jACT-R instead uses a meta-slot (:prepare-only) which can be added to any movement request. Using this, any movement command (past or future) can be prepared in advance and then executed with the execute chunk-type request.
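For example, preparing a movement in advance and executing it later (the movement chunk-type and its slots are illustrative):

```
+motor>
   isa punch            ; movement chunk-type and slots are illustrative
   hand right
   finger index
   :prepare-only true

;; ... later, in another production ...

+motor>
   isa execute
```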

Common Issues

"Could not load model from ..." But it was working a second ago!

This is a strange problem that I have yet to track down. It appears to be a bug in Eclipse's caching of classpath data. One instant everything is working fine, but suddenly (and forever after) the classpath gets corrupted and the runtime is unable to find your model even though it is on the classpath. Some digging turned up a temporary fix.

  1. Quit Eclipse.
  2. Go to ${workspace}/.metadata/.plugins/
  3. Delete org.eclipse.pde.core
  4. Restart; it will take a little longer, but all should be good.

This will, however, result in the loss of your Target Platform, which can be restored via the Target Platform preferences.

If the problem still persists, it suggests that there is a lower-level dependency resolution failure for an optional dependency. These are not normally reported because they occur at a lower level. Check ${userHome}/.jactr/configuration/${projectName}/${runConfiguration}-actual/${timestamp}.log for any errors.

How do I keep a model running when there is no goal?

Setting the model's EnablePersistentExecution parameter to true will enforce conflict resolution when there is no goal and no pending events.

How do I make the model skip cycles where nothing can happen?

Models that are entirely internal (i.e., they are not reactive to the external world), can use the internal event queue to determine when it can possibly fire again. To enable this, set the model's EnableUnusedCycleSkipping parameter to true.

Be warned, though - if the model is connected to another system for its perceptual world, the model will skip cycles, making the detection time wonky.
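Both behaviors are controlled by model parameters; this sketch assumes they sit in the model's parameters block like any other parameter:

```xml
<!-- in the model's parameters block -->
<parameter name="EnablePersistentExecution" value="true" />   <!-- resolve even with no goal -->
<parameter name="EnableUnusedCycleSkipping" value="true" />   <!-- purely internal models only -->
```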

Model and IDE are maxing out my processor and sucking up memory!

Ah, now this is a problem that I run into regularly. Here's the deal: it's not a bug and it's not a memory leak. Almost all communication in jACT-R is asynchronous. This allows the model to run instrumented with (relatively) little performance impact. The problem arises when the model is running too fast (and producing too much data) for the IDE to keep up with. When this happens, network buffers rapidly start filling up and memory is consumed at an alarming rate. The two processes then start butting against the resource limits and performance crawls to a halt, potentially with out-of-memory errors.

This problem most often occurs when persistent execution is on and cycle skipping is off. Basically, your model is plowing through a lot of worthless cycles and generating a pile of equally worthless data. The IDE chokes on the volume and kaboom.

Your best fix is to enable the SynchronizationManager instrument in the run configuration. This forces message synchronization at a periodic (and configurable) rate.

Running embedded

Ok, so you have built your awesome model, tested it, hardened it, and it looks amazing. Now you decide it is time to embed the model in your game engine, or whatever. How do you trim the bloat of the IDE and get just what you need into your architecture?

None of the jACT-R core or modules actually depends upon Eclipse or OSGi; rather, it is the launchers that do. To effectively embed, you just need to place the core dependencies on the classpath and build a launcher suitable for your needs.

(note: I am working on some code to make this easier, but it is far from ready. If you want the code as it is, just ask)

The simplest solution is just to use org.jactr.entry.Main as a starting point. It's the same entry point the Eclipse launcher uses, but only after it has dynamically hooked up all the extensions (i.e. parsers, compilers, automatic code injection).

However, if you do require custom parsers or any of the other dynamically resolved elements, you will have to wire them up yourself, with calls such as ModelParserFactory.addParser(String extension, Class<? extends IModelParser> clazz) or ASTParticipantRegistry.addParticipant(String moduleClassName, IASTParticipant participant).

Additionally, you will likely require finer-grained control of the run cycle. org.jactr.tools.throttle.RuntimeThrottler (sorry, no javadoc yet) provides an example of how to limit the cycle time. It can also be adapted to run for one cycle and wait until signaled to run another.

Running from command line

jACT-R did support command line execution back in the day. It was meant to make it easier to execute in "lighter" environments (i.e., within a game engine, controlling a robot, on a cell phone, whatever). As time progressed, it became clear that a general solution for that was at odds with the primary use case: modelers trying to build and analyze their particular phenomenon.

The primary use case relies heavily upon OSGi and Eclipse's dynamic and modular tools. These, however, make traditional command lines much more difficult (and unwieldy). OSGi and Eclipse dynamically check project dependencies and securely set up the various classloaders and resource paths, as well as the extensions (parsers, compilers, modules, extensions, etc.) and the actual runtime environment.

This does not mean that you can't run from the command line. You can, but it is a really bad idea to use that mode for the development of a model. Only after you have built, tested, and hardened your model in the IDE should you even think about embedding or command line execution.

Let's assume your model is the awesomesauce and you want to run it from the command line now. First, you will need the environment.xml file that the IDE generated in the run directory. This tells the org.jactr.entry.Main entry point how to assemble the environment. You will want to strip out the attachments org.jactr.tools.async.controller.RemoteInterface and org.jactr.tools.tracer.RuntimeTracer, as they are only for IDE runs.

Next, you need the command line that Eclipse uses. To do this, run your model, switch to the debug view and bring up the properties of the running processes. It will include the actual command line. It looks something like this:

/System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home/bin/java -Dorg.apache.commons.logging.log=org.apache.commons.logging.impl.Log4JLogger -Dlog4j.configuration=file:/Users/harrison/Archive/Development/workspaces/modeling-dev/mil.navy.nrl.ncarai.associative.fan/jactr-log.xml -Dfile.encoding=MacRoman -Xbootclasspath/p:/Users/harrison/Archive/Development/Apps/3.7/eclipse-model-dev-32/plugins/org.eclipse.jdt.debug_3.7.100.v20120529-1702/jdi.jar -classpath /Users/harrison/Archive/Development/Apps/3.7/eclipse-model-dev-32/plugins/org.eclipse.equinox.launcher_1.3.0.v20120522-1813.jar org.eclipse.equinox.launcher.Main -application org.jactr.launching.application -data /Users/harrison/.jactr/workspaces/mil.navy.nrl.ncarai.associative.fan -configuration file:/Users/harrison/.jactr/configuration/mil.navy.nrl.ncarai.associative.fan/Embodied-Fan-actual/ -dev file:/Users/harrison/.jactr/configuration/mil.navy.nrl.ncarai.associative.fan/Embodied-Fan-actual/dev.properties -name jACTR -nosplash -os macosx -arch x86 -e file:/Users/harrison/Archive/Development/workspaces/modeling-dev/mil.navy.nrl.ncarai.associative.fan/runs/8.16.12/9.49.28AM/environment.xml

There's a lot of mess in there. Let's break it down.

  • -Dorg.apache.commons.logging.log=org.apache.commons.logging.impl.Log4JLogger merely tells the runtime what logger to use. This is optional and controlled via the Logging/Trace tab of the run configuration window.
  • -Dlog4j.configuration=file:/Users/harrison/Archive/Development/workspaces/modeling-dev/mil.navy.nrl.ncarai.associative.fan/jactr-log.xml this is the logging config file used, again optional.
  • -Dfile.encoding=MacRoman -Xbootclasspath/p:/Users/harrison/Archive/Development/Apps/3.7/eclipse-model-dev-32/plugins/org.eclipse.jdt.debug_3.7.100.v20120529-1702/jdi.jar -classpath /Users/harrison/Archive/Development/Apps/3.7/eclipse-model-dev-32/plugins/org.eclipse.equinox.launcher_1.3.0.v20120522-1813.jar org.eclipse.equinox.launcher.Main -application org.jactr.launching.application This is the main entry point for the jactr-in-eclipse application. Jar files will be dependent upon the version of eclipse you have installed, and where you have it installed.
  • -data /Users/harrison/.jactr/workspaces/mil.navy.nrl.ncarai.associative.fan -configuration file:/Users/harrison/.jactr/configuration/mil.navy.nrl.ncarai.associative.fan/Embodied-Fan-actual/ -dev file:/Users/harrison/.jactr/configuration/mil.navy.nrl.ncarai.associative.fan/Embodied-Fan-actual/dev.properties Anything in the ${home}/.jactr/ directory tree is where Eclipse stores cached data to accelerate subsequent launches. The contents of these directories are generated at runtime, unless the data already exists. The dev.properties file tells the environment how to combine your workspace projects (i.e., your model) and installed bundles.
  • -name jACTR -nosplash -os macosx -arch x86 These are platform specific flags to tell the runtime what to expect (line endings, path separators, etc).
  • -e file:/Users/harrison/Archive/Development/workspaces/modeling-dev/mil.navy.nrl.ncarai.associative.fan/runs/8.16.12/9.49.28AM/environment.xml this tells the jACT-R environment to build the environment and wait for the IDE to signal the start. Using -r, instead, will build the environment and immediately execute it.

That will get you up and running on the command line, but again, I don't recommend it. The IDE provides you a ton of tools that will help you understand your model. Command line ninjas will always be at a disadvantage here.

Using IDE

The IDE has been built upon Eclipse, a powerful, multi-language IDE. It brings with it many features:

Quick Tips

Project management

  • Revert edits
  • Use source control (i.e. CVS, SVN)
  • Import/Export projects

User Interface

  • Change code or annotation (e.g. error, warning, reference, bookmark) colors?
  • Use collaborative editing?

Tooling

The IDE includes many useful tools both for tracing/inspecting model execution and for recording information.

Broadly speaking, there are two classes of tools: instruments and tracers.

Instruments

Instruments are simple tools that can be attached at runtime to any model or model element and are intended to modify or record behavior. They do not interface with the IDE.
Instruments can be easily built by implementing the IInstrument interface. Instruments can be made visible to the IDE by using the instruments extension point.

Tracers

Tracers are special instruments that have a GUI component installed in the IDE. These are a tad more complicated, but certainly within the reach of an experienced Java programmer.
Tracers typically record and send massive amounts of data to the IDE, so care must be taken to handle the information in a responsive manner that doesn't render the IDE unusable.

General Probe

The general probe is a combination of two tools that provides a general capacity to track arbitrary parameters over time. On the runtime side, org.jactr.tools.grapher.core.GeneralProbe selects model elements (productions, chunks, types, modules, buffers, etc.) based on regular-expression name matches (quick and easy to implement, but a little wonky if you don't know reg-ex) and attaches probes to them. The probed data is then sent to the IDE, where it is rendered in the jACT-R Probes view.

Probe Configuration

GeneralProbe reads a user-specified probe descriptor file that defines which model elements should be tracked and which parameters recorded. These probe files are usually stored within the configuration/ directory so that they are accessible on the classpath. The following is a useful template for building your own custom probes:

<!-- how frequently should we poll the probes (seconds) -->
<instrument window="1">
  <!-- probes can be grouped together so that they share a common axis -->
  <group id="harvestors">
    <!-- this will just select a model named 'sub'; .* will select all -->
    <model pattern="sub">
      <!-- harvest productions -->
      <!-- just probe productions with '-harvest' at the end -->
      <production pattern=".*-harvest">
        <!-- just record the expected utility -->
        <probe pattern="ExpectedUtility" poll="true" />
      </production>
    </model>
  </group>
</instrument>

You can also probe chunks:

<chunk-type pattern="configural">
  <chunk pattern="configural-.">
    <probe pattern="BaseLevelActivation" poll="true"/>
  </chunk>
</chunk-type>

The above will only probe configural chunks that match the name 'configural-?'. The chunk-type tag is optional.

Modules can also be probed:

<module pattern=".*Retrieval.*">
  <probe pattern=".*Threshold" poll="true"/>
</module>

Using probes

Once the probe file is defined, you merely have to enable General Probe in the Logging/Trace tab of the run configuration and provide it the probe file.


As the model runs, the probes view will graph the values over time. The graphs aren't good enough for publication, but as a first-pass monitor, its value cannot be overstated. You'll understand immediately why a production is losing out in conflict resolution or why a chunk is failing to be retrieved.


The viewer also has two export options: PNG image file or CSV data output.

Model Recording and Playback

The IDE supports saving and replaying runtime data. Specifically, on the Logging/Trace tab of the run configuration you can select to send tracer data to the IDE and/or to file (but at least one). This data will be saved to a folder named sessionData. Right-clicking on this folder in the IDE gives you the option jACT-R->Playback. You can then step through the data in the log view or probe viewer.

Caution: Session data can consume a large amount of space. Clean up after yourself.

Visicon Viewer

To help debug models that rely upon the visual system, jACT-R includes a graphical visicon viewer. It's a little rough around the edges right now, but it is certainly functional.

Using Visicon Viewer

To use, simply enable the visicon tracer in the run configuration for your model. Once the model starts, the visicon viewer (lower-left corner of the IDE) should be updated.


Currently, color coding is used to signal search and encoding results. A red border means the object there has been found; solid red means it is the current focus of attention. If you mouse over an object, you will see a listing of its primitive features.

Future updates will include better formatting of the information.