Developing for CommonReality: Extending BaseSensor

When a model's interaction with the physical characteristics of some device becomes the focus, merely duct-taping the model to a system (such as with an IExtension) will not do. In these situations, it is time to develop an interface within CommonReality, jACT-R's abstraction layer. Briefly, CommonReality divides a simulation (or real system) into participants, specifically agents (e.g. models, AIs, or real people) and the inappropriately named sensors (e.g. sensor systems, effectors, simulators). jACT-R includes an agent interface to CommonReality, and hopefully the future will see interfaces for other cognitive architectures. There are existing sensors for keyboard control, generic speech generation, seeing Java GUIs, simple static simulations, seeing and moving within PlayerStage simulations, and even controlling NRL's MDS robots.

The CommonReality system manages the participants' life-cycles, configuration, time synchronization, and messaging, allowing participants to be executed in various configurations. The underlying protocols and transports are all configurable, so the system is not tied to a single VM, machine, or language. Unfortunately, building sensors (or agents) is a fairly involved process, requiring many different object managers (afferent, efferent, simulation object), listeners, and threads of execution. Over the course of developing the existing sensors, a few patterns have become apparent, particularly for perceptual sensors (as opposed to effectors). The BaseSensor codifies many of these patterns and will continue to be developed to make it easier to build sensors quickly while focusing strictly on the interfacing relevant to your task.

BaseSensor

The BaseSensor bundle includes two abstract implementations and two interfaces that need to be considered. BaseSensor itself handles the basic life-cycle of the sensor and thread management. On each cycle it sends any pending messages out to CommonReality. Next, it calls methods to trigger the processing of perceptual and motor information for each of the connected agents (BaseSensor handles multiple agents with no difficulty). It then synchronizes with the clock before looping back. For the most part, extenders will just want to customize the configuration, initialization, start, and shutdown methods (being sure to call the super class implementations when done). The processing of percepts is typically delegated to the PerceptManager.
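
To make the overall shape concrete, a minimal extension might look something like the sketch below. The class name and the installed creator/processor names are purely illustrative; the configure() and startOfCycle() hooks are the same ones used by the StringSensor later in this article.

  public class MySensor extends BaseSensor
  {
    /**
     * called with the options from the run configuration
     */
    @Override
    public void configure(Map<String, String> options) throws Exception
    {
      // let BaseSensor handle its own configuration first
      super.configure(options);

      // install the creators/processors that the PerceptManager delegates to
      getPerceptManager().install(new MyObjectCreator());
      getPerceptManager().install(new MyObjectProcessor());
    }

    /**
     * called at the top of each cycle, before pending messages are sent out
     * to CommonReality
     */
    @Override
    protected void startOfCycle()
    {
      // interface-specific work: mark objects dirty, flag removals, etc.
    }
  }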

PerceptManager

The PerceptManager provides a consistent mechanism for linking program objects (or events) to the percepts that are passed to connected agents. The PerceptManager doesn't create the percepts itself; rather, it delegates that responsibility to the installed IObjectCreators and IObjectProcessors. By delegating, it is easier to extend or modify the perceptual processing on the fly without having to dig into any other code. The general concept is that the code interacting with the interfaced system will want to generate percepts for some set of objects. Those objects serve as keys to the PerceptManager. When an object changes (or is created), markAsDirty(Object) is called. When the object is removed, flagForRemoval(Object) is called. When processDirtyObjects() is called (typically from BaseSensor.processPercepts()), the PerceptManager routes new objects to the IObjectCreators, updated objects to the IObjectProcessors, and removals to the owning IObjectCreators (allowing them to veto the removal). It then collects the resulting state updates and passes them back to the BaseSensor, which packages them for CommonReality. Generally, you can use the default implementation provided by the BaseSensor.
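
As a rough sketch of that flow, the methods below could live in your sensor (or wherever you hold the PerceptManager reference) and be driven by whatever events your interfaced system produces. The callback names are illustrative; only markAsDirty(), flagForRemoval(), and processDirtyObjects() are PerceptManager calls described here.

  // called by your interfacing code when the external system adds or changes
  // something you want the agent to perceive
  public void objectAddedOrChanged(Object programmaticObject)
  {
    // queue the object for percept creation (if new) or reprocessing
    getPerceptManager().markAsDirty(programmaticObject);
  }

  public void objectRemoved(Object programmaticObject)
  {
    // queue the object's percept for removal (the owning creator may veto it)
    getPerceptManager().flagForRemoval(programmaticObject);
  }

  // later, typically via BaseSensor.processPercepts() on the sensor's cycle:
  // getPerceptManager().processDirtyObjects();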

IObjectCreator

ObjectCreators link the programmatic objects to individual percepts (on a per-agent basis). Typically an ObjectCreator will be associated with a single type of programmatic object (e.g. String) and a single type of percept (e.g. visual). If the object creator can handle the programmatic object, it will be asked to create an IObjectKey (merely a structure linking the programmatic object, percept, and creator). Next, the creator will create the basic percept and any default values for it. Deeper perceptual processing is typically passed off to the object processors. The object creator is also responsible for cleaning up after the object upon removal (e.g. the creator may attach listeners to the programmatic object to track its state and trigger PerceptManager.markAsDirty, and those listeners should be detached after the object has been removed).

IObjectProcessor

The object processors are the actual work-horses of the BaseSensor. If a processor can handle the associated object key, it is asked to perform its processing. This is where the properties of the percept are extracted from the programmatic object and assigned to the percept.

Threading

The design of this system is such that it can handle multiple different threading models. You could use listeners attached to the programmatic objects to merely mark them as dirty, allowing the PerceptManager to process them on BaseSensor's main thread. Or, you could do the processing in the listeners and merely harvest the results when the PerceptManager calls the IObjectProcessors. You could also call PerceptManager.processDirtyObjects() from a separate thread. Hell, you could even override the method and split that processing up across multiple threads. It all depends on the demands of your interface.
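
For example, if you wanted the perceptual processing off of BaseSensor's main thread entirely, a sketch along these lines would do. The executor field and the wrapper method are illustrative; processDirtyObjects() is the PerceptManager call described above.

  private final ExecutorService _perceptExecutor = Executors
      .newSingleThreadExecutor();

  protected void processDirtyObjectsInBackground()
  {
    _perceptExecutor.execute(new Runnable() {
      public void run()
      {
        // the installed creators and processors run on this thread instead
        // of BaseSensor's main thread
        getPerceptManager().processDirtyObjects();
      }
    });
  }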

StringSensor

Let's put this system to work in a very trivial example. The StringSensor takes a single argument, "TextSource", which is a string. The sensor breaks that string at the white space and then feeds visual percepts of the words back to CommonReality, one word per second. Let's start with the lowest level and work up.

StringObjectCreator

The object creator extends the abstract implementation, which merely creates a DefaultObjectKey and an IAfferentObject from that key. It handles only strings and, using the abstract implementation's initialize() method, sets the percept as a visual one.


  public StringObjectCreator()
  {
  }


  public boolean handles(Object object)
  {
    return object instanceof String;
  }

  /**
   * initialize some default values
   */
  @Override
  protected void initialize(DefaultObjectKey objectKey,
      IMutableObject afferentPercept)
  {
    // make it a visual percept
    afferentPercept.setProperty(IVisualPropertyHandler.IS_VISUAL, Boolean.TRUE);
  }

StringObjectProcessor

The object processor also extends the abstract implementation and only handles strings. Its process method sets all the required (and optional) properties of visual percepts. If your system had more properties, there is nothing stopping you from assigning them as well. These just represent the bare bones: some of them jACT-R requires, some it does not.


  public StringObjectProcessor()
  {
  }


  public boolean handles(DefaultObjectKey object)
  {
    return object.getObject() instanceof String;
  }

  public void process(DefaultObjectKey object, IMutableObject simulationObject)
  {
    /*
     * let's spoof some visual properties. Ideally you wouldn't set all the
     * properties every time, rather just those that change on each call to
     * process.
     */
    String string = (String) object.getObject();


    // currently visible, required
    simulationObject.setProperty(IVisualPropertyHandler.VISIBLE, Boolean.TRUE);


    // 1 meter in front of the eye, required
    simulationObject.setProperty(IVisualPropertyHandler.RETINAL_DISTANCE, 1);


    // 1 degree visual angle up and to the right (0,0 is center), required
    simulationObject.setProperty(IVisualPropertyHandler.RETINAL_LOCATION,
        new double[] { 1, 1 });


    // 2 degrees visual angle in size, required
    simulationObject.setProperty(IVisualPropertyHandler.RETINAL_SIZE,
        new double[] { 2, 2 });


    // type of visual percept, required
    simulationObject.setProperty(IVisualPropertyHandler.TYPE,
        new String[] { "text" });


    // actual unique identifier, required
    simulationObject.setProperty(IVisualPropertyHandler.TOKEN, string);


    // text value, if available
    simulationObject.setProperty(IVisualPropertyHandler.TEXT, string);


    // horizontal, optional
    simulationObject.setProperty(IVisualPropertyHandler.SLOPE, 0);


    // RGBA, optional
    simulationObject.setProperty(IVisualPropertyHandler.COLOR, new double[] {
        1, 0, 0, 1 });
  }

StringSensor

Normally, you'd have your system marking objects as dirty or flagging them for removal through some other execution flow. In this trivial example, I'm just going to do it at the top of each cycle (i.e. before the sensor sends the pending messages). It simply checks the time to see if it should remove (or send) a word. If so, it does its processing and notifies the PerceptManager. That's it. The super class takes care of all the rest.


  /**
   * ideally you'd perform some configuration here. these options are provided
   * (typically) from the environment configuration file at the start of the
   * run.
   */
  @Override
  public void configure(Map<String, String> options) throws Exception
  {
    super.configure(options);

    _wordsToSend = new ArrayList<String>();
    /*
     * TextSource property is defined in the plugin.xml extension and can be
     * customized through the run configuration
     */
    for (String word : options.get("TextSource").split(" "))
      _wordsToSend.add(word);

    /*
     * install the default creator and processor
     */
    getPerceptManager().install(new StringObjectCreator());
    getPerceptManager().install(new StringObjectProcessor());

    /*
     * just for giggles, let's make it realtime (ish)
     */
    setRealtimeClockEnabled(true);
  }




  /**
   * at the top of the cycle, we peel off a new word and create a percept for
   * it, removing the old word in the process. This is a silly single-threaded
   * example. Normally, you'd have some other thread churning along in response
   * to other events and interacting with the percept manager
   */
  @Override
  protected void startOfCycle()
  {
    if (_sendNextWordAt <= getClock().getTime())
    {
      if (_lastWord != null)
      {
        if (LOGGER.isDebugEnabled())
          LOGGER.debug(String.format("Removing %s", _lastWord));


        getPerceptManager().flagForRemoval(_lastWord);
      }
      _lastWord = null;


      if (_wordsToSend.size() > 0)
      {
        /*
         * peel off the next word and notify the percept manager to add it (the
         * previous word was already flagged for removal above)
         */
        String newWord = _wordsToSend.remove(0);


        if (LOGGER.isDebugEnabled())
          LOGGER.debug(String.format("Perceiving new word %s", newWord));


        // percept manager will handle all the heavy lifting for us
        getPerceptManager().markAsDirty(newWord);
        _lastWord = newWord;
        _sendNextWordAt = getClock().getTime() + 1; // 1 second later
      }
      else
        // setting this to + inf will prevent any further processing
        _sendNextWordAt = Double.POSITIVE_INFINITY;
    }
  }

 

Running the Model

If you look in the second example project (sensors.example.test) you will find an incredibly simple model. It merely looks for anything on the screen and runs indefinitely. Take a look at the included run configuration (string.visual-test). Under the CommonReality tab, you will see the StringSensor selected and its single parameter. We're telling the system to launch CommonReality with this model (as an agent) and the StringSensor. We could just as easily replace the StringSensor with any other sensor that provides visual information and the model would be none the wiser. Go ahead, give it a run. You should see the percepts appear in the visicon viewer and the model should output each of the words individually.

[screenshot: 200912291454.jpg]

Java-specific Nuisances

As we've seen in the past three articles, the META-INF/MANIFEST.MF file needs to be tweaked. Let's start with the one in org.commonreality.sensors.example. First, we add the dependency on org.commonreality.sensors.base; otherwise we wouldn't have access to the base classes. Next, we export the org.commonreality.sensors.example packages so that they are visible to the core runtime. If you do not do this, you will be plagued by class-not-found exceptions. Finally, we provide an Eclipse extension that allows the runtime to detect the sensor (making it visible to the IDE as well). We can also provide the parameter that the sensor takes, allowing us to configure it in the IDE.
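
For reference, the relevant portion of that manifest looks roughly like this. This is a hand-trimmed sketch: the real file also carries the bundle name, version, and other headers the IDE generates for you, and your setup may express the dependency slightly differently.

  Require-Bundle: org.commonreality.sensors.base
  Export-Package: org.commonreality.sensors.example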

[screenshot: 200912291459.jpg]
Turning our attention to the sensors.example.test project, we also need to add a dependency, this time to org.commonreality.sensors.example. That's it. Everything is wired together.
