CommonReality

What are CommonReality, Sensors, and Agents?

Cognitive modeling is increasingly focusing on embodiment, not only to improve model fidelity, but because of the additional affordances and cognitive short-cuts it provides. In ACT-R this is accomplished via the perceptual/motor modules (e.g. visual, aural, etc.). But those modules handle perception on the cognitive side; we still need a means of accessing the sensations themselves. Whereas the Lisp implementation uses devices, jACT-R takes a step further back.

CommonReality

CommonReality is a simulation management system. It was built to provide a general mechanism to manage agents (e.g. models, actual humans, AI systems) and sensors (e.g. simulation systems), their communication, time management, and control. It was designed with the intent that all of these pieces could be running anywhere, on any hardware. Network latency and the added layer of abstraction do limit simulation speed, but I believe the added flexibility is worth it.

At the heart of CommonReality is a system that acts as a central broker. It controls the clock (which can be managed in a configurable manner), control flow and state changes, and communication among participants (though point-to-point communication is also possible).
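To give a feel for what the broker coordinates, here is a deliberately simplified, hypothetical participant interface; the names below are illustrative only and are not the actual CommonReality API.

    // Hypothetical participant interface; the real API is richer and named differently.
    public interface ParticipantSketch {
      void connect(String brokerAddress);    // register with the central broker
      void start();                          // broker-driven state change: begin simulating
      double waitForTime(double targetTime); // block until the shared clock reaches targetTime
      void send(Object message);             // routed through the broker (or point-to-point)
      void stop();                           // broker-driven state change: halt
    }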

Agents

Agents are the cognitive entities in a simulation. They are the consumers of afferent objects, a.k.a. sensations, that sensors generate for them. Agents can also make use of a combination of efferent objects and their commands, which permit the agent to execute actions in the simulation (or reality). The agent handles communication with CommonReality and the translation of afferent and efferent objects for the cognitive model, in this case jACT-R.
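As a rough, hypothetical sketch of that translation role (none of these names come from the actual CommonReality or jACT-R sources):

    import java.util.Map;

    // Hypothetical agent adapter; all names are illustrative only.
    public class AgentSketch {

      // afferent side: a sensation arrives and is handed to the model's perceptual modules
      void onAfferentAdded(String identifier, Map<String, Object> properties) {
        // translate the properties into something the perceptual buffers understand
      }

      // efferent side: the model wants to act, so an efferent command is built and sent
      void onModelAction(String effectorIdentifier, Object... parameters) {
        // fill in the effector's command template and submit it to the owning sensor
      }
    }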

The structure of an afferent object is basically just a unique identifier and a set of properties. The properties are modality-specific; there are recommended properties for the visual and aural modalities. Sensors are free to implement the recommended properties, but may also supply additional perceptual information. As the system grows and evolves, a discovery mechanism will be developed, but for now this requires consideration on both the sensor and agent sides.
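Conceptually, then, an afferent object is little more than an identifier plus a bag of properties. The sketch below is a simplified illustration; the property names are assumptions, not the actual recommended property set.

    import java.util.HashMap;
    import java.util.Map;

    // Simplified illustration: an identifier plus modality-specific properties.
    public class AfferentObjectSketch {
      public static void main(String[] args) {
        String identifier = "sensor-1.percept-42"; // sensor-assigned unique identifier

        Map<String, Object> properties = new HashMap<>();
        properties.put("visual.azimuth", 12.5);   // assumed property name
        properties.put("visual.elevation", -3.0); // assumed property name
        properties.put("visual.color", "red");    // assumed property name
        properties.put("visual.type", "text");    // assumed property name

        System.out.println(identifier + " -> " + properties);
      }
    }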

Sensors

Sensors provide a perspective on some element(s) of a simulation to the agents they are aware of. Most sensors will provide information to all agents, but this is not strictly required. For any sensation, the sensor will request new afferent objects to represent the sensation to each of the agents. It will then update those objects and communicate the changes. When the sensation is no longer available, the afferent objects will be removed. Some sensors may also provide efferent objects, along with templates for creating the proper commands to control them. There are a handful of sensors already available, and hopefully with collaborative effort there will be many more.
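That lifecycle, sketched very roughly (the method names and structure below are invented for illustration; the real sensor API differs):

    import java.util.List;

    // Hypothetical lifecycle of a single sensation within a sensor; names are illustrative only.
    public class SensorLifecycleSketch {

      void onSensationAppeared(List<String> agentIdentifiers) {
        // 1. request a new afferent object for each agent that should perceive the sensation
        // 2. populate the objects' properties and communicate the additions
      }

      void onSensationChanged() {
        // 3. update the affected afferent objects and communicate the changes
      }

      void onSensationDisappeared() {
        // 4. remove the afferent objects so agents know the sensation is gone
      }
    }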

The XMLSensor merely translates a configuration file into raw afferent objects. It does so based on timed or programmatic events. Because it can provide any afferent object, it is ideal for simplified testing. However, more complex situations require other options.

The DefaultAuralSensor provides a rough skeleton for programmatically supplying auditory information and is useful as a starting point for other sensors. The DefaultSpeechSensor serves a similar purpose.

The DefaultKeyboardSensor provides models with hands and fingers that can control the actual keyboard (using java.awt.Robot). Mouse support is on the to-do list.
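For reference, java.awt.Robot synthesizes native input events, which is roughly how such a sensor can drive the real keyboard. The snippet below is a plain Robot example, not code from the DefaultKeyboardSensor itself.

    import java.awt.AWTException;
    import java.awt.Robot;
    import java.awt.event.KeyEvent;

    public class RobotKeyPress {
      public static void main(String[] args) throws AWTException {
        Robot robot = new Robot();
        // type a lower-case 'a' by pressing and releasing the key
        robot.keyPress(KeyEvent.VK_A);
        robot.keyRelease(KeyEvent.VK_A);
      }
    }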

Finally, there is an interface for Player/Stage that is continually evolving.

A framework exists to aid in the development of more complex motor control sensors. Best practices for developing sensors are still evolving, but the provided sensors will get you started.

A tutorial covering development with CommonReality can be found in the tutorials section.