Vocal module and efferent events

It's taken quite some time, but there is a new module in the distribution. Its genesis lies entirely in the oft-delayed implementation of efferent commands in Common Reality. It took a while to come up with a viable and satisfactory solution for models making motor requests, having those requests accepted or rejected, and then ultimately having them executed by CR.
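
For the curious, here is a rough sketch of that request lifecycle. The type and method names below are purely illustrative, not the actual jACT-R or Common Reality interfaces; they just show the three stages the new infrastructure has to support.

    // Illustrative only: these are NOT the real jACT-R/Common Reality types,
    // just a minimal model of the request -> accept/reject -> execute flow.
    enum MotorRequestState { SUBMITTED, ACCEPTED, REJECTED, EXECUTED }

    interface IMotorRequest {
      MotorRequestState getState();
    }

    interface IMotorRequestDelegate {
      // the model submits a request to Common Reality...
      void submit(IMotorRequest request);

      // ...CR (or the responsible sensor) accepts or rejects it...
      void accepted(IMotorRequest request);
      void rejected(IMotorRequest request, String reason);

      // ...and an accepted request is ultimately executed.
      void executed(IMotorRequest request);
    }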

While the manual module is still off in the distance (it requires interfacing with a specific device, i.e. Swing), the vocal module relies on the same basic infrastructure but isn't tied explicitly to a device. The latest version (available now) has an initial implementation of the vocal module (org.jactr.modules.pm.vocal.six.DefaultVocalModule6) with the standard set of parameters. One thing to note, however: since hearing a vocalization requires Common Reality to take the vocalization and return an AfferentObject for the aural sensation, subvocalizations do not produce aural events. Additionally, since there is no pre-canned aural sensor (unlike the speech side, which has org.commonreality.sensors.speech.DefaultSpeechSensor), nothing will actually be heard just yet.
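
To make the speak/subvocalize distinction concrete, here is a tiny sketch, again with made-up stand-ins rather than the real classes, of why an overt vocalization can come back as an aural percept while a subvocalization never does.

    // Illustrative only: hypothetical stand-ins, not the real API.
    final class VocalizationFlow {

      /** stand-in for the afferent object CR would return for an aural sensation */
      static final class AuralPercept {
        final String content;
        AuralPercept(String content) { this.content = content; }
      }

      /** stand-in for a pre-canned aural-capable sensor (none exists yet) */
      interface IAuralSensor {
        AuralPercept hear(String content);
      }

      /** overt speech goes out through CR and may come back as an aural percept;
          a subvocalization never leaves the model, so nothing is heard */
      static AuralPercept vocalize(String content, boolean subvocalize, IAuralSensor sensor) {
        if (subvocalize) return null;     // no aural event for subvocalizations
        if (sensor == null) return null;  // no aural sensor available yet
        return sensor.hear(content);      // vocalization comes back as an AfferentObject analog
      }
    }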

This brings me to the challenge that some sensors present. Visual, aural, and manual sensors are necessarily yoked to specific interfaces and are very difficult to generalize with default implementations. For now, the XMLSensor is still the best way to generate arbitrary afferent objects. However, through my work at NRL I've implemented additional sensors that connect to the Player/Stage robot simulator, and that work has provided a few insights that are leading toward skeletal default sensor implementations.
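
As a purely speculative illustration (none of this exists yet), a skeletal default sensor would presumably boil down to two translation duties that a concrete interface (Swing, Player/Stage, etc.) has to fill in, roughly along these lines:

    // Speculative sketch, not real code: the two jobs a sensor skeleton
    // would leave open for a concrete interface to implement.
    interface ISensorSkeleton<WORLD_STATE, COMMAND> {
      // translate interface/simulator state into afferent objects for the model
      void perceive(WORLD_STATE state);

      // translate accepted efferent commands back out to the interface/simulator
      void actuate(COMMAND command);
    }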

Stay tuned.