Why, why, why?

Okay, sometimes I get questions like "why did you sacrifice at least a full year of your graduate career to reimplement a perfectly good system?" (actually, I'm paraphrasing, because the actual questions typically question my sanity). Anyway...

I've been pondering many issues related to real-time performance (or near-real-time, as the case actually is) in cognitive architectures, because the current state of the art isn't particularly state-of-the-art (I'm including jACT-R in that statement). But what Java does provide is much greater flexibility, plus a whole slew of supported features. Case in point:

I was showing the boss-man the latest iteration of our monkey learning models in Player/Stage, while I happened to be running a 1000+ iteration run of a model that illustrates life-span learning of left from right hands. This was more than a tad taxing on the laptop (BTW, while I heartily endorse jACT-R for running near-real-time systems, if you don't have a multiprocessing machine, don't even think about running jACT-R. I use threads too aggressively for a single-core, single-CPU system). With the iterative run, plus two concurrent models, plus the common reality simulation broker, plus Player/Stage running, processing time was getting a little tight (186% CPU utilization, baby). For the first time ever, Stage was running faster than jACT-R. What ended up happening was that the perceptual processing threads (one per model) were getting bogged down trying to keep up with all the sensor input coming from Stage. The events were backing up such that by the time the model was attending to something, it was still working its way through the queue, so the model was reacting to stale percepts.

The fix was actually incredibly simple. Instead of queueing every perceptual event directly on the perceptual thread, I queue the events internally and request that the perceptual thread process them as soon as possible. This lets perceptual events be collapsed when the perceptual thread gets bogged down: it can ignore stale updates and focus on just the most recent (and relevant) changes. The end result is sooooo much more responsive. It even helps with the greatest real-time challenge so far, dynamic vision (i.e., searching for and encoding objects that are moving, or doing so while the agent itself is moving). Helps, but still doesn't fix it.
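The collapsing idea can be sketched roughly like this. This is just an illustrative toy, not jACT-R's actual code: the names (`PerceptBuffer`, `Percept`, `sourceId`) are made up, and it assumes each percept can be keyed by its source so that a newer update for the same source simply replaces the stale one before the perceptual thread ever sees it.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of coalescing perceptual events. Sensor callbacks
// call offer(); the perceptual thread calls drain() whenever it gets
// scheduled. If it falls behind, intermediate updates for a given source
// are silently overwritten, so it only ever processes the latest state.
public class PerceptBuffer {

  public record Percept(String sourceId, double value, long timestamp) {}

  // Newest percept per source; LinkedHashMap preserves arrival order.
  private final Map<String, Percept> pending = new LinkedHashMap<>();

  // Called from the sensor thread for every incoming event.
  public synchronized void offer(Percept p) {
    pending.remove(p.sourceId()); // drop the stale update, if any
    pending.put(p.sourceId(), p); // re-insert at the tail
  }

  // Called by the perceptual thread; drains everything accumulated
  // since the last call in a single snapshot.
  public synchronized Map<String, Percept> drain() {
    Map<String, Percept> snapshot = new LinkedHashMap<>(pending);
    pending.clear();
    return snapshot;
  }
}
```

The key design point is that the buffer decouples event arrival rate from processing rate: the sensor side can fire as fast as it likes, while the perceptual thread pays a cost proportional to the number of distinct sources, not the number of raw events.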

If I find myself really bored, I might post some benchmarks. I doubt it, though, because I've gotten the majority of the version 1 logic completed for the static production analysis tool, and it works wonderfully. So far, it can reliably predict production sequences for most productions (assuming they aren't entirely variable-based, like the example addition model). I'm going to see if I can cobble together a GUI this weekend. This tool will make everyone's lives significantly easier.. but more importantly, it will make my life easier, because the monkey model is really complex (dude, the thing seeks out food, moves to it, learns to avoid other monkeys by adopting their perspective and predicting their actions.. come on).

Another release by Monday, incorporating some additional fixes relevant to iterative runs (and their penchant for really testing the thread safety of my code).