Back in the archaic era when I began writing about computers (1985), each new product announcement brought increases in power and speed, and people upgraded their machines frequently. Today we've leveled off, and sticking with a medium-priced generic computer is usually a safe option. Mobile devices are certainly interesting to watch, but these days I'm looking for the incremental advances that will lead to serious next-generation changes.
Fortunately, the advances being made beyond the commercial market give me plenty to write about. Consider that the Defense Advanced Research Projects Agency has recently put out a call (technically an RFI, or Request for Information) about how to push computers beyond today's capabilities so that they start acting more like a human brain. Not a full-fledged human intelligence on a chip, of course (that may take quite some time), but a digital way to model the neocortex, the part of the brain that deals with sensory perception and, yes, thought.
DARPA is talking about a cortical processor that can adapt to changing situations and merge with existing developments in machine learning and neural networks. Decoding that, we're looking at a long-range attempt to build robots that at least mimic the brain of a mammal like a mouse or a cat. A chip that can do this would let computers work through massive amounts of information and help us make sense of it without the need for human programmers.
Not quite science fiction
All this sounds science fictional, but be aware that IBM is working on prototype chips that do much to make this goal possible. It's a kind of reverse-engineering of the brain that moves beyond current computing techniques to understand and act on information by drawing on external sources or even sensations like sight or touch. The beauty of the concept is that a properly designed chip of this kind can operate with far less drain on power than today's computers, while mining the gigantic databases we are compiling on a daily basis.
With current advances in miniaturization, we can deploy vast networks of sensors to monitor things like ocean levels (think tsunami warnings) or weather changes (think tornadoes or other fast-moving systems). Our problem isn't in accumulating data. In fact, the real problem is that we're gathering data so fast that making sense of all of it lags far behind the collecting. The Kepler mission to study planets around other stars has about two years of data waiting to be analyzed, a task that could be immeasurably sped up with this kind of technology.
Brain in a box
Call it cognitive computing. It's a synthesis of software inspired by the human brain. IBM's Dharmendra Modha has spoken of a long-range goal of systems that contain up to 10 billion digital neurons and 100 trillion synaptic connections. He calls it a brain in a box. Right now we're a long way from that, with chips holding a mere 256 of these neurons and tens of thousands of programmable synapses that make connections between them. The assumption is that this system can be scaled up with further research. To that end, IBM has recently unveiled a new software ecosystem tailored for programming such chips, all of this as part of a long-term project called SyNAPSE that is receiving funding from DARPA.
Think Watson was smart on Jeopardy? We're going well beyond that; making sense of the bewildering influx of data will give us new insights into everything from medicine to the environment. Visual data might flow from such chips through wearable glasses to help the visually impaired. Improved robots could begin managing dangerous but mentally demanding jobs. The point is that a change to the computing model breeds possibilities we haven't conceived of yet. You'll want to track this work as you look toward the business opportunities ahead.
Paul A. Gilster is the author of several books on technology. Reach him at gilster@mindspring.com.