10 April 2010

Machine Consciousness

Challenges don't get much bigger than trying to create artificial consciousness. Some doubt whether it can be done - or whether it ever should be. Bolder researchers are not put off, though: they regard machine consciousness as a grand challenge, like putting a man on the moon. One landmark is the recently developed ‘ConsScale’, devised by researchers at the University of Madrid in Spain to compare the intelligence of various software agents - and biological ones too.

One such agent is IDA, which assigns sailors in the US navy to new jobs when they finish a tour of duty, juggling naval policies, job requirements, changing costs and sailors' needs. Like people, IDA has ‘conscious’ and ‘unconscious’ levels of processing. At the unconscious level she deploys software agents to gather data and process information. These agents compete to enter IDA's ‘conscious’ workspace, where they interact with each other and decisions get made. The updated Learning IDA, or LIDA, was completed this year. She learns from what reaches her consciousness and uses this to guide future decisions. LIDA also has the benefit of ‘emotions’ - high-level goals that guide her decision-making. Another advance has emerged from designing robots able to maintain their function after being damaged.
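The competition described above - unconscious agents bidding for a ‘conscious’ workspace - can be sketched in a few lines of code. This is a toy illustration of the general global-workspace idea, not IDA's actual implementation; all the names, activation values and payloads below are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Codelet:
    """An 'unconscious' mini-agent that gathers one piece of information
    and bids for attention with an activation level."""
    name: str
    activation: float
    payload: dict = field(default_factory=dict)

def broadcast(codelets):
    """One cognitive cycle: the most activated codelet wins the workspace;
    its content becomes the 'conscious' broadcast that drives decisions."""
    return max(codelets, key=lambda c: c.activation)

# Illustrative codelets for a job-assignment scenario like IDA's.
codelets = [
    Codelet("policy_check", 0.4, {"rule": "sea-duty rotation"}),
    Codelet("job_match", 0.9, {"job": "electronics technician"}),
    Codelet("cost_monitor", 0.6, {"moving_cost": 4200}),
]

winner = broadcast(codelets)
print(f"Conscious content this cycle: {winner.name} -> {winner.payload}")
```

In a fuller architecture the broadcast would feed back into every codelet - and, in LIDA's case, into learning that reshapes future activations - but the winner-take-all cycle is the core of the scheme.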


In 2006, researchers at the University of Vermont in Burlington designed a walking robot with a continuously updated internal model of itself. If the robot is damaged, this self-knowledge lets it devise an alternative gait using its remaining abilities. Having an internal ‘imagined’ model of ourselves is considered a key part of human sentience, so the self-model takes the robot a step closer to self-awareness. The robot developed by researchers at the University of Sussex, UK, goes further: along with an internal model, it is anatomically human-like. The idea is that a robot whose body closely resembles a human's will develop cognition closer to the human variety.

None of these approaches solves what many consider to be the ‘hard problem’ of consciousness: subjective awareness. No one yet knows how to design software for that. But as machines grow in sophistication, the hard problem may simply evaporate - either because awareness emerges spontaneously, or because we will simply assume it has emerged without knowing for sure. After all, when it comes to other humans, we can only assume they have subjective awareness too; we have no way of proving we are not the only self-aware individual in a world of unaware ‘zombies’.
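The self-modelling loop described for the damage-resilient walker can also be sketched in miniature: the robot holds an internal model of its own limbs, updates that model when a limb fails, and then ‘imagines’ alternative gaits using what remains. This is purely illustrative and not the Vermont robot's actual algorithm; the leg names and two-legs-per-step gait are assumptions made for the example.

```python
from itertools import combinations

class SelfModel:
    """A toy internal model of a four-legged walker's own body."""

    def __init__(self, legs):
        self.working = set(legs)  # the robot's belief about which legs work

    def observe_damage(self, broken_leg):
        # Update the internal model when a limb stops responding.
        self.working.discard(broken_leg)

    def plan_gait(self, legs_per_step=2):
        # 'Imagine' candidate gaits inside the model: every pairing of the
        # limbs the robot still believes to be functional.
        if len(self.working) < legs_per_step:
            return []
        return list(combinations(sorted(self.working), legs_per_step))

model = SelfModel(["BL", "BR", "FL", "FR"])
model.observe_damage("FR")   # the front-right leg breaks
gaits = model.plan_gait()    # alternative gaits drawn from the intact legs
print("Alternative gaits:", gaits)
```

The point of the sketch is the division of labour: perception updates the self-model, and planning consults the model rather than the body, so a damaged robot can search for a workable gait ‘in imagination’ before trying it in the world.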

More information:

http://www.newscientist.com/article/mg20627542.000-picking-our-brains-can-we-make-a-conscious-machine.html