28 April 2012

Expressive Car

It may just be a matter of time before we turn over control of our cars to robot intelligence. But before we do, we may have to give each vehicle a voice, eyes, body language, and even an avatar. Such cars will be more user-friendly, say researchers from the MIT Media Lab, because they will be able to communicate their intent to the people around them. Are you crossing the road in front of a driverless car? The car could let you know it sees you by dilating its LED ‘pupils’ and swiveling its headlights to follow you as you cross. Or it could project a smiley face on its windscreen. A car may be perfectly programmed never to hit a person, no matter what, but how does a pedestrian know that the car sees him?


In a prototype presented today at the Media Lab's spring meeting in Cambridge, Massachusetts, researchers showed what such a car might look like. With working eye-headlights, micro-speakers designed to send narrowly focused audio messages to pedestrians and human drivers, and a Microsoft Kinect game sensor to detect when people pass in front of the car, the vehicle bore a passing resemblance to a futuristic incarnation of a Volkswagen Beetle. Researchers admit that their bells and whistles are mostly just design suggestions for now. It will take a lot of testing to see what combination of sensors, lighting rigs and heads-up displays best lets people know what a robot car is going to do next.
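
The article does not describe the prototype's software, so the following Python sketch is purely illustrative: it assumes a depth frame from a sensor such as the Kinect and shows one plausible way to spot a nearby pedestrian and compute a pan angle for the ‘eye’ headlights. The frame size, field of view and thresholds are assumptions, not details of the MIT prototype.

from typing import Optional
import numpy as np

FRAME_W, FRAME_H = 320, 240          # assumed depth-frame resolution
HORIZONTAL_FOV_DEG = 57.0            # approximate Kinect horizontal field of view
PERSON_RANGE_M = (0.5, 4.0)          # depths treated as "pedestrian in front of the car"

def pedestrian_bearing(depth_m: np.ndarray) -> Optional[float]:
    """Return the bearing in degrees (0 = straight ahead) of the nearest
    person-sized region in a depth frame, or None if nothing is in range."""
    near, far = PERSON_RANGE_M
    mask = (depth_m > near) & (depth_m < far)
    if mask.sum() < 500:                       # too few in-range pixels to be a person (assumed)
        return None
    cols = np.nonzero(mask.any(axis=0))[0]     # image columns containing in-range pixels
    centre_col = cols.mean()
    # Map the offset from the image centre to a pan angle for the eye-headlights.
    return (centre_col - FRAME_W / 2) / FRAME_W * HORIZONTAL_FOV_DEG

# Synthetic frame: empty road at 10 m, plus a person-sized blob 2 m away, right of centre.
frame = np.full((FRAME_H, FRAME_W), 10.0)
frame[60:200, 220:260] = 2.0
angle = pedestrian_bearing(frame)
print(f"pan eye-headlights to {angle:.1f} degrees" if angle is not None else "no pedestrian in range")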

More information:

27 April 2012

Scanning the Brain for Errors

Researchers are using new technology to predict when people are about to make a mistake, testing subjects on the math portion of the SAT exam. Our bodies and brains tend to give us good cues about when we are becoming stressed, fatigued or overwhelmed. But what if, with near-exact precision, you could predict when heightened fatigue was about to cause you to make a mistake? University of Arizona researchers believe they have found a way – with about 80 percent accuracy. They have been working on the Animal Watch tutoring program with researchers in the UA's School of Information: Science, Technology and Arts, or SISTA.


After noticing that English language learners were having more difficulty answering problems, the researchers set out to investigate why. Using electroencephalography, or EEG, they began studying specific brain-wave activity in students – all of them university students – taking the math portion of the popular but challenging SAT exam. By measuring that activity, they were able to detect with 80 percent accuracy, about 20 seconds after a student began a question, whether the question would be answered incorrectly. The findings have important implications for students and educators: EEG estimates of engagement and cognitive workload can predict math problem-solving outcomes.
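
The article reports the 80 percent figure but not the features or classifier behind it. The sketch below is a generic stand-in under stated assumptions: it turns a short EEG window (the first 20 seconds of a problem) into band-power features, a common proxy for engagement and workload, and trains an off-the-shelf classifier; the sampling rate, frequency bands and labels are all synthetic and purely illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

FS = 128                                                        # assumed EEG sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}   # typical workload-related bands

def band_powers(window: np.ndarray) -> np.ndarray:
    """window: (channels, samples) EEG segment -> one band-power feature per channel and band."""
    freqs = np.fft.rfftfreq(window.shape[1], d=1.0 / FS)
    psd = np.abs(np.fft.rfft(window, axis=1)) ** 2
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1) for lo, hi in BANDS.values()]
    return np.concatenate(feats)

# Synthetic stand-in data: 200 problems, 8 channels, the first 20 seconds of each problem.
rng = np.random.default_rng(0)
windows = rng.standard_normal((200, 8, 20 * FS))
incorrect = rng.integers(0, 2, size=200)        # fake labels: 1 = answered incorrectly

X = np.array([band_powers(w) for w in windows])
clf = LogisticRegression(max_iter=1000)
print("cross-validated accuracy:", cross_val_score(clf, X, incorrect, cv=5).mean())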

More information:

26 April 2012

Mind-Controlled Robot

Swiss scientists have demonstrated how a partially paralyzed person can control a robot by thought alone, a step they hope will one day allow immobile people to interact with their surroundings through so-called avatars. Similar experiments have taken place in the United States and Germany, but they involved either able-bodied patients or invasive brain implants. A team at Switzerland’s Federal Institute of Technology in Lausanne used only a simple head cap to record the brain signals of a patient at a hospital in the southern Swiss town of Sion, some 62 miles away. The resulting instructions – left or right – were then transmitted to a foot-tall robot scooting around the Lausanne lab. The patient lost control of his legs and fingers in a fall and is now considered partially quadriplegic. He said controlling the robot wasn’t hard on a good day. Background noise caused by pain or even a wandering mind has emerged as a major challenge in research on so-called brain-computer interfaces since they first began to be tested on humans more than a decade ago.

While human brains are perfectly capable of performing several tasks at once, paralyzed persons would have to focus the entire time they are directing their devices. To get around this problem, the team decided to program the computer that decodes the signal to work in a similar way to the brain’s subconscious. Once a command such as ‘walk forward’ has been sent, the computer will execute it until it receives a command to stop or the robot encounters an obstacle. The robot itself is an advance on a previous project that let patients control an electric wheelchair. By using a robot complete with a camera and screen, users can extend their virtual presence to places that are arduous to reach with a wheelchair, such as an art gallery or a wedding abroad. The researchers said that although the device has already been tested in patients’ homes, it isn’t as easy to use as some commercially available gadgets that employ brain signals to control simple toys, such as Mattel’s popular MindFlex headset.
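
The ‘subconscious’ shared-control idea is concrete enough to sketch. The toy Python class below latches a decoded command and keeps executing it until a stop command arrives or an obstacle is detected; the command names and interfaces are assumptions, not the Lausanne team's actual software.

from dataclasses import dataclass
from typing import Optional

@dataclass
class SharedController:
    active_command: Optional[str] = None      # e.g. "forward", "left", "right"

    def on_brain_command(self, command: str) -> None:
        """Latch a newly decoded command; 'stop' clears the latch."""
        self.active_command = None if command == "stop" else command

    def step(self, obstacle_ahead: bool) -> str:
        """Called every control cycle; the robot's own sensors override the
        latched command when an obstacle appears."""
        if obstacle_ahead:
            self.active_command = None
            return "halt (obstacle)"
        return self.active_command or "idle"

ctrl = SharedController()
ctrl.on_brain_command("forward")
print(ctrl.step(obstacle_ahead=False))   # forward
print(ctrl.step(obstacle_ahead=False))   # still forward, without further concentration
print(ctrl.step(obstacle_ahead=True))    # halt (obstacle)
print(ctrl.step(obstacle_ahead=False))   # idle until the next decoded command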

More information:

24 April 2012

Self-Sculpting Sand

Imagine that you have a big box of sand in which you bury a tiny model of a footstool. A few seconds later, you reach into the box and pull out a full-size footstool: The sand has assembled itself into a large-scale replica of the model. That may sound like a scene from a Harry Potter novel, but it’s the vision animating a research project at the Distributed Robotics Laboratory (DRL) at MIT’s Computer Science and Artificial Intelligence Laboratory. The researchers describe algorithms that could enable such ‘smart sand’, along with experiments in which they tested the algorithms on somewhat larger particles — cubes about 10 millimeters to an edge, with rudimentary microprocessors inside and very unusual magnets on four of their sides.

Unlike many other approaches to reconfigurable robots, smart sand uses a subtractive method, akin to stone carving, rather than an additive method, akin to snapping LEGO blocks together. A heap of smart sand would be analogous to the rough block of stone that a sculptor begins with. The individual grains would pass messages back and forth and selectively attach to each other to form a three-dimensional object; the grains not necessary to build that object would simply fall away. When the object had served its purpose, it would be returned to the heap. Its constituent grains would detach from each other, becoming free to participate in the formation of a new shape.
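
A toy example may make the subtractive idea clearer. The Python sketch below buries a small model in a boolean grid of ‘grains’, scales it up, and keeps only the grains that fall inside the enlarged shape; everything else falls away. A real implementation would reach the same result through local message passing between neighbouring cubes rather than this global test, so this is only an illustration of the principle, not the DRL algorithm.

import numpy as np

def sculpt(block: np.ndarray, model: np.ndarray, scale: int) -> np.ndarray:
    """Keep only the grains inside a scaled-up copy of the buried model;
    every other grain simply 'falls away'."""
    target = np.kron(model.astype(int), np.ones((scale, scale), dtype=int)).astype(bool)
    assert block.shape == target.shape, "block must match the enlarged model's size"
    return block & target

model = np.array([[0, 1, 0],
                  [1, 1, 1],
                  [0, 1, 0]], dtype=bool)     # tiny buried model (a plus sign)
block = np.ones((9, 9), dtype=bool)           # the surrounding heap of smart sand

shape = sculpt(block, model, scale=3)
print("\n".join("".join("#" if grain else "." for grain in row) for row in shape))
# Returning the object to the heap is the reverse step: the grains detach and
# rejoin the pool of unused material.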

More information:

22 April 2012

3D Planning Tool for Cities

Noise levels, fine particulate matter, traffic volumes – these data are of interest to urban planners and residents alike. A three-dimensional presentation will soon make them easier to handle: as the user virtually moves through his city, the corresponding data are displayed as green, yellow or red dots. Fine dust, aircraft noise and the buzz of highways have a negative impact on a city’s inhabitants. Urban planners have to take a lot of information into consideration when planning new highways or airport construction. What is the best way to execute a building project? To what extent can the ears – and nerves – of local residents be protected from noise? Until now, experts have used simulation models, based on the latest EU directives, to determine these data. The results come as 2D survey maps, however, which are often difficult to interpret because the spatial information is missing.


That will get easier in the future: urban planners will be able to move virtually, with computer assistance, through a three-dimensional view of the city – in other words, they will “take a walk” through the streets. No 3D glasses are required, though they would give the best 3D impression. The corresponding values from the simulation “float” at the associated locations on the 3D map, where noise data might be displayed as red, yellow or green boxes. Data points are currently spaced five meters apart, but this can be adjusted as needed. The user determines how the map is displayed – choosing a vantage point, zooming in to street level or switching to a bird’s-eye perspective. This provides quick help in locating problems such as regions with heavy noise pollution. The 3D map was developed by researchers at the Fraunhofer Institute for Industrial Engineering IAO and the Fraunhofer Institute for Building Physics IBP.
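
The traffic-light colouring is easy to illustrate, though the tool's actual dB(A) thresholds are not given in the article, so the cut-offs in the Python sketch below are assumptions. The sketch samples a synthetic noise field on the five-metre grid mentioned above and tags each point green, yellow or red.

import numpy as np

GRID_SPACING_M = 5.0     # spacing between displayed data points (from the article)

def classify_noise(db: float) -> str:
    """Map a noise level to a traffic-light colour; the dB(A) cut-offs are assumed."""
    if db < 55:
        return "green"
    if db < 65:
        return "yellow"
    return "red"

# Fake noise field over a 100 m x 100 m district, louder near a 'highway' along y = 0.
ys, xs = np.mgrid[0:100:GRID_SPACING_M, 0:100:GRID_SPACING_M]
noise = 75 - 0.3 * ys + np.random.default_rng(1).normal(0, 2, ys.shape)

for x, y, db in list(zip(xs.ravel(), ys.ravel(), noise.ravel()))[:5]:
    print(f"point ({x:5.1f} m, {y:5.1f} m): {db:5.1f} dB(A) -> {classify_noise(db)}")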

More information:

20 April 2012

Predictive Neuroscience

The discovery, made using state-of-the-art informatics tools, increases the likelihood that it will be possible to predict much of the fundamental structure and function of the brain without having to measure every aspect of it. That in turn makes the Holy Grail of modelling the brain in silico—the goal of the proposed Human Brain Project—a more realistic, less Herculean prospect. Within a cortical column, the basic processing unit of the mammalian brain, there are roughly 300 different neuronal types. These types are defined both by their anatomical structure and by their electrical properties, and their electrical properties are in turn defined by the combination of ion channels they express—the tiny pores in their cell membranes through which electrical current passes and which make communication between neurons possible. Scientists would like to be able to predict, from a minimal set of experimental data, which combination of ion channels a neuron expresses. They know that genes are often expressed together, perhaps because two genes share a common promoter—the stretch of DNA that allows a gene to be transcribed and, ultimately, translated into a functioning protein—or because one gene modifies the activity of another.


The expression of certain gene combinations is therefore informative about a neuron's characteristics, and the researchers hypothesised that they could extract rules from gene expression patterns to predict those characteristics. They took a dataset collected a few years ago, in which the expression of 26 genes encoding ion channels had been recorded in different neuronal types from the rat brain. They also had data classifying those types according to a neuron's morphology, its electrophysiological properties and its position within the six anatomically distinct layers of the cortex. They found that, based on the classification data alone, they could predict the previously measured ion-channel patterns with 78 per cent accuracy. When they added a subset of the ion-channel data to the classification data as input to their data-mining programme, they were able to boost that accuracy to 87 per cent for the more commonly occurring neuronal types. The researchers could also use such rules to explore the roles of different genes in regulating transcription processes. And importantly, if rules exist for ion channels, they are also likely to exist for other aspects of brain organisation.
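
The dataset and the data-mining method are not reproduced in the article, so the following Python sketch only illustrates the shape of the problem: predicting a 26-gene ion-channel expression profile from a neuron's morphological class, electrical type and cortical layer, here with synthetic data and a generic multi-output classifier rather than the team's actual pipeline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_neurons, n_genes = 300, 26                 # 26 ion-channel genes, as in the study

# Classification features: morphological class, electrical type, cortical layer (1-6).
# The integer codings are hypothetical.
X = np.column_stack([
    rng.integers(0, 5, n_neurons),           # morphological class
    rng.integers(0, 4, n_neurons),           # electrophysiological type
    rng.integers(1, 7, n_neurons),           # cortical layer
])
# Fake binary expression profiles, loosely tied to the class labels so there is structure to learn.
Y = (rng.random((n_neurons, n_genes)) < 0.3 + 0.1 * (X[:, [0]] % 3)).astype(int)

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
model = MultiOutputClassifier(RandomForestClassifier(n_estimators=100, random_state=0))
model.fit(X_tr, Y_tr)
print("mean per-gene accuracy:", (model.predict(X_te) == Y_te).mean())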

More information:

04 April 2012

Engineering Intelligence

Do we actually want machines to interact with humans in an emotional way? Will it even be possible for them to do so? Researchers at UCL in London are working on machine learning and applications of probability in information processing. The world expects to interact naturally with machines – expecting them to understand what we say and to move naturally in our environment. There are already research programmes that attempt to gauge the emotion in someone’s voice or face, but the researchers are more interested in a machine that could recognise the emotional significance of an event for a human. The ultimate dream for researchers in the field of machine learning and information processing would be for machines to comprehend what we say not only in the pure semantic sense, but in an emotional sense as well.


How might a machine of the future react when reading an emotional novel? Could it ever act the way humans do? Could such intelligent machines feel sad or happy? Would they understand the emotional consequences of the human sentence ‘I’ve lost my job’? These questions represent some of the fundamental challenges that lie ahead – and answering them would require a large database of information about humans and the human world. Any machine that is to understand the complexity of social interaction, society and behaviour needs some grasp of what it really means to be human. Perhaps the first step towards reverse-engineering intelligence is to understand the theoretical aspects of information processing in the brain. From this, researchers can then analyse how an ‘artificial brain’ could process or store information in the same way.

More information:

http://www.cam.ac.uk/research/news/how-to-engineer-intelligence/

03 April 2012

Robots Acting Like Insects

Researchers in Germany are developing robotic vehicles for transporting goods around a warehouse that organise themselves like a swarm of insects. The autonomous Multishuttle Moves vehicles, developed at the Fraunhofer Institute for Material Flow and Logistics (IML) in Dortmund, operate without the need for a central controller to allocate tasks and give precise instructions. When the warehouse receives an order, the shuttles communicate with one another via a wireless internet connection and the closest free vehicle takes over and completes the task.


The researchers are operating 50 shuttles, developed with the material-handling and logistics automation company Dematic, in a 1,000 m² replica warehouse comprising storage shelves for 600 small-part carriers and eight picking stations. The vehicles move around the warehouse without external instruction, using a hybrid sensor concept based on radio signals, distance and acceleration sensors and laser scanners to work out the shortest route to any destination and avoid collisions. The autonomous system is considerably more flexible and scalable than conventional roller-track conveyor technology.
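
The allocation rule – the closest free vehicle takes the order – can be written as a simple auction among the shuttles. The Python sketch below is an assumed, simplified stand-in; the real Multishuttle Moves protocol, message formats and tie-breaking are not described in the article.

from dataclasses import dataclass
import math

@dataclass
class Shuttle:
    name: str
    x: float
    y: float
    busy: bool = False

    def bid(self, order_xy) -> float:
        """A shuttle's bid is its straight-line distance to the pick-up point;
        busy shuttles decline by bidding infinity."""
        return math.inf if self.busy else math.dist((self.x, self.y), order_xy)

def allocate(order_xy, shuttles):
    """Every shuttle 'broadcasts' a bid over the wireless link and the lowest
    bidder takes the job; returns None if no shuttle is free."""
    winner = min(shuttles, key=lambda s: s.bid(order_xy))
    if math.isinf(winner.bid(order_xy)):
        return None
    winner.busy = True
    return winner

fleet = [Shuttle("S1", 0.0, 0.0), Shuttle("S2", 12.0, 5.0, busy=True), Shuttle("S3", 3.0, 4.0)]
order = (4.0, 4.0)
print("order taken by:", allocate(order, fleet).name)    # S3 is closest and free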

More information:

http://www.theengineer.co.uk/sectors/automotive/news/robots-to-organise-themselves-like-a-swarm-of-insects/1012101.article