30 April 2013

Deep Learning

With massive amounts of computational power, machines can now recognize objects and translate speech in real time. Artificial intelligence is finally getting smart. Deep-learning software attempts to mimic the activity in layers of neurons in the neocortex, the wrinkly 80 percent of the brain where thinking occurs. The software learns, in a very real sense, to recognize patterns in digital representations of sounds, images, and other data. The basic idea—that software can simulate the neocortex’s large array of neurons in an artificial neural network—is decades old, and it has led to as many disappointments as breakthroughs. But because of improvements in mathematical formulas and increasingly powerful computers, computer scientists can now model many more layers of virtual neurons than ever before.
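
The layered idea can be pictured in a few lines of Python: data flows through stacked layers of simulated neurons, each applying learned weights and a simple nonlinearity. The sketch below is illustrative only; the layer sizes are arbitrary and the weights are random, whereas a real system would learn its weights from data.

```python
# Minimal sketch of a deep (multi-layer) neural network forward pass.
# Sizes and weights are illustrative, not from any production system.
import numpy as np

rng = np.random.default_rng(0)

# Three stacked layers map a 256-dimensional input (say, image features)
# down to scores for 10 object classes.
sizes = [256, 128, 64, 10]
params = [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x):
    """Pass one input through the stacked layers of virtual neurons."""
    for w, b in params[:-1]:
        x = np.maximum(0.0, x @ w + b)   # each neuron: weighted sum + ReLU
    w, b = params[-1]
    return x @ w + b                     # final layer: raw class scores

scores = forward(rng.standard_normal(256))
print(scores.shape)                      # -> (10,)
```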


With this greater depth, they are producing remarkable advances in speech and image recognition. Last June, a Google deep-learning system that had been shown 10 million images from YouTube videos proved almost twice as good as any previous image-recognition effort at identifying objects such as cats. Google also used the technology to cut the error rate on speech recognition in its latest Android mobile software. In October, Microsoft researchers demonstrated speech software that transcribed a presenter's spoken words into English text with an error rate of 7 percent, translated them into Chinese-language text, and then simulated the presenter's own voice uttering them in Mandarin. Researchers have also used deep learning to identify molecules that could lead to new drugs, zeroing in on the molecules most likely to bind to their targets.

More information:

24 April 2013

Virtual Traveller

Thanks to a virtual reality (VR) and telepresence mashup, you no longer have to travel the globe to visit friends or wander around ancient ruins. The VR system was developed at the Bauhaus University in Weimar, Germany. It combines 3D glasses and a hack of Microsoft's Kinect to allow life-size images of up to six people to be beamed to distant locations and recreated in a virtual space, in 3D and in real time. It has a big hint of Star Trek's holodeck about it. Not only does this mashup of telepresence and VR promise to make long-distance communication more immersive and fun; it is already being applied to an archaeology project that could help reveal the ancient secrets of European rock art. To introduce telepresence, the team networked two displays, with each screen incorporating a Kinect depth camera that films its viewers.


To create a multi-user VR system, up to six people must wear bespoke 3D glasses and stand in front of a large screen, onto which 3D images are projected. Unlike a 3D movie, where everyone in the audience sees what is projected on the screen from the same angle, the Weimar team's system takes into account your position relative to the display. Sensors on the glasses track each individual's location, movement and even the tilt of their head. In a demo of the system, six participants inspect a full-size projection of Michelangelo's David. Each only sees the perspective that is appropriate to their location, so if they move from left to right, their view of David's profile changes, as if they were walking around the real statue. They can also see each other and interact with the display together, by pointing to it or by manipulating the virtual objects and environment using a tabletop trackpad.
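
The per-viewer perspective comes from standard off-axis projection geometry: each tracked head position defines its own asymmetric viewing frustum onto the shared screen. The Python sketch below shows that calculation; the screen dimensions and head positions are made-up values for illustration, not the Weimar system's actual parameters.

```python
# Sketch of head-tracked, per-viewer projection: each viewer gets an
# off-axis (asymmetric) frustum computed from their tracked head position
# relative to the screen centre. All numbers here are illustrative.
def off_axis_frustum(head, screen_w, screen_h, near):
    """Frustum bounds (left, right, bottom, top) at the near plane for a
    viewer at head = (x, y, z) metres from the screen centre, z > 0."""
    x, y, z = head
    scale = near / z  # project the screen edges onto the near plane
    left   = (-screen_w / 2 - x) * scale
    right  = ( screen_w / 2 - x) * scale
    bottom = (-screen_h / 2 - y) * scale
    top    = ( screen_h / 2 - y) * scale
    return left, right, bottom, top

# Two viewers at different spots get different frusta of the same scene,
# so each sees the perspective appropriate to their own location.
for head in [(-0.5, 0.0, 2.0), (0.7, 0.2, 1.5)]:
    print(head, off_axis_frustum(head, screen_w=4.0, screen_h=2.5, near=0.1))
```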

More information:

23 April 2013

3D Brain Decodes Migraine Pain

Wielding a joystick and wearing special glasses, pain researchers from the University of Michigan are hoping to better understand how our brains make their own pain-killing chemicals during a migraine attack. The 3D brain is a novel way to examine data from images taken during a patient's actual migraine attack.


Different colors in the 3D brain give clues about chemical processes happening during a patient's migraine attack, as captured by positron emission tomography (PET), a type of medical imaging. This high level of immersion effectively places the investigators inside the actual patient's brain image.

More information:

17 April 2013

Game Teaches Java Programming

Computer scientists at the University of California, San Diego, have developed an immersive, first-person video game designed to teach students in elementary to high school how to program in Java, one of the most common programming languages in use today. The researchers tested the game on a group of 40 girls, ages 10 to 12, who had never been exposed to programming before. They detailed their findings in a paper they presented at the SIGCSE conference in March in Denver. Computer scientists found that within just one hour of play, the girls had mastered some of Java's basic components and were able to use the language to create new ways of playing with the game.


CodeSpells is the only video game that fully integrates programming into the gameplay. The UC San Diego computer scientists plan to release the game for free and make it available to any educational institution that requests it. Researchers are currently conducting further case studies in San Diego elementary schools. Teaching computer science below the college level is difficult, mainly because it is hard to find qualified instructors for students in elementary to high school. Researchers designed the game to keep children engaged while they cope with the difficulties of programming, which could otherwise be frustrating and discouraging.

More information:

16 April 2013

Games Keep Miners Safe

After a series of miscommunications at a surface mine in Ray, Ariz., in 2012, a haul truck, several stories tall and used for transporting enormous loads of ore, rolled over a regular-sized vehicle that was invisible to the haul truck's driver, killing the driver of the smaller vehicle and injuring the other of its two occupants. Fatal accidents happen each year in mines across Arizona, despite ongoing efforts to curb their prevalence by carefully analyzing each accident to find its root cause and instituting new practices to prevent future accidents. Now, University of Arizona (UA) scientists are stepping in. Funded by grants from the Mine Safety and Health Administration (MSHA) and the National Institute for Occupational Safety and Health (NIOSH), with support from Science Foundation Arizona, UA researchers are developing interactive computer games to better train miners to avoid fatal accidents and potential emergencies while working in mines.


After a fatal mining accident, MSHA investigates the events leading up to the incident and produces a report, known as a fatalgram. Each year, these accident reports are used to help train miners to know what types of accidents can occur in a mine and what to do to avoid or avert them. The standard training approach has been a paper packet of information to read through, with summary questions at the end. Hill and Brown are taking a different approach: by letting miners play the role of characters in each situation, they can make decisions leading to alternate outcomes and can replay the games as many times as necessary to understand the potential consequences of each decision. The researchers created computer games based on the MSHA fatalgram reports, replicating the incidents as playable scenarios in which miners take the role of individuals at the scene and make decisions that influence the outcome and may avert the accident.
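
The replayable, branching structure is easy to picture as a small state machine: each node is a situation, each choice leads to another node, and the player can replay to explore different outcomes. The Python sketch below is illustrative only; the scenario text and choices are invented for this example and are not taken from an actual MSHA fatalgram.

```python
# Sketch of a branching training scenario. Nodes map a situation to the
# choices available; each choice names the node it leads to. The scenario
# below is invented for illustration.
SCENARIO = {
    "start": ("A haul truck is about to move; a pickup sits in its blind spot.",
              {"radio the haul truck driver to hold": "averted",
               "assume the driver saw the pickup": "accident"}),
    "averted": ("The truck holds position until the pickup clears. Incident avoided.", {}),
    "accident": ("The truck rolls forward. This decision leads to the fatal outcome.", {}),
}

def play(node="start"):
    """Walk the scenario interactively; replay to explore other branches."""
    text, choices = SCENARIO[node]
    print(text)
    if not choices:
        return
    for i, option in enumerate(choices, 1):
        print(f"  {i}. {option}")
    picked = list(choices.values())[int(input("Choose: ")) - 1]
    play(picked)

if __name__ == "__main__":
    play()
```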

More information:

15 April 2013

Fighting Fire Holograms

The use of thermal imaging in fighting fires is 25 years old this year; the first documented life saved by the technology goes back to a New York City fire in 1988. Though it took years for thermal imaging technology to become widespread due to cost, once it was well established in firefighting, a direct connection between its use and the preservation of life was clear. And now, a new device being developed by researchers could further augment this life-saving technology. In Italy, researchers at the Consiglio Nazionale delle Ricerche (CNR) Istituto Nazionale di Ottica (National Research Council - National Institute of Optics) are using hologram technology to create three-dimensional images that would allow firefighters to see through smoke and flames during a rescue.


Though thermal imaging can see through smoke, the presence of flames can obscure objects, such as people in need of rescue. Instead of using lenses to form an image, the hologram device records the interference pattern of laser light and reconstructs the scene numerically, so it can see through flames and generate a 3D image of a room. If combined with thermal imaging, the technology could provide yet another layer of information to firefighters. Thermal imaging has three main uses: it allows firefighters to measure the temperature of a burning building and identify what stage the fire is in; it helps them understand the layout of a building and spot weak structural elements before they fall; and it can be used to find victims amid the flames.
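
One common form of such numerical reconstruction is the Fresnel method, in which a computer refocuses the recorded interference pattern by simulating the propagation of light. The Python sketch below shows the single-FFT version; the wavelength, pixel pitch, and distance are illustrative values, and the CNR device's exact algorithm is not described in the article.

```python
# Sketch of lensless digital holography: numerically refocus a recorded
# interference pattern via a single-FFT Fresnel reconstruction.
# All parameter values are illustrative.
import numpy as np

def fresnel_reconstruct(hologram, wavelength, pitch, distance):
    """Refocus a recorded hologram to a plane `distance` metres away."""
    n, m = hologram.shape
    rows, cols = np.indices((n, m))
    x = (cols - m / 2) * pitch            # sensor-plane coordinates (m)
    y = (rows - n / 2) * pitch
    # Quadratic phase factor ("chirp") of Fresnel propagation:
    chirp = np.exp(1j * np.pi * (x**2 + y**2) / (wavelength * distance))
    field = np.fft.fftshift(np.fft.fft2(hologram * chirp))
    return np.abs(field)                  # intensity image of the scene

holo = np.random.rand(512, 512)           # stand-in for a recorded hologram
image = fresnel_reconstruct(holo, wavelength=10.6e-6, pitch=20e-6, distance=1.0)
```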

10 April 2013

Robot Butler

The Home Exploring Robot Butler (HERB) can often be seen through the glass walls of the Personal Robotics Lab in Newell-Simon Hall picking up iced tea bottles or taking books off a bookshelf. Complete with fingernails and a British accent, HERB is the perfect caregiver: the robot can open doors, microwave meals, and even separate an Oreo cookie from its cream. The lab's focus is on complicated manipulation tasks with a lot of uncertainty and a lot of clutter. On a factory floor, robots can do magical things, but a typical home looks nothing like a factory floor. So researchers are trying to get robots like HERB to move from the factory floor into homes, where they can perform useful tasks that a caregiver would perform.


Nevertheless, HERB rose to the challenge and starred in what is now a widely circulated YouTube video documenting the robot's success in separating an Oreo's cookie from its cream. Not only is HERB the only research robot ever to have completed the task, but the process of improving its algorithms to separate an Oreo produced new tools that will be useful for other tasks in the future. Despite HERB's technical capabilities, the robot still needs to work on its manners. The researchers have noticed that, after five years spent improving capability, people are still hesitant to accept robots in their homes. It is not just about capability: it is about behavior, how situationally aware the robot is, and how sensitive it is to personal space.

More information:

07 April 2013

Robot Ants

Scientists have successfully replicated the behaviour of a colony of ants on the move using miniature robots, as reported in the journal PLOS Computational Biology. The researchers, based at the New Jersey Institute of Technology (Newark, USA) and at the Research Centre on Animal Cognition (Toulouse, France), aimed to discover how individual ants, when part of a moving colony, orient themselves in the labyrinthine pathways that stretch from their nest to various food sources. The study focused mainly on how Argentine ants behave and coordinate themselves in both symmetrical and asymmetrical pathways. In nature, ants do this by leaving chemical pheromone trails. This was reproduced by a swarm of sugar-cube-sized robots, called 'Alices', which leave light trails that they can detect with two light sensors mimicking the role of the ants' antennae.


At the beginning of the experiment, when the branches of the maze carried no light trail, the robots adopted an exploratory behaviour modelled on the insects' movement pattern of moving randomly but in the same general direction. This led the robots to choose the path that deviated least from their trajectory at each bifurcation of the network. If the robots detected a light trail, they would turn to follow that path. One outcome of the robotic model was the discovery that the robots did not need to be programmed to identify and compute the geometry of the network bifurcations. They managed to navigate the maze using only the pheromone light trail and the programmed directional random walk, which directed them to the more direct route between their starting area and a target area on the periphery of the maze.
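
The two rules are simple enough to state directly in code: follow a detected trail if one is present, otherwise pick the branch that deviates least from the current heading. The Python sketch below is a simplified illustration of that decision logic, assuming a trail-intensity reading per branch; it is not the Alice robots' actual control code.

```python
# Sketch of the robots' two navigation rules at a maze bifurcation.
# Angles are in degrees; trail sensing is a simplified stand-in for the
# Alice robots' two light sensors.
def choose_branch(heading, branches, trail_levels):
    """branches: direction of each branch; trail_levels: light intensity
    sensed on each branch (0 where no trail has been laid)."""
    if max(trail_levels) > 0:            # rule 1: follow the strongest trail
        return branches[trail_levels.index(max(trail_levels))]
    # rule 2: directional random walk, deviate least from current heading
    return min(branches, key=lambda b: abs(((b - heading + 180) % 360) - 180))

# A robot heading 90 degrees reaches a fork with branches at 70 and 150:
print(choose_branch(90, [70, 150], [0, 0]))  # no trail -> 70 (least deviation)
print(choose_branch(90, [70, 150], [0, 5]))  # trail on second branch -> 150
```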

More information: