29 April 2009

Simulated Brain Closer to Thought

A detailed simulation of a small region of a brain, built molecule by molecule, has been constructed and has recreated experimental results from real brains. The ‘Blue Brain’ has been put in a virtual body, and observing it gives the first indications of the molecular and neural basis of thought and memory. According to the researchers, scaling the simulation up to the human brain is only a matter of money. The work was presented at the European Future Technologies meeting in Prague. The Blue Brain project launched in 2005 as the most ambitious brain simulation effort ever undertaken. While many computer simulations have attempted to code in ‘brain-like’ computation or to mimic parts of the nervous systems and brains of a variety of animals, the Blue Brain project was conceived to reverse-engineer mammalian brains from real laboratory data and to build a computer model that reaches down to the level of the molecules that make them up. The first phase of the project is now complete: researchers have modeled the neocortical column, a repeating unit of the neocortex - the part of the mammalian brain responsible for higher functions and thought.

The thing about the neocortical column is that you can think of it as an isolated processor. It is very much the same from mouse to man - it gets a bit larger and a bit wider in humans, but the circuit diagram is very similar. The column is being integrated into a virtual reality agent - a simulated animal in a simulated environment - so that the researchers can observe the detailed activities in the column as the animal moves around the space. It starts to learn things and starts to remember things. We can actually see when it retrieves a memory, and where it retrieved it from, because we can trace back every activity of every molecule, every cell and every connection and see how the memory was formed. The next phase of the project will make use of a more advanced version of the IBM Blue Gene supercomputer used in the research to date. It begins with a 'molecularisation' process: adding in all the molecules and biochemical pathways to move toward gene expression and gene networks.
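The Blue Brain column is simulated at molecular detail on a supercomputer, but the basic ingredient - neurons whose membrane voltages integrate input and fire spikes - can be illustrated with a toy model. The sketch below is a single leaky integrate-and-fire neuron, orders of magnitude simpler than anything in the project, with invented parameter values:

```python
# Toy leaky integrate-and-fire (LIF) neuron: a vastly simplified
# illustration of spiking dynamics. Blue Brain models far more detail
# (cell morphology, ion channels, molecules); all values here are
# illustrative textbook numbers, not project parameters.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, r_m=10.0):
    """Return spike times (ms) for a LIF neuron driven by a current trace."""
    v = v_rest
    spikes = []
    for step, i_ext in enumerate(input_current):
        # Membrane potential decays toward rest while the input drives it up.
        v += (-(v - v_rest) + r_m * i_ext) * (dt / tau)
        if v >= v_thresh:              # threshold crossed: emit a spike
            spikes.append(step * dt)
            v = v_reset                # reset after spiking
    return spikes

spikes = simulate_lif([2.0] * 200)     # 200 ms of constant 2.0 nA drive
print(f"{len(spikes)} spikes, first at t={spikes[0]:.0f} ms")
```

With constant suprathreshold drive the neuron fires at a regular rate; tracing every such variable at every step is, in miniature, what lets the researchers follow how a memory forms.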

More information:


26 April 2009

Using EEG to Send Tweets

In a revolutionary step for modern communication technology, a doctoral student posted a status update on the social networking Web site Twitter - simply by thinking about it. The message - "using EEG to send tweet" - just 23 characters long, demonstrated a natural, manageable way in which "locked-in" patients can couple brain-computer interface technologies with modern communication tools. The researcher, a biomedical engineering doctoral student at the University of Wisconsin-Madison, is among a growing group of researchers worldwide who aim to perfect a communication system for users whose bodies do not work but whose brains function normally. Such people include those with amyotrophic lateral sclerosis (ALS), brain-stem stroke or high spinal cord injury.

Brain-computer interface systems employ an electrode-studded cap wired to a computer; the electrodes detect electrical signals in the brain - essentially, thoughts - and translate them into physical actions, such as moving a cursor on a computer screen. Building on brain activity evoked by changes in an object on screen, the researchers developed a simple, elegant communication interface. The interface consists of a keyboard displayed on a computer screen, on which each letter flashes individually. If you are looking at the 'R' while the other letters flash, nothing happens; but when the 'R' itself flashes, the brain registers that something is different about the object it was just paying attention to.
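The flash-and-detect scheme can be sketched in a few lines. The simulation below is hypothetical (the article does not describe the actual system's code): every flash produces a noisy response, flashes of the attended letter carry an extra bump, and averaging over repetitions makes that letter stand out:

```python
import random

# Hypothetical sketch of the flashing-letter selection logic: the attended
# letter's flashes evoke a slightly larger response, and averaging over
# many repetitions lets it rise above the noise. Amplitudes are invented.
random.seed(1)
LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def measure_response(flashed, attended):
    """Fake EEG amplitude for one flash: noise, plus a bump when the
    flashed letter is the one the user is attending to."""
    return random.gauss(0.0, 1.0) + (2.0 if flashed == attended else 0.0)

def spell_one_letter(attended, repetitions=15):
    totals = {c: 0.0 for c in LETTERS}
    for _ in range(repetitions):          # each letter flashes once per round
        for c in LETTERS:
            totals[c] += measure_response(c, attended)
    # The attended letter accumulates the largest summed response.
    return max(totals, key=totals.get)

message = "".join(spell_one_letter(c) for c in "TWEET")
print(message)
```

Spelling is slow - every character needs many flash rounds - which is why the 23-character tweet was itself a demonstration of patience as much as technology.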

More information:


22 April 2009

Smartphone Ultrasound Imaging

Computer engineers at Washington University in St. Louis are bringing a minimalist approach to medical care and computing by coupling USB-based ultrasound probe technology with a smartphone, creating a compact, mobile computational platform and a medical imaging device that fits in the palm of a hand. The researchers have made commercial USB ultrasound probes compatible with Microsoft Windows Mobile-based smartphones, thanks to a $100,000 grant from Microsoft. In order to make commercial USB ultrasound probes work with smartphones, the researchers had to optimize every aspect of probe design and operation, from power consumption and data transfer rate to image formation algorithms. As a result, it is now possible to build smartphone-compatible USB ultrasound probes for imaging the kidney, liver, bladder and eyes, endocavity probes for prostate and uterine screenings and biopsies, and vascular probes for imaging veins and arteries for starting IVs and central lines. Both medicine and global computer use will never be the same: you can carry around a probe and cell phone and image on the fly. Twenty-first-century medicine is defined by medical imaging, and yet 70 percent of the world's population has no access to it; it is hard to take an MRI or CT scanner to a rural community without power.
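The article does not detail the probes' image formation algorithms, but the classic approach is delay-and-sum beamforming: each array element's echo is read at the round-trip travel time to a focal point, so echoes add up coherently only where a reflector actually sits. A minimal sketch, with an invented array geometry:

```python
import math

# Sketch of delay-and-sum beamforming, the classic ultrasound image
# formation step. The array geometry and sample rate are invented for
# illustration; the actual probe firmware is not public.
SPEED_OF_SOUND = 1540.0    # m/s, typical for soft tissue
SAMPLE_RATE = 40e6         # samples per second

def delay_and_sum(channel_data, element_xs, focus_x, focus_z):
    """Sum each element's echo sample at the round-trip delay to the focus."""
    total = 0.0
    for samples, ex in zip(channel_data, element_xs):
        dist = 2.0 * math.hypot(focus_x - ex, focus_z)   # out and back, metres
        idx = int(round(dist / SPEED_OF_SOUND * SAMPLE_RATE))
        if idx < len(samples):
            total += samples[idx]
    return total

# Synthetic check: one point scatterer at x=0, depth 3 cm; each of five
# elements (1 mm pitch) records a unit echo at its round-trip delay.
elements = [-0.002 + 0.001 * i for i in range(5)]
channels = []
for ex in elements:
    samples = [0.0] * 4000
    dist = 2.0 * math.hypot(0.0 - ex, 0.03)
    samples[int(round(dist / SPEED_OF_SOUND * SAMPLE_RATE))] = 1.0
    channels.append(samples)

on_target = delay_and_sum(channels, elements, 0.0, 0.03)    # focus on scatterer
off_target = delay_and_sum(channels, elements, 0.0, 0.02)   # focus elsewhere
print(on_target, off_target)   # echoes add coherently only at the scatterer
```

Running this over a grid of focal points yields an image; doing it fast enough on a phone is exactly the kind of optimization the researchers describe.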

The plan is to train people in remote areas of the developing world in the basics of gathering data with the phones and sending it to a centralized unit many miles, or half a world, away, where specialists can analyze the image and make a diagnosis. Another promising application is for caregivers of patients with Duchenne muscular dystrophy (DMD), a degenerative disease that often strikes young boys and robs them of their lives by their late 20s, and for which there is no cure. The leading treatment to slow its progression is a daily dose of steroids. Patients often experience side effects from the steroids, such as behavioral problems and weight gain, which are dose related. Researchers now know that physical changes in muscle tissue can indicate the efficacy of the steroids; measuring these changes with ultrasound may allow them to optimize steroid dosing to maximize efficacy while minimizing side effects. Another application could find its way to the military: medics could use the small, portable probe and phone to quickly locate shrapnel wounds and decide whether to transport a wounded soldier or treat him in the field.

More information:


20 April 2009

Eye Movement Robot Communication

Social referencing is the ability to communicate with nonverbal signals. Children, in particular, learn much from the expressions of adults in new situations - whether to be frightened, happy, sad and so on. Nonverbal communication is important for everybody, but in its purest form, perfected by many a primary school teacher, it is possible to control young children with eyebrow movements alone (a skill sadly lacking in many workplaces). Now nonverbal communication is being roboticised by a team of researchers at the Tokyo Institute of Technology. The team has built an ‘eye robot’ consisting of nothing more than a pair of eyeballs capable of conveying a wide range of nonverbal signals. The proposed system provides a user-friendly interface so that humans and robots can communicate in a natural fashion. It is not hard to create expressions with synthetic eyes.

The difficulty for a computer is in knowing what kind of message this expression conveys and when to use it. The team has worked this out by setting up the device to produce expressions at random and then asking viewers to evaluate each expression. Using the results of these questionnaires, the team has created a ‘mentality space’ for expressions. Users talk to the eye robot which evaluates the conversation using a speech recognition program and then selects an appropriate eye expression from this space. Clearly, eye expression is an important part of the nonverbal communication that goes on between humans. Crack this code and the team could have a winner on its hands. But while it is relatively straightforward to make eyes that look happy or sad, it will be much harder to create synthetic eyeballs that can hold their own in a nonverbal conversation.
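The selection step described above can be sketched as a nearest-neighbour lookup. The expression names and the two coordinate axes below (pleasure and arousal) are assumptions for illustration, not the team's actual mentality space:

```python
import math

# Hypothetical sketch: each eye expression occupies a point in a 2-D
# "mentality space" (axes and coordinates invented), and the robot picks
# the expression nearest the mood that speech analysis assigns to the
# conversation.
EXPRESSIONS = {            # (pleasure, arousal), both in [-1, 1]
    "happy":     ( 0.8,  0.5),
    "sad":       (-0.7, -0.5),
    "surprised": ( 0.1,  0.9),
    "sleepy":    ( 0.0, -0.9),
    "angry":     (-0.8,  0.6),
}

def select_expression(pleasure, arousal):
    """Return the expression nearest the estimated mood in mentality space."""
    return min(EXPRESSIONS,
               key=lambda name: math.dist(EXPRESSIONS[name],
                                          (pleasure, arousal)))

print(select_expression(0.6, 0.4))    # a cheerful conversation
print(select_expression(-0.5, -0.6))  # a gloomy one
```

The hard part, as the article notes, is not this lookup but placing the expressions in the space in the first place - which is what the randomised-expression questionnaires were for.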

More information:


15 April 2009

Natural Barcode Face Recognition

Our faces contain ‘barcodes’ of information which help us recognise people and may have implications for improving face recognition software, according to a study from the UCL Institute of Ophthalmology. Faces are unique in their ability to convey a vast range of information about people, including their gender, age and mood. For social animals such as humans, the ability to locate a face is important, as this is where we pick up many of our cues for social interactions. While recognising a person’s face is a complex process, the first steps of visual processing in the brain are thought to be more basic and to rely on the orientation of features such as lines. By manipulating images of the faces of celebrities such as Coldplay’s Chris Martin (incidentally, a UCL Greek & Latin alumnus) and actor George Clooney, researchers at UCL and the University of Stirling showed that nearly all the information we need to recognise faces is contained in horizontal lines, such as the lines of the eyebrows, the eyes and the lips. Further analysis revealed that these features could be simplified into black and white lines of information - in other words, barcodes. Exposed skin on our forehead and cheeks tends to be shiny, whilst our eyebrows and lips, and the shadows cast in the eye sockets and under the nose, tend to be darker.

The resulting horizontal stripes of information are reminiscent of a supermarket barcode. Supermarket barcodes were developed as an efficient way of providing information: straight, one-dimensional lines are far easier to process than two-dimensional characters such as numbers. In a similar way, our faces may have evolved to convey effectively the information needed to recognise them. Researchers analysed various natural images, such as flowers and landscapes, and found that faces are unique in conveying all their useful information in horizontal stripes. The barcode pattern has many advantages: it can be recognised efficiently by the visual parts of the brain, it is easy to locate in complex scenes, and it appears to be resistant to changes in the overall appearance of the face. The research may have implications for improving face recognition software, for example at airports, where police may need to locate a suspect in a crowd on CCTV cameras. The ability of such software to recognise individuals has improved vastly, but it is still poor at the first step: locating faces in complex scenes. The research may also help explain our ability to see faces where they do not exist, for example in clouds or in flames.
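The reduction from a 2-D face to a 1-D barcode can be illustrated directly: average each row's brightness and threshold the result. The tiny 8x8 'image' below is a crude stand-in for a real photograph, not data from the study:

```python
# Minimal sketch of the barcode idea: collapse each row of a grayscale
# face image to its mean brightness, then threshold, leaving a 1-D
# pattern of light and dark horizontal stripes. The 8x8 grid is a crude
# invented stand-in for a real photograph.
face = [
    [200] * 8,        # forehead: bright skin
    [200] * 8,
    [60] * 8,         # eyebrows / eye sockets: dark
    [80] * 8,
    [190] * 8,        # cheeks and nose: bright
    [190] * 8,
    [70] * 8,         # lips / shadow under the nose: dark
    [180] * 8,        # chin
]

def face_barcode(image):
    row_means = [sum(row) / len(row) for row in image]
    cutoff = sum(row_means) / len(row_means)     # global mean as threshold
    return "".join("|" if m < cutoff else " " for m in row_means)

print(face_barcode(face))   # dark rows become bars, like a supermarket barcode
```

Because the profile depends only on coarse row averages, it survives changes in fine detail - one reason the barcode is robust to changes in a face's overall appearance.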

More information:


11 April 2009

Sensitive Robots

Robots are commonplace in production halls, but they are only allowed to operate in protected areas so as not to endanger humans with their movements. A new cost-efficient, robust force sensor can make robots sensitive to potential collisions. The arm of an industrial robot steadily approaches an employee who is so absorbed in his work that he doesn’t notice - a risky situation. But as soon as the robot even slightly touches the person, it immediately retracts its steel arm. This vision could soon become reality thanks to a cost-efficient force and torque sensor developed by research scientists at the Fraunhofer Institute for Silicon Technology ISIT in Itzehoe. The sensor sits on the outer joint of the robot’s arm. Glued onto a steel plate, the transducer can be screwed in between the arm and the gripper. Equipped with the new sensors, robotic assistants would be sufficiently trustworthy to work alongside their human colleagues - something that has been prohibited until now for safety reasons. The sensor measures the forces and torques exerted by the robot arm. It functions in a similar way to a strain gauge: its core element is a long wire through which an electric current flows.

If the wire stretches, it becomes longer and thinner; its resistance increases, so less current flows through it. The new sensor is made from a single square piece of silicon, with bridges carrying electrical resistors incorporated on each side. If the robot arm bumps into an obstacle, the shape of the silicon changes very slightly - by just a few micrometers, to be precise. This causes either more or less current to flow, depending on whether a bridge has been stretched or compressed. Because the sensor consists of just a single piece of silicon, it is less error-prone than its conventional counterparts. Manufacturers normally glue the resistors on separately, which means they are often positioned somewhat inaccurately. There is no chance of this happening with the new sensor, whose resistors are precisely aligned, and the system’s size can be varied. The sensor can also help to program a robot. In learning mode, it measures the force with which an employee guides the robot arm. Instead of laboriously entering the coordinates of the movements into the computer, the employee can simply guide the robot by touch and teach it the required motion sequences.
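The wire-gauge principle is easy to put numbers on: resistance is R = ρL/A, stretching increases L while shrinking the cross-section A, and a Wheatstone bridge converts the tiny resistance change into a measurable voltage. The values below are illustrative, not the Fraunhofer sensor's actual parameters:

```python
# Back-of-the-envelope sketch of the strain-gauge principle: stretching
# a wire raises its resistance R = rho * L / A, and a Wheatstone bridge
# turns the small change into a voltage. All values are illustrative.

def stretched_resistance(rho, length, area, strain, poisson=0.3):
    """Resistance of a wire stretched by `strain` (dimensionless)."""
    new_length = length * (1.0 + strain)
    new_area = area * (1.0 - poisson * strain) ** 2   # the wire gets thinner
    return rho * new_length / new_area

def bridge_output(v_in, r_active, r_ref):
    """Quarter-bridge Wheatstone output: three fixed arms, one gauge."""
    return v_in * (r_active / (r_active + r_ref) - 0.5)

# 1 m of copper wire, 1e-10 m^2 cross-section: ~170 ohm unstrained.
R0 = stretched_resistance(1.7e-8, 1.0, 1e-10, 0.0)
R1 = stretched_resistance(1.7e-8, 1.0, 1e-10, 1e-3)   # 0.1% strain
print(f"resistance rises {R1 - R0:.3f} ohm -> "
      f"{1e3 * bridge_output(5.0, R1, R0):.2f} mV bridge signal")
```

A deformation of a few micrometers over a centimetre-scale part is a strain of this order, which is why the bridge circuit is needed: the raw resistance change is a fraction of a percent.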

More information:


08 April 2009

Multitouch Interfaces

Over the past few years, the world has fallen in love with multitouch displays. But today's consumer interfaces have drawbacks: touch screens such as those on the iPhone and Plastic Logic's upcoming e-reader work only with a finger, not a stylus or even a gloved hand. Other displays, such as Microsoft's Surface and Perceptive Pixel's wall-sized screens, are rigid, relatively expensive and currently fairly bulky. New research from New York University, however, promises multitouch interfaces that are cheap and flexible and can be used by fingers and objects alike. The technology, called Inexpensive Multi-Touch Pressure Acquisition Devices (IMPAD), can be made paper thin, and can easily scale down to fit on small portable devices or up to cover an entire table or wall. The iPhone captures touch information by measuring a change in capacitance when a finger or other conducting object comes in contact with the display. Surface screens use cameras to see the position of objects on the tabletop. Perceptive Pixel's displays also use cameras, but in a different way: they track infrared light as it scatters in the presence of a finger or stylus. While Perceptive Pixel's touch screens do collect pressure information, it remains impractical to use cameras in smaller touch interfaces. IMPAD takes a different approach, measuring the change in electrical resistance when a person or object applies pressure to a specially designed pad consisting of only a few layers of material.

A problem that has been endemic to multitouch sensors is that touch is binary: you are either touching the surface or you are not. A significant amount of potentially useful information is thrown away because the sensor isn't capturing the subtleties. With a pressure-sensitive touch pad, however, a device can see how hard a person presses, opening up another dimension of the user interface. The researchers have shown that their pressure-sensitive touch pad can be used for virtual sculpting and painting applications, for a simulated mouse with left clicks, right clicks and drags, and for musical instruments such as a piano keyboard. The hardware that makes up the demonstrated prototype is relatively straightforward. It consists of two plastic sheets, about 8 inches by 10 inches, each with parallel lines of electrodes spaced a quarter inch apart. The sheets are arranged so that the electrodes cross, creating a grid; each intersection is essentially a pressure sensor. Crucially, both sheets are covered with a layer of force-sensitive resistor (FSR) ink, a type of ink that has microscopic bumps on its surface. When something coated in the ink is pressed, the bumps move together and touch, conducting electricity; the harder you press, the more it conducts. In making their touch pad, the researchers had to ensure that it could detect the exact placement of a finger even though the sensors are a quarter inch apart - something that designers of electronic instruments did not need to consider.
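Resolving a fingertip more finely than the quarter-inch electrode pitch is commonly done with a pressure-weighted centroid, since one press covers several neighbouring intersections. The article does not give the team's exact method; the sketch below, with invented readings, shows the general idea:

```python
# Sketch of sub-sensor interpolation on an IMPAD-style grid: a fingertip
# presses on several neighbouring electrode intersections at once, so a
# pressure-weighted centroid recovers a position finer than the grid
# pitch. The pressure readings are invented.
PITCH = 0.25   # inches between electrode intersections

def touch_position(pressure):
    """Pressure-weighted centroid of a grid of sensor readings."""
    total = x_sum = y_sum = 0.0
    for row_idx, row in enumerate(pressure):
        for col_idx, p in enumerate(row):
            total += p
            x_sum += p * col_idx * PITCH
            y_sum += p * row_idx * PITCH
    return x_sum / total, y_sum / total, total   # position plus overall force

# A fingertip centred between columns 1 and 2 spreads pressure unevenly:
grid = [
    [0.0, 0.2, 0.1, 0.0],
    [0.0, 0.8, 0.5, 0.0],
    [0.0, 0.3, 0.1, 0.0],
]
x, y, force = touch_position(grid)
print(f"touch at ({x:.2f}, {y:.2f}) in, total force {force:.1f}")
```

The same readings also give total force for free, which is exactly the extra input dimension the pressure-sensitive pad exposes.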

More information:


07 April 2009

Human Robot Relationships

Tank is a receptionist. He's competent at his job, greeting visitors at Carnegie Mellon's School of Computer Science and giving directions. But sometimes he can be whiny and not all that helpful. Ask him how he is, and you'll get an earful: Tank, it turns out, was a CIA operative in Iraq before launching a new career in the service industry. There's another unusual thing about Tank: He's not human. He's a robot. He looks like a Shop Vac with a flat panel monitor for a head. But his face is human and expressive, and his eyes move from side to side. To communicate with him, people type their messages on a keyboard, and he responds in a smooth monotone. After the chat, he leaves them with these parting words of wisdom: ‘Next time your computer isn't working, don't hit it. We have feelings, too’. Five minutes with Tank, and one starts to believe it's true. People tend to humanize their machines. They've been known to yell at their computers when they malfunction, or at least talk to them. In another Carnegie Mellon project, a researcher in human-computer interaction studied the way families behaved when a robotic vacuum cleaner was introduced into the household. The study involved six families in the Pittsburgh and Harrisburg areas. Half were given Roombas -- robot vacuum cleaners made by iRobot -- and the other half were given Flair handheld upright vacuums. The families with the Flair showed little or no change in the way they cleaned, and some lost interest in using it. But the robot changed routines. In most households, women had handled the bulk of the cleaning chores. But men and children got interested when they could use the Roomba.

The university's Social Robots Project investigates human-robot social interaction and long-term relationships between human and machine. The robots are designed to perform practical services, such as giving visitors directions. But they also are given personalities. While Tank is a product of the robotics and computer science departments, he crosses disciplines. Students in the drama department develop an ongoing story line and write dialogue for Tank. The soap opera/serial stories help reinforce the evolving character and compel people to keep up with what's new. In one chapter of his life, he was excited about an upcoming date. It didn't go well, and the robot was depressed the following week. If someone types in a swear word or is abusive, Tank will be in a bad mood. When that happens, visitors react to him differently. Regulars familiar with the robot will stop and chat, while strangers will avoid the robot. Humanizing technology is important, especially when developing robots to work with people who aren't used to interacting with technology, such as the elderly. Medical-service robots could help care for frail seniors -- reminding them to take medications, helping with physical therapy exercises at home or navigating with automated wheelchairs or walkers. Interactive computer technology is also being used with the young. CMU's Project LISTEN (Literacy Innovation that Speech Technology Enables) uses computers to tutor elementary school children. Students read aloud to the automated tutor, which senses when they pause or stumble over words, and helps them through the problem passage. Aimed at helping struggling readers, it also rewards success with verbal praise.

More information: