29 December 2011

Brain Implants for the Paralyzed

It sounds like science fiction, but scientists around the world are getting tantalizingly close to building the mind-controlled prosthetic arms, computer cursors and mechanical wheelchairs of the future. Researchers have already implanted devices into primate brains that let the animals reach for objects with robotic arms. They've made sensors that attach to a human brain and allow paralyzed people to control a cursor by thinking about it. In the coming decades, scientists say, the field of neural prosthetics - of inventing and building devices that harness brain activity for computerized movement - is going to revolutionize how people who have suffered major brain damage interact with their world. A joint UC Berkeley and UCSF center, started a year ago, brings together the neurology expertise in San Francisco and the engineering skills across the bay to pursue that goal. Devices that allow the brain to control a machine aren't entirely new. Aside from some small steps made at other institutions - the brain-controlled computer cursor, for example - there's the cochlear implant, the first neural prosthetic tool developed and the only one that has ever seen wide use. The cochlear implant, which was invented at UCSF in the 1970s, converts sounds into electrical signals and sends them directly to the auditory nerve, bypassing the damaged parts of the ear that caused the hearing loss. The devices being developed today work under the same premise but are much more complex. Over the past decade, scientists have made enormous progress in learning how to read and decode the millions of electrical impulses that fire between neurons in the brain, controlling how our bodies move and how we see, feel and relate to the world around us.


It's not enough just to prompt the right muscles to move an arm. Consider reaching for a glass of wine: millions of signals in the brain help us determine where our own arm is in relation to our body, so our hand doesn't grope wildly for the glass. Our brains sense that it's a delicate glass that must be picked up carefully, pinched between fingers. The neurons control how fast our arm moves, making sure the wine doesn't slop over the edges. That's an astronomical amount of communication happening, all in fractions of a second, without our even being aware of it. In fact, it's more communication than our best smartphone technology can handle. The neural prosthetic devices now in their infancy work by connecting a sensor inserted into the brain directly to a computer. The signals from the brain, in the form of electrical impulses, travel through a cable to the computer, where they are decoded into instructions for some kind of action, like moving a cursor. But for a neural prosthetic device to actually be useful, it would have to be implanted in or near the brain and transmit signals wirelessly to a device like a robotic arm. It would need to be able to last forever - or at least a lifetime - on batteries that never have to be changed and won't damage the brain. Other problems are going to require an even deeper understanding of how the brain works. Scientists don't yet know what parts of the brain would be best suited for implanting a device to read electrical signals - or even whether an implanted device would work better than one that's attached to the brain's surface. It's possible that a surface device could collect enough information to be useful in controlling a neural prosthesis with much less risk to the patient.
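
To make the decoding step described above concrete, here is a minimal Python sketch of how recorded firing rates might be turned into a cursor command. The linear decoder, the 96-channel array size and all numbers are illustrative assumptions, not the method of any particular lab; real systems fit the decoding weights from training data and filter the signals heavily.

    import numpy as np

    rng = np.random.default_rng(0)
    n_channels = 96                          # assumed electrode count, e.g. a 96-channel array
    W = rng.normal(size=(2, n_channels))     # decoding weights; in practice fit from training data

    def decode_velocity(firing_rates):
        """Map one time bin of per-channel firing rates to a 2-D cursor velocity."""
        return W @ firing_rates

    rates = rng.poisson(lam=20, size=n_channels)   # one simulated bin of spike counts
    vx, vy = decode_velocity(rates)
    print(f"cursor velocity command: ({vx:.2f}, {vy:.2f})")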

More information:

http://www.sfgate.com/cgi-bin/article.cgi?file=/c/a/2011/12/27/MNHU1MDLEU.DTL

26 December 2011

Chess Robots' Problems

Deep Blue's victory over Garry Kasparov in 1997 may have shown how computers can outsmart people, but if the game is taken into the physical world, humans still win hands down. That's because, for all their software smarts, robots remain clumsy at manipulating real-world objects. A robotic chess competition held in August, for example, showed that even robotic arms used for precise work on industrial manufacturing lines have trouble when asked to negotiate a noisy, chaotic real-world environment. The contest, held at the Association for the Advancement of Artificial Intelligence annual conference in San Francisco, California, had a number of automatons competing to see which could move pieces most quickly, accurately and legally, in accordance with the rules of chess.

Some teams used vision systems to identify where pieces were, but none attempted to distinguish between a rook and a knight. Instead they relied on remembering where pieces were last placed to identify them and move them accordingly. The bots quickly ran into snags - their vision systems often misread moves. One approach, by the robotics company Road Narrows, dispensed with vision altogether, using a commercially available fixed robotic arm of the kind normally employed for light industrial applications. The winner was a team of researchers from the University at Albany, in New York, whose entry was a mobile robot with an arm attached. Despite the many variables introduced when moving a robot around, the droid's vision system managed to keep track of the board and pieces as it moved about.
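
As a rough illustration of that remember-where-the-pieces-were strategy, here is a minimal Python sketch. It assumes the vision system only reports which squares are occupied: the robot keeps its own model of piece identities and infers an ordinary move from which square emptied and which square filled between two observations. The function and square names are made up for illustration, and captures, castling and promotion would need extra cases.

    def update_board(board, occ_before, occ_after):
        """board: dict square -> piece name (the robot's memory of identities).
        occ_before/occ_after: dicts square -> bool from the vision system."""
        vacated = [sq for sq in occ_before if occ_before[sq] and not occ_after[sq]]
        filled = [sq for sq in occ_after if occ_after[sq] and not occ_before[sq]]
        if len(vacated) == 1 and len(filled) == 1:        # a simple, non-capturing move
            board[filled[0]] = board.pop(vacated[0])
        return board

    board = {"e2": "white pawn", "e7": "black pawn"}
    before = {"e2": True, "e4": False, "e7": True}
    after = {"e2": False, "e4": True, "e7": True}
    print(update_board(board, before, after))             # the pawn is remembered as now being on e4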

More information:

http://www.newscientist.com/blogs/onepercent/2011/12/chess-robots-have-trouble-gras.html

22 December 2011

An Ultrafast Imaging System

More than 70 years ago, the M.I.T. electrical engineer Harold (Doc) Edgerton began using strobe lights to create remarkable photographs: a bullet stopped in flight as it pierced an apple, the coronet created by the splash of a drop of milk. Now scientists at M.I.T.'s Media Lab are using an ultrafast imaging system to capture light itself as it passes through liquids and objects, in effect snapping a picture in less than two trillionths of a second. The project began as a whimsical effort to literally see around corners: by capturing reflected light and then computing the paths of the returning light, the researchers hope to build images of rooms that would otherwise not be directly visible. The researchers modified a streak tube, a supersensitive piece of laboratory equipment that scans and captures light. Streak tubes are generally used to convert streams of photons into streams of electrons. They are fast enough to record the progress of packets of laser light fired repeatedly into a bottle filled with a cloudy fluid. The instrument is normally used to measure laboratory phenomena that take place in an ultra-short timeframe. Typically, it offers researchers information on intensity, position and wavelength in the form of data, not an image.

By modifying the equipment, the researchers were able to create slow-motion movies showing what appears to be a bullet of light moving from one end of the bottle to the other. The pulses of laser light enter through the bottom and travel to the cap, generating a conical shock wave that bounces off the sides of the bottle as the bullet passes. The streak tube scans and captures light in much the same way a cathode ray tube emits and paints an image on the inside of a computer monitor. Each horizontal line is exposed for just 1.71 picoseconds (trillionths of a second), enough time for the laser beam to travel less than half a millimeter through the fluid inside the bottle. To create a movie of the event, the researchers record about 500 frames in just under a nanosecond, or a billionth of a second. Because each individual movie has a very narrow field of view, they repeat the process a number of times, shifting the field of view vertically to build a complete scene that shows the beam moving from one end of the bottle, bouncing off the cap and then scattering back through the fluid. If a bullet were tracked in the same fashion moving through the same fluid, the resulting movie would last three years.
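
A quick back-of-the-envelope check of those figures, sketched in Python. The refractive index of the cloudy fluid is an assumption (roughly water-like, about 1.33); the other numbers come from the text.

    c = 3.0e8            # speed of light in vacuum, m/s
    n = 1.33             # assumed refractive index of the scattering fluid
    exposure = 1.71e-12  # exposure time per horizontal line, s
    print(f"light covers ~{(c / n) * exposure * 1e3:.2f} mm per line")  # ~0.39 mm, under half a millimeter

    frames, window = 500, 1e-9          # about 500 frames in just under a nanosecond
    print(f"~{window / frames * 1e12:.0f} ps per frame")                # ~2 ps, consistent with the line exposure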


More information:

http://www.nytimes.com/2011/12/13/science/speed-of-light-lingers-in-face-of-mit-media-lab-camera.html?_r=1

21 December 2011

A Virus for the Human Mind

The field of 'synthetic biology' is in its infancy. Experts working within the field believe that our engineering expertise is out-accelerating natural evolution by a factor of millions - and some warn that synthetic biology could spin out of control, leading to a world where hackers engineer viruses or bacteria to control human minds.

Researchers predict a world where we can 'print' DNA and even 'decode' it. A literal virus - injected into a 'host' in the guise of a vaccine, say - could be used to control behaviour. Some warn that synthetic biology will lead to new forms of bioterrorism. Bio-crime today is akin to computer crime in the early Eighties: few initially recognized the problem, but it grew exponentially.

More information:

http://www.dailymail.co.uk/sciencetech/article-2073936/Could-hackers-develop-virus-infect-human-mind.html

18 December 2011

Humanizing the Human-Computer Interface

Researchers at Toyohashi Tech's Graduate School of Engineering are trying to 'humanize' the computer interface. They are working to expand human-computer communication by means of a web-based multimodal interactive (MMI) approach employing speech, gesture and facial expressions, as well as the traditional keyboard and mouse. Although many MMI systems have been tried, few are widely used. Some reasons for this lack of use are their complexity of installation and compilation, and their general inaccessibility for ordinary computer users. To resolve these issues, the researchers have designed a web browser-based MMI system that uses only open-source software and de facto standards.

This openness has the advantage that the system can be executed on any web browser that handles JavaScript, Java applets and Flash, and can be used not only on a PC but also on mobile devices such as smartphones and tablet computers. The user can interact with the system by speaking directly with an anthropomorphic agent that employs speech recognition, speech synthesis and facial image synthesis. For example, a user can recite a telephone number, which is recorded by the computer and sent via the browser to a session manager on the server housing the MMI system. The data is processed by the speech recognition software and then passed to a scenario interpreter.
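
To make that browser-to-server flow more concrete, here is a minimal Python sketch of a session manager that accepts an utterance posted by the browser and hands it to a toy scenario interpreter. The endpoint, the keyword-based scenario rules and the simplification of posting already-recognised text rather than audio are all illustrative assumptions, not the Toyohashi Tech design.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    SCENARIO = {  # toy scenario interpreter: keyword -> agent reply
        "telephone": "Please recite the telephone number now.",
        "hello": "Hello! How can I help you?",
    }

    class SessionManager(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers["Content-Length"])
            utterance = json.loads(self.rfile.read(length))["text"].lower()
            reply = next((r for k, r in SCENARIO.items() if k in utterance),
                         "Sorry, I did not understand that.")
            body = json.dumps({"reply": reply}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)  # the browser would speak this reply via speech synthesis

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), SessionManager).serve_forever()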

More information:

http://www.physorg.com/news/2011-12-multimodal-interaction-humanizing-human-computer-interface.html

13 December 2011

BCIs Play Music Based on Moods

Scientists are developing a brain-computer interface (BCI) that recognises a person's affective state and plays music to them based on their mood. The duo from the universities of Reading and Plymouth believe the system could be used as a therapeutic aid for people suffering from certain forms of depression. The scientists are not asking the subject to be happy or sad; they want to recognise the subject's state so that they can provide the right stimulus.


The subject is not in control, which is an unusual feature: traditionally, the user has had complete control over how a BCI system responds. The project would use an electroencephalograph (EEG) to transfer the electrical signals from the patient's scalp via a series of wires to an amplifier box, which, in turn, would be connected to a computer. The computer would then generate its own synthetic music based on the user's mental state.
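
As a purely illustrative sketch of that last step, here is a Python function mapping an affective-state estimate to parameters for synthetic music. The valence/arousal inputs, the tempo and key mapping and every threshold are assumptions made up for illustration; they are not the Reading and Plymouth researchers' system.

    def music_parameters(valence, arousal):
        """valence, arousal: assumed affective-state estimates in [-1, 1] from the EEG stage."""
        tempo_bpm = 60 + 60 * (arousal + 1) / 2       # calmer state -> slower music (assumption)
        mode = "major" if valence >= 0 else "minor"   # more positive state -> major key (assumption)
        return {"tempo_bpm": round(tempo_bpm), "mode": mode}

    print(music_parameters(valence=-0.4, arousal=0.2))  # {'tempo_bpm': 96, 'mode': 'minor'}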

More information:

http://www.theengineer.co.uk/sectors/medical-and-healthcare/news/brain-computer-interface-plays-music-based-on-persons-mood/1011153.article

7 December 2011

Brain Limiting Global Data Growth

In the early 19th century, the German physiologist Ernst Weber discovered that the smallest increase in weight a human can perceive is proportional to the initial mass. This observation underlies what is now known as the Weber-Fechner law, which says that the relationship between stimulus and perception is logarithmic. It's straightforward to apply this rule to modern media. Take images, for example. An increase in the resolution of a low-resolution picture is more easily perceived than the same increase applied to a higher-resolution picture. When two parameters are involved, the relationship between the stimuli and perception is the square of the logarithm. This way of thinking about stimulus and perception clearly indicates that the Weber-Fechner law ought to have a profound effect on the rate at which we absorb information.
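
Stated compactly: Weber's observation is that the just-noticeable change in a stimulus S is a fixed fraction k_W of S, and integrating this gives Fechner's logarithmic response, where S_0 is the detection threshold and k a sense-dependent constant.

    \frac{\Delta S}{S} = k_W, \qquad P = k \ln\!\left(\frac{S}{S_0}\right)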


Researchers at Goethe University Frankfurt in Germany have now looked for signs of the Weber-Fechner law in the size distribution of files on the internet. They measured the type and size of the files pointed to by every outward link from Wikipedia and the Open Directory Project, dmoz.org - a total of more than 600 million files. Some 58 per cent of these pointed to image files, 32 per cent to application files, 5 per cent to text files, 3 per cent to audio and 1 per cent to video files. They discovered that the audio and video file distributions followed a log-normal curve, which is compatible with a logarithm-squared relationship. By contrast, image files follow a power-law distribution, which is compatible with a logarithmic relationship. That's exactly what the Weber-Fechner law predicts.
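
A rough Python sketch, under obviously simplified assumptions, of the kind of check involved: if file sizes are log-normally distributed, their logarithms should look Gaussian, so the skewness of the log-sizes should be close to zero. The synthetic sample below merely stands in for the real measurements.

    import numpy as np

    rng = np.random.default_rng(1)
    sizes = rng.lognormal(mean=13.0, sigma=1.5, size=100_000)  # synthetic "video file" sizes in bytes
    logs = np.log(sizes)
    skew = ((logs - logs.mean()) ** 3).mean() / logs.std() ** 3
    print(f"mean log-size: {logs.mean():.2f}, std: {logs.std():.2f}, skewness: {skew:.3f}")
    # near-zero skewness of the log-sizes is consistent with a log-normal size distribution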

More information:

http://www.technologyreview.com/blog/arxiv/27379/?p1=blogs

1 December 2011

Robots in Reality

Consider the following scenario: A scout surveys a high-rise building that's been crippled by an earthquake, trapping workers inside. After looking for a point of entry, the scout carefully navigates through a small opening. An officer radios in, 'Go look down that corridor and tell me what you see'. The scout steers through smoke and rubble, avoiding obstacles and finding two trapped people, reporting their location via live video. A SWAT team is then sent to lead the workers safely out of the building. Despite its heroics, though, the scout is impervious to thanks. It just sets its sights on the next mission, as any robot would. In the not-too-distant future, such robotics-driven missions will be a routine part of disaster response, researchers at MIT predict. Robots are ideal for dangerous and covert tasks, such as navigating nuclear disasters or spying on enemy camps.


They can be small and resilient, but more importantly, they can save valuable manpower. The key hurdle to such a scenario is robotic intelligence: flying through unfamiliar territory while avoiding obstacles is an incredibly complex computational task, and understanding verbal commands in natural language is even trickier. Researchers in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) are designing robotic systems that do more things intelligently by themselves. For instance, the team is building micro-aerial vehicles (MAVs), about the size of a small briefcase, that navigate independently, without the help of a global positioning system (GPS). Most drones depend on GPS to get around, which limits the areas they can cover. The group is also building social robots that understand natural language.

More information:

http://web.mit.edu/newsoffice/2011/profile-roy-1128.html