28 March 2011

BrainGate: Neural Interface System

An investigational implanted system designed to translate brain signals into control of assistive devices has allowed a woman with paralysis to control a computer cursor accurately 2.7 years after implantation, a key demonstration that neural activity can be read out and converted into action for an unprecedented length of time. In an important milestone for the longevity and utility of implanted brain-computer interfaces, the woman, who has tetraplegia, continued to control a computer cursor accurately through neural activity alone more than 1,000 days after receiving the BrainGate implant, according to the team of physicians, scientists, and engineers developing and testing the technology at Brown University, the Providence VA Medical Center, and Massachusetts General Hospital (MGH).


The woman performed two ‘point-and-click’ tasks each day by thinking about moving the cursor with her hand. In both tasks she averaged greater than 90 percent accuracy. Some on-screen targets were as small as the effective area of a Microsoft Word menu icon. The results highlight the potential for an intracortical neural interface system to provide a person with locked-in syndrome with reliable, continuous point-and-click control of a standard computer application. The BrainGate system is a combination of hardware and software that directly senses the electrical signals produced by neurons in the brain that control movement. By decoding those signals and translating them into digital instructions, the system is being evaluated for its ability to give people with paralysis control of external devices such as computers, robotic assistive devices, or wheelchairs.
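
The article does not describe the decoding algorithm itself, but a common approach in the intracortical BCI literature is to map binned spike counts from the electrode array linearly onto cursor velocity. The sketch below illustrates that idea only; the array size, decoder weights and dwell-click rule are illustrative assumptions, not BrainGate's actual implementation.

    # Minimal sketch (not the BrainGate implementation): a linear decoder that
    # maps binned spike counts from an intracortical array to 2-D cursor velocity.
    # Weights and data here are illustrative stand-ins.
    import numpy as np

    rng = np.random.default_rng(0)

    N_CHANNELS = 96          # e.g. a 96-electrode array
    BIN_MS = 100             # firing rates binned every 100 ms

    # Hypothetical decoder weights, as if learned during a calibration session
    # (e.g. by regression against intended cursor movements).
    W = rng.normal(scale=0.05, size=(2, N_CHANNELS))   # maps rates -> (vx, vy)
    baseline = rng.uniform(5, 20, size=N_CHANNELS)     # per-channel mean rate

    def decode_velocity(spike_counts):
        """Convert one bin of spike counts into a cursor velocity command."""
        rates = spike_counts / (BIN_MS / 1000.0)        # counts -> spikes/s
        return W @ (rates - baseline)                   # mean-centred linear read-out

    # Simulated control loop: integrate decoded velocity into a cursor position
    # and register a "click" if the cursor dwells near a target long enough.
    cursor = np.zeros(2)
    target, radius, dwell_bins = np.array([4.0, -2.0]), 1.0, 5
    inside = 0
    for _ in range(200):
        counts = rng.poisson(baseline * (BIN_MS / 1000.0))  # stand-in for real neural data
        cursor += decode_velocity(counts) * (BIN_MS / 1000.0)
        inside = inside + 1 if np.linalg.norm(cursor - target) < radius else 0
        if inside >= dwell_bins:
            print("click at", cursor)
            break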

More information:

27 March 2011

AR for Learning Chess

Two students from the Terrassa School of Engineering have designed an innovative system for learning to play chess that combines augmented reality, computer vision and artificial intelligence. An ordinary webcam, a chess board, a set of 32 pieces and custom software are the key elements in the final degree project of the telecommunications engineering students from the UPC-Barcelona Tech's Terrassa School of Engineering (EET). The only equipment required is a high-definition home webcam, the Augmented Reality Chess software, a standard board and pieces, and a set of cardboard markers the same size as the squares on the board, each marked with the first letter of the corresponding piece: R for the king (rei in Catalan), D for the queen (dama), T for the rooks (torres), A for the bishops (alfils), C for the knights (cavalls) and P for the pawns (peons).


To use the system, learners play on an ordinary chess board but move the cardboard markers instead of standard pieces. The table is lit from above and the webcam is focused on the board; every time the player moves one of the markers, the system recognises the piece and reproduces the move in 3D on the computer screen, creating a virtual representation of the game. For example, if the learner moves the marker P (pawn), the corresponding piece is displayed on the screen in 3D, with all of its possible moves indicated. This makes the system particularly suitable for children learning the basics of the game. The learning tool also incorporates a move-tracking program called Chess Recognition: from the images captured by the webcam, the system instantly recognises and analyses every movement of every piece, and can act as a referee, identify illegal moves and provide the players with an audible description of the game status.
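
The vision step that recognises the markers is not detailed in the article, but the referee behaviour it describes (validating each detected move and flagging illegal ones) can be sketched with a standard rules engine. The example below assumes the marker-recognition front end reports moves in coordinate ('UCI') notation and uses the open-source python-chess library purely as an illustration; it is not the students' actual software.

    # Sketch of the referee step only (the marker-detection code is assumed):
    # once the vision front end reports a move, a rules engine such as the
    # open-source python-chess library can validate it and track game status.
    import chess

    board = chess.Board()

    def apply_detected_move(uci_move: str) -> str:
        """Validate a move reported by the marker-recognition step (e.g. 'e2e4')."""
        move = chess.Move.from_uci(uci_move)
        if move not in board.legal_moves:
            return f"illegal move: {uci_move}"   # rejected by the referee
        board.push(move)
        if board.is_checkmate():
            return "checkmate"
        if board.is_check():
            return "check"
        return "ok"

    # Example: a pawn marker moved two squares forward, then an invalid rook move.
    print(apply_detected_move("e2e4"))   # prints "ok"
    print(apply_detected_move("a1a5"))   # prints "illegal move: a1a5"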


More information:

26 March 2011

Search Engine for the Human Body

A new search tool developed by researchers at Microsoft indexes medical images of the human body rather than the Web. It automatically finds organs and other structures in CT scans to help doctors navigate and work with 3-D medical imagery. CT scans use X-rays to capture many slices through the body, which can be combined to create a 3-D representation. This is a powerful tool for diagnosis, but it is far from easy to navigate, say researchers at Microsoft Research Cambridge, U.K.: it is very difficult even for someone highly trained to get to the place they need to be to examine the source of a problem.

When a scan is loaded into the software, the program indexes the data and lists the organs it finds at the side of the screen, creating a table of hyperlinks for the body. A user can click on, say, the word ‘heart’ and be presented with a clear view of the organ without having to navigate through the imagery manually. Once an organ of interest has been found, a 2D and an enhanced 3D view of structures in the area are shown to the user, who can navigate by touching the screen on which the images are shown. A new scan can also be automatically and precisely matched up alongside a past one from the same patient, making it easy to see how a condition has progressed or regressed.
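
The article does not say how the organ detector is implemented, but once organs have been located, the 'table of hyperlinks' amounts to a lookup from organ names to regions of the scan volume. The sketch below illustrates that navigation step only; the organ names, coordinates and data structures are hypothetical.

    # Illustrative sketch only (not Microsoft's implementation): once an organ
    # detector has produced bounding boxes in the CT volume, navigation reduces
    # to a lookup table mapping organ names to regions of the 3-D array.
    from dataclasses import dataclass

    import numpy as np

    @dataclass
    class OrganRegion:
        name: str
        z: slice          # range through the stack of CT slices
        y: slice
        x: slice

    # Hypothetical output of the automatic organ-detection step.
    organ_index = {
        "heart": OrganRegion("heart", slice(120, 180), slice(140, 300), slice(160, 340)),
        "liver": OrganRegion("liver", slice(180, 260), slice(200, 420), slice(120, 380)),
    }

    def jump_to_organ(volume: np.ndarray, name: str) -> np.ndarray:
        """Return the sub-volume for a clicked organ name, like following a hyperlink."""
        region = organ_index[name]
        return volume[region.z, region.y, region.x]

    ct_volume = np.zeros((400, 512, 512), dtype=np.int16)   # stand-in for a loaded scan
    heart_view = jump_to_organ(ct_volume, "heart")
    print("heart sub-volume shape:", heart_view.shape)      # (60, 160, 180)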

More information:

http://www.technologyreview.com/computing/35076/?p1=A2

21 March 2011

Music is in the Mind

A pianist plays a series of notes, and the woman echoes them on a computerized music system. The woman then goes on to play a simple improvised melody over a looped backing track. It doesn't sound like much of a musical challenge — except that the woman is paralysed after a stroke, and can make only eye, facial and slight head movements. She is making the music purely by thinking. This is a trial of a computer-music system that interacts directly with the user's brain by picking up the tiny electrical impulses of neurons. The device, developed by a composer and computer-music specialist at the University of Plymouth, UK, working with computer scientists at the University of Essex, should eventually help people with severe physical disabilities, caused by brain or spinal-cord injuries, for example, to make music for recreational or therapeutic purposes. Evidence suggests that musical participation can be beneficial for people with neurodegenerative diseases such as dementia and Parkinson's disease. But people who have almost no muscle movement have generally been excluded from such benefits, and can enjoy music only through passive listening. The development of brain–computer interfaces (BCIs) that enable users to control computer functions by mind alone offers new possibilities for such people. In general, these interfaces rely on the user's ability to learn how to self-induce particular mental states that can be detected by brain-scanning technologies. The researchers have used one of the oldest of these systems: electroencephalography (EEG), in which electrodes on the scalp pick up faint neural signals.

The EEG signal can be processed quickly, allowing fast response times, and the equipment is cheaper and more portable than brain-scanning techniques such as magnetic resonance imaging and positron-emission tomography. Previous efforts using BCIs have focused on moving on-screen icons such as cursors, but the researchers sought to achieve the much more complex task of enabling users to play and compose music. The trick is to teach the user to associate particular brain signals with specific tasks by presenting a repeating stimulus — auditory, visual or tactile — and getting the user to focus on it. This elicits a distinctive, detectable pattern in the EEG signal. For example, a button could be used to generate a melody from a preselected set of notes. The user can alter the intensity of the control signal – how 'hard' the button is pressed – by varying the intensity of attention, and the result is fed back to them visually as a change in the button's size. In this way, any one of several notes can be selected by mentally altering the intensity of pressing. With a little practice, this allows users to create a melody as if they were selecting keys on a piano. And, as with learning an instrument, the more one practices, the better one becomes. The researchers trialled their system on a female patient with locked-in syndrome, a form of almost total paralysis caused by brain lesions, at the Royal Hospital for Neuro-disability in London. During a two-hour session she got the hang of the system and was eventually playing along with a backing track. She reported that it was great to be in control again.
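
As a rough illustration of the selection mechanism described above (a repeating stimulus elicits a detectable EEG pattern whose strength acts like pressing a button harder or softer), the sketch below estimates power at the stimulus frequency and maps it to one of several notes. The sampling rate, stimulus frequency, thresholds and simulated signal are all illustrative assumptions, not the researchers' actual system.

    # Minimal sketch with simulated data and illustrative thresholds: the strength
    # of the EEG response to a flashing "button" is estimated from power at the
    # stimulus frequency and mapped to one of several notes.
    import numpy as np

    FS = 256            # sampling rate in Hz
    STIM_HZ = 15        # flicker rate of the on-screen button
    NOTES = ["C4", "E4", "G4", "C5"]   # notes selectable by attention intensity

    def band_power(eeg: np.ndarray, freq: float, fs: int) -> float:
        """Power near `freq` in a one-channel EEG window, via the FFT."""
        spectrum = np.abs(np.fft.rfft(eeg)) ** 2
        freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
        mask = (freqs > freq - 1) & (freqs < freq + 1)
        return spectrum[mask].mean()

    def select_note(eeg: np.ndarray) -> str:
        """Map response strength to a note, like pressing a key harder or softer."""
        p = band_power(eeg, STIM_HZ, FS)
        thresholds = [5.0, 20.0, 80.0]            # illustrative calibration values
        level = sum(p > t for t in thresholds)    # 0..3
        return NOTES[level]

    # Simulated 2-second window: background noise plus a 15 Hz response whose
    # amplitude stands in for how strongly the user attends to the stimulus.
    t = np.arange(2 * FS) / FS
    rng = np.random.default_rng(1)
    eeg = 0.8 * np.sin(2 * np.pi * STIM_HZ * t) + rng.normal(scale=0.5, size=t.size)
    print("selected note:", select_note(eeg))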

More information:

http://www.nature.com/news/2011/110318/full/news.2011.113.html

18 March 2011

Digital Gaming Goes Academic

Educators at Ocoee Middle School in Florida have built an online game lab to engage students and sharpen technology skills. Researchers at Rice University have created a virtual game to teach forensics to middle schoolers. North Carolina State University’s IntelliMedia Group has released a digital game to teach microbiology to 8th graders. Digital games for learning academic skills adapt to each student’s ability and course of action, providing personalized feedback in real time—something a traditional classroom often doesn’t offer. Part of the appeal, and the value, of games is the perspective they bring to students.

Rice University partnered with the Fort Worth Museum of Science and History, the American Academy of Forensic Sciences, and CBS, with funding from the National Science Foundation, to create CSI: Web Adventures, a game designed to introduce middle schoolers to forensic science through cases based on the popular TV-show franchise about crime-scene investigations. During the game, students identify shoe prints, test DNA, and interview suspects in order to crack the case. But it’s not all fun and games: teachers can’t afford to spend class time on games that are interesting but irrelevant.

More information:

http://www.edweek.org/ew/articles/2011/03/17/25gaming.h30.html

15 March 2011

High Anxiety in Virtual Reality

I am in Room 314 in Rekhi Hall, arms spread wide, tippy-toeing across a rickety board and trying oh-so-hard not to fall into a gaping hole beneath my feet. One misstep and I join the dead cow at the bottom of the pit. An assistant professor of computer science at Michigan Technological University is holding an open house, inviting members of his department to experience virtual reality via a setup that includes a headset, cameras and a computer. What the wearer sees in the headset is also displayed in two dimensions on a computer monitor: a country road, a deep hole in the pavement, a board. Nearby, cows graze on a hillside. Across the hole, a young woman watches quizzically.

Four cameras, one in each corner of the room, track LEDs on the headset. As you move, the system senses where you are within the virtual world and changes the display in the headset accordingly. It’s not limited to computer-generated imagery, either: the researchers stitched together a series of photos taken in Utah and loaded the result into the lab equipment, creating a 360-degree, 3D view of Canyonlands National Park. The virtual reality lab can help researchers improve other virtual reality programs, and it could lead to better simulators that improve performance for everyone from pilots to neurosurgeons.
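
The tracking and rendering software is not described in detail, but the last step (turning a tracked head pose into an updated view of the scene) is standard 3-D math. The sketch below shows that step alone, assuming the LED-tracking cameras already provide a head position and yaw; it is generic code, not the lab's own.

    # Sketch of the final step only (generic math, not the lab's actual software):
    # given a tracked head position and yaw, build the view matrix a renderer
    # would use to redraw the virtual scene for that pose.
    import numpy as np

    def view_matrix(head_pos, yaw_radians):
        """4x4 world-to-eye transform for a head at `head_pos` rotated by `yaw_radians`."""
        c, s = np.cos(yaw_radians), np.sin(yaw_radians)
        # Rotation about the vertical (y) axis, as when the wearer turns their head.
        rot = np.array([[  c, 0.0,   s],
                        [0.0, 1.0, 0.0],
                        [ -s, 0.0,   c]])
        view = np.eye(4)
        view[:3, :3] = rot.T                                        # inverse rotation
        view[:3, 3] = -rot.T @ np.asarray(head_pos, dtype=float)    # inverse translation
        return view

    # Example: the wearer has stepped 0.5 m forward and turned 30 degrees to the left.
    print(view_matrix([0.0, 1.7, -0.5], np.radians(30)))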

More information:

http://www.mtu.edu/news/stories/2011/march/story37177.html

07 March 2011

Robots Become Self-Aware

Robots might one day trace the origin of their consciousness to recent experiments aimed at instilling them with the ability to reflect on their own thinking. Although granting machines self-awareness might seem more like the stuff of science fiction than science, there are solid practical reasons for doing so, explain researchers at Cornell University's Computational Synthesis Laboratory. Today's robots are mostly confined to structured environments such as factory floors, and this lack of adaptability is the reason we don't have many robots in the home, which is much more unstructured than the factory. The key is for robots to create a model of themselves to figure out what is working and what is not, in order to adapt.

So the researchers developed a robot shaped like a four-legged starfish whose brain, or controller, developed a model of what its body was like. The researchers started the droid off with an idea of what motors and other parts it had, but not how they were arranged, and gave it a directive to move. By trial and error, receiving feedback from its sensors with each motion, the machine used repeated simulations to figure out how its body was put together and evolved an ungainly but effective form of movement all on its own. Beyond robots that think about what they are thinking, researchers are also exploring whether robots can model what others are thinking.
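
The article gives only the outline of the method: propose candidate self-models, predict sensor feedback for each by simulation, and keep whichever model best matches what the sensors actually report. The sketch below mirrors that loop at a schematic level; the body parameters, 'physics' and scoring are deliberately trivial stand-ins, not the Cornell implementation.

    # Schematic sketch of the self-modelling loop described above (stand-in
    # functions; the real work is in the physics simulation and the robot hardware).
    import random

    def propose_models(best, n=20):
        """Generate candidate self-models by perturbing the current best guess."""
        return [best] + [
            {joint: angle + random.gauss(0, 5.0) for joint, angle in best.items()}
            for _ in range(n - 1)
        ]

    def predicted_feedback(model, action):
        """Stand-in for simulating the action on a candidate body model."""
        return sum(model.values()) * action          # placeholder physics

    def measured_feedback(action):
        """Stand-in for sensor readings after executing the action on the robot."""
        true_body = {"hip1": 10.0, "hip2": -5.0, "knee1": 30.0, "knee2": 20.0}
        return sum(true_body.values()) * action

    # Start with a guess about how the limbs are arranged, then refine it by
    # comparing what each candidate model predicts with what the sensors report.
    model = {"hip1": 0.0, "hip2": 0.0, "knee1": 0.0, "knee2": 0.0}
    for trial in range(50):
        action = random.uniform(-1, 1)               # try a motion
        observed = measured_feedback(action)
        model = min(propose_models(model),
                    key=lambda m: abs(predicted_feedback(m, action) - observed))
    print("inferred body model:", model)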

More information:

http://www.scientificamerican.com/article.cfm?id=automaton-robots-become-self-aware

01 March 2011

Kinect: The New Mouse

The Kinect technology, according to Microsoft’s chief research and strategy officer, is the beginning of a new way of communicating with computers. For the past quarter of a century, computing has mainly meant typing on a keyboard and using a computer mouse to point and click on graphic icons on the screen — the graphical user interface (GUI). Kinect, a $150 add-on to the Xbox game console, points the way to a different model, a natural user interface, or NUI. Increasingly, the computers that surround us will understand our speech and hand gestures. The machines, in essence, will become a bit more human.

Microsoft announced that in the next month or so it would release an initial software developer’s kit for programmers who wanted to make applications using the Kinect technology. The first set of software developer tools is for academics and enthusiasts, who have already begun hacking Kinect to make home-grown applications. The tools will make it easier for them to write more sophisticated programs. The potential uses include inexpensive 3D design and modeling, photo-realistic human avatars and smart displays that might be able to direct two different visual and audio streams to two people sitting in the same room.
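
The Kinect SDK itself is not described in the article, so as a generic illustration of what a depth camera enables, the sketch below back-projects a depth frame into a 3-D point cloud with the pinhole camera model, the kind of step that underlies 3-D scanning, avatars and gesture tracking. The resolution and camera intrinsics are assumed values, and the code does not use the official SDK API.

    # Generic sketch (not the official Kinect SDK API): back-project a depth frame
    # into a 3-D point cloud using the pinhole camera model.  Intrinsics are assumed.
    import numpy as np

    W, H = 640, 480                       # Kinect-era depth resolution
    FX = FY = 575.0                       # assumed focal length in pixels
    CX, CY = W / 2.0, H / 2.0             # assumed principal point

    def depth_to_point_cloud(depth_m: np.ndarray) -> np.ndarray:
        """Convert an HxW depth image (metres) to an Nx3 array of 3-D points."""
        v, u = np.indices(depth_m.shape)              # pixel coordinates
        z = depth_m
        x = (u - CX) * z / FX
        y = (v - CY) * z / FY
        points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        return points[points[:, 2] > 0]               # drop pixels with no depth reading

    # Example with a synthetic frame: a flat wall 2 m away.
    depth = np.full((H, W), 2.0)
    cloud = depth_to_point_cloud(depth)
    print(cloud.shape)        # (307200, 3)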

More information:

http://bits.blogs.nytimes.com/2011/02/22/microsofts-kinect-the-new-mouse/