29 December 2011

Brain Implants for the Paralyzed

It sounds like science fiction, but scientists around the world are getting tantalizingly close to building the mind-controlled prosthetic arms, computer cursors and mechanical wheelchairs of the future. Researchers have already implanted devices into primate brains that let them reach for objects with robotic arms. They've made sensors that attach to a human brain and allow paralyzed people to control a cursor by thinking about it. In the coming decades, scientists say, the field of neural prosthetics - inventing and building devices that harness brain activity for computerized movement - is going to revolutionize how people who have suffered major brain damage interact with their world. A joint UC Berkeley-UCSF center started a year ago to take advantage of the neurology expertise in San Francisco and the engineering skills across the bay. Devices that allow the brain to control a machine aren't entirely new. Aside from some small steps made at other institutions - the brain-controlled computer cursor, for example - there's the cochlear implant, the first neural prosthetic tool developed and the only one that has ever seen wide use. The cochlear implant, which was invented at UCSF in the 1970s, converts sounds into electrical signals and sends those signals directly to the brain, bypassing the damaged nerve structures that cause hearing loss. The devices being developed today work under the same premise but are much more complex. Over the past decade, scientists have made leaps of progress in learning how to read and decode the millions of electrical impulses that fire between neurons in the brain, controlling how our bodies move and how we see, feel and relate to the world around us.


It's not enough just to prompt the right muscles to move an arm. Millions of signals in the brain help us determine where our own arm is in relation to our body, so our hand doesn't grope wildly for, say, a wine glass. Our brains sense that it's a delicate glass that must be picked up carefully, pinched between fingers. The neurons control how fast our arm moves, making sure the wine doesn't slop over the edges. That's an astronomical amount of communication happening, all in fractions of a second, without our even being aware of it. In fact, it's more communication than our best smart-phone technology can handle. The neural prosthetic devices that are just in their infancy now work by connecting a device inserted into the brain directly to a computer. The signals from the brain, in the form of electrical impulses, travel through a cable to the computer, where they are decoded into instructions for some kind of action, like moving a cursor. But for a neural prosthetic device to actually be useful, it would have to be implanted near or in the brain and transmit wireless signals to a device like a robotic arm. It would need to be able to last forever - or at least a lifetime - on batteries that never have to be changed and won't damage the brain. Other problems are going to require an even deeper understanding of how the brain works. Scientists don't yet know what parts of the brain would be best suited for implanting a device to read electrical signals - or even whether an implanted device would work better than one that's attached to the brain's surface. It's possible that a surface device could collect enough information to be useful in controlling a neural prosthesis with much less risk to the patient.
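
As a rough illustration of the decoding step described above, the sketch below maps a vector of electrode firing rates to a 2D cursor velocity with a linear decoder fitted by least squares. The array shapes, the synthetic data and the linear model are all assumptions made for illustration; real systems are far more elaborate.

    # Hypothetical sketch: firing rates from an implanted electrode array are
    # mapped to a 2D cursor velocity by a linear decoder fitted with least
    # squares. Shapes and data are illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)

    # Training data: 1000 time bins of firing rates from 96 electrodes,
    # paired with the cursor velocities intended at those times.
    rates = rng.poisson(lam=5.0, size=(1000, 96)).astype(float)
    intended_velocity = rng.normal(size=(1000, 2))          # (vx, vy) per bin

    # Fit decoder weights W so that rates @ W approximates the intended velocity.
    W, *_ = np.linalg.lstsq(rates, intended_velocity, rcond=None)

    # At run time, each new bin of firing rates becomes a cursor update.
    cursor = np.zeros(2)
    new_rates = rng.poisson(lam=5.0, size=96).astype(float)
    cursor += new_rates @ W                                  # decoded velocity step
    print("cursor position:", cursor)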

More information:

http://www.sfgate.com/cgi-bin/article.cgi?file=/c/a/2011/12/27/MNHU1MDLEU.DTL

26 December 2011

Chess Robots' Problems

Deep Blue's victory over Garry Kasparov in 1997 may have shown how computers can outsmart people, but if the game is taken into the physical world, humans still win hands down. That's because, for all their software smarts, robots remain clumsy at manipulating real-world objects. A robotic chess competition held in August, for example, showed that even robotic arms used for precise work on industrial manufacturing lines have trouble when asked to negotiate a noisy, chaotic real-world environment. The contest, held at the Association for the Advancement of Artificial Intelligence annual conference in San Francisco, California, had a number of automatons competing to see which could move pieces most quickly, accurately and legally, in accordance with the rules of chess.

Some teams used vision systems to identify where pieces were, but none attempted to distinguish between a rook and a knight. Instead they relied on remembering where pieces were last placed to identify them and move them accordingly. The bots quickly ran into snags - their vision systems often misread moves. One approach, by robotics company Road Narrows, used a commercially available fixed robotic arm, normally sold for light industrial applications, without any vision at all. The winner was a team of researchers at the University at Albany in New York, whose entry was a mobile robot with an arm attached. Despite the many variables introduced by moving a robot around, the droid's vision system managed to keep track of the board and pieces as it moved about.
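
A minimal sketch of that bookkeeping, assuming a vision system that reports only which squares are occupied: the move is inferred by comparing occupancy before and after the opponent's turn, and the remembered board is updated accordingly. Square names and the piece table are illustrative; castling and promotions are ignored.

    # Illustrative sketch: the robot does not recognise piece types visually,
    # it only compares which squares are occupied before and after a move and
    # updates its remembered board. Castling and promotions are ignored.

    def infer_move(before, after):
        """Return (from_square, to_square) given two sets of occupied squares."""
        vacated = before - after          # square the piece left
        filled = after - before           # square the piece arrived on
        if len(vacated) == 1 and len(filled) == 1:
            return vacated.pop(), filled.pop()
        if len(vacated) == 1 and not filled:
            # A capture: the destination must be resolved from the legal moves.
            return vacated.pop(), None
        raise ValueError("ambiguous board change")

    board = {"e2": "P", "g1": "N"}                 # remembered piece positions
    before = set(board)
    after = (before - {"e2"}) | {"e4"}             # vision reports occupancy only

    src, dst = infer_move(before, after)
    board[dst] = board.pop(src)                    # memory update: the e2 pawn is now on e4
    print(board)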

More information:

http://www.newscientist.com/blogs/onepercent/2011/12/chess-robots-have-trouble-gras.html

22 December 2011

An Ultrafast Imaging System

More than 70 years ago, the M.I.T. electrical engineer Harold (Doc) Edgerton began using strobe lights to create remarkable photographs: a bullet stopped in flight as it pierced an apple, the coronet created by the splash of a drop of milk. Now scientists at M.I.T.'s Media Lab are using an ultrafast imaging system to capture light itself as it passes through liquids and objects, in effect snapping a picture in less than two-trillionths of a second. The project began as a whimsical effort to literally see around corners - by capturing reflected light and then computing the paths of the returning light, thereby building images of rooms that would otherwise not be directly visible. The researchers modified a streak tube, a supersensitive piece of laboratory equipment that scans and captures light. Streak tubes are generally used to convert streams of photons into streams of electrons. They are fast enough to record the progress of packets of laser light fired repeatedly into a bottle filled with a cloudy fluid. The instrument is normally used to measure laboratory phenomena that take place in an ultra-short timeframe. Typically, it offers researchers information on intensity, position and wavelength in the form of data, not an image.

By modifying the equipment, the researchers were able to create slow-motion movies showing what appears to be a bullet of light that moves from one end of the bottle to the other. The pulses of laser light enter through the bottom and travel to the cap, generating a conical shock wave that bounces off the sides of the bottle as the bullet passes. The streak tube scans and captures light in much the same way a cathode ray tube emits and paints an image on the inside of a computer monitor. Each horizontal line is exposed for just 1.71 picoseconds (trillionths of a second), enough time for the laser beam to travel less than half a millimeter through the fluid inside the bottle. To create a movie of the event, the researchers record about 500 frames in just under a nanosecond, or a billionth of a second. Because each individual movie has a very narrow field of view, they repeat the process a number of times, scanning vertically to build a complete scene that shows the beam moving from one end of the bottle, bouncing off the cap and then scattering back through the fluid. If a bullet were tracked in the same fashion moving through the same fluid, the resulting movie would last three years.
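
A quick back-of-the-envelope check of those figures, assuming a water-like refractive index of about 1.33 for the cloudy fluid:

    # Sanity check of the quoted exposure figures. The refractive index of the
    # scattering fluid is an assumption (water-like, ~1.33).
    c = 2.998e8          # speed of light in vacuum, m/s
    exposure = 1.71e-12  # exposure per scan line, s
    n_fluid = 1.33       # assumed refractive index of the fluid

    distance_mm = (c / n_fluid) * exposure * 1000
    print(f"light travels ~{distance_mm:.2f} mm per 1.71 ps line")   # ~0.39 mm

    frames = 500
    print(f"{frames} frames span ~{frames * exposure * 1e9:.2f} ns") # ~0.86 ns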


More information:

http://www.nytimes.com/2011/12/13/science/speed-of-light-lingers-in-face-of-mit-media-lab-camera.html?_r=1

21 December 2011

Virus for The Human Mind

The field of 'synthetic biology' is in its infancy. Experts working within the field believe that our expertise is out-accelerating natural evolution by a factor of millions of years - and some warn that synthetic biology could spin out of control. It could lead to a world where hackers could engineer viruses or bacteria to control human minds.

Researchers predict a world where we can 'print' DNA and even 'decode' it. A literal virus - injected into a 'host' in the guise of a vaccine, say - could be used to control behaviour. Synthetic biology could also lead to new forms of bioterrorism. Bio-crime today is akin to computer crime in the early Eighties: few initially recognized the problem, but it grew exponentially.

More information:

http://www.dailymail.co.uk/sciencetech/article-2073936/Could-hackers-develop-virus-infect-human-mind.html

18 December 2011

Humanizing The Human-Computer Interface

Researchers at Toyohashi Tech's Graduate School of Engineering are trying to 'humanize' the computer interface. They are working on expanding human-computer communication by means of a web-based multimodal interactive (MMI) approach employing speech, gestures and facial expressions, as well as the traditional keyboard and mouse. Although many MMI systems have been tried, few are widely used. Among the reasons for this are their complexity of installation and compilation, and their general inaccessibility for ordinary computer users. To resolve these issues, the researchers have designed a web browser-based MMI system that uses only open source software and de facto standards.

This openness has the advantage that the system runs in any web browser that handles JavaScript, Java applets and Flash, so it can be used not only on a PC but also on mobile devices such as smart phones and tablet computers. The user can interact with the system by speaking directly to an anthropomorphic agent that employs speech recognition, speech synthesis and facial image synthesis. For example, a user can recite a telephone number, which is recorded and sent via the browser to a session manager on the server housing the MMI system. The data is processed by the speech recognition software and then passed to a scenario interpreter.
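
A hypothetical sketch of that flow, with invented module, rule and intent names standing in for the real components (speech recogniser, session manager, scenario interpreter); the actual Toyohashi system is not shown here.

    # Hypothetical data flow: a recognition result is handed to a scenario
    # interpreter that picks the agent's next dialogue step. All names below
    # are inventions for illustration.

    SCENARIO = {
        "phone_number": "confirm_number",   # recognised intent -> next dialogue step
        "greeting": "introduce_agent",
    }

    def recognise(utterance: str) -> dict:
        """Stand-in for the speech recogniser: returns an intent and a payload."""
        if any(ch.isdigit() for ch in utterance):
            return {"intent": "phone_number", "value": utterance}
        return {"intent": "greeting", "value": utterance}

    def scenario_interpreter(result: dict) -> str:
        """Pick the agent's response (speech synthesis + facial animation cue)."""
        step = SCENARIO.get(result["intent"], "fallback_prompt")
        return f"{step}: {result['value']}"

    # Session manager: one user utterance flowing through the pipeline.
    print(scenario_interpreter(recognise("0532 44 6500")))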

More information:

http://www.physorg.com/news/2011-12-multimodal-interaction-humanizing-human-computer-interface.html

13 December 2011

BCIs Play Music Based on Moods

Scientists are developing a brain-computer interface (BCI) that recognises a person's affective state and plays music to them based on their mood. The duo, from the universities of Reading and Plymouth, believe the system could be used as a therapeutic aid for people suffering from certain forms of depression. The scientists are not asking the subject to be happy or sad; they want to recognise the subject's state so that the right stimulus can be provided.


The subject is not in control, and this is an unusual feature: traditionally, the user has had complete control over how a BCI system responds. The project would use an electroencephalograph (EEG) to transfer the electrical signal from the patient's scalp via a series of wires to an amplifier box, which, in turn, would be connected to a computer. The computer would then generate its own synthetic music based on the user's mental state.
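
A purely illustrative sketch of the last step: mapping an estimated affective state to parameters for synthetic music. The valence/arousal inputs (assumed to be derived from EEG band power) and all mappings below are assumptions, not details of the Reading/Plymouth system.

    # Illustrative mapping from an estimated affective state to simple musical
    # parameters. All numbers and names are assumptions.

    def music_parameters(valence: float, arousal: float) -> dict:
        """valence and arousal in [-1, 1] -> tempo, mode and loudness."""
        tempo = 60 + 60 * (arousal + 1) / 2          # 60-120 beats per minute
        mode = "major" if valence >= 0 else "minor"  # happier state -> major key
        loudness = 0.4 + 0.4 * (arousal + 1) / 2     # quieter when calm
        return {"tempo_bpm": round(tempo), "mode": mode, "loudness": round(loudness, 2)}

    # A calm, slightly negative state yields slow, quiet, minor-key material.
    print(music_parameters(valence=-0.3, arousal=-0.6))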

More information:

http://www.theengineer.co.uk/sectors/medical-and-healthcare/news/brain-computer-interface-plays-music-based-on-persons-mood/1011153.article

07 December 2011

Brain Limiting Global Data Growth

In the early 19th century, the German physiologist Ernst Weber discovered that the smallest increase in weight a human can perceive is proportional to the initial mass. This observation underlies what is now known as the Weber-Fechner law, which says that the relationship between stimulus and perception is logarithmic. It's straightforward to apply this rule to modern media. Take images, for example. An increase in resolution of a low-resolution picture is more easily perceived than the same increase to a higher-resolution picture. When two parameters are involved, the relationship between the stimuli and perception is the square of the logarithm. This way of thinking about stimulus and perception clearly indicates that the Weber-Fechner law ought to have a profound effect on the rate at which we absorb information.
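
In symbols, a standard statement of the law (not taken from the article): the just-noticeable change in a stimulus S is proportional to S itself, and integrating gives a logarithmic relation between stimulus and perceived magnitude p, with S_0 the detection threshold:

    \frac{\Delta S}{S} = k, \qquad p = c \ln\frac{S}{S_0}

With two independent stimulus parameters, the perceived quantity grows roughly as the square of the logarithm, which is the 'logarithmic squared' relationship the file-size study below looks for.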


Researchers at Goethe University Frankfurt in Germany have now looked for signs of the Weber-Fechner law in the size distribution of files on the internet. They measured the type and size of the files pointed to by every outward link from Wikipedia and the open directory project, dmoz.org - a total of more than 600 million files. Some 58 per cent of these pointed to image files, 32 per cent to application files, 5 per cent to text files, 3 per cent to audio and 1 per cent to video files. They discovered that the audio and video file-size distributions follow a log-normal curve, which is compatible with a log-squared relationship. By contrast, image files follow a power-law distribution, which is compatible with a logarithmic relationship. That's exactly what the Weber-Fechner law predicts.

More information:

http://www.technologyreview.com/blog/arxiv/27379/?p1=blogs

01 December 2011

Robots in Reality

Consider the following scenario: a scout surveys a high-rise building that's been crippled by an earthquake, trapping workers inside. After looking for a point of entry, the scout carefully navigates through a small opening. An officer radios in: 'Go look down that corridor and tell me what you see.' The scout steers through smoke and rubble, avoiding obstacles and finding two trapped people, reporting their location via live video. A SWAT team is then sent to lead the workers safely out of the building. Despite its heroics, though, the scout is impervious to thanks. It just sets its sights on the next mission, as any robot would. In the not-too-distant future, such robotics-driven missions will be a routine part of disaster response, researchers at MIT predict. Robots are ideal for dangerous and covert tasks, such as navigating nuclear disasters or spying on enemy camps.


They can be small and resilient - but more importantly, they can save valuable manpower. The key hurdle to such a scenario is robotic intelligence: flying through unfamiliar territory while avoiding obstacles is an incredibly complex computational task, and understanding verbal commands in natural language is even trickier. Researchers in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) are designing robotic systems that do more things intelligently by themselves. For instance, the team is building micro-aerial vehicles (MAVs), about the size of a small briefcase, that navigate independently, without the help of a global positioning system (GPS). Most drones depend on GPS to get around, which limits the areas they can cover. The group is also building social robots that understand natural language.

More information:

http://web.mit.edu/newsoffice/2011/profile-roy-1128.html

28 November 2011

Stonehenge Hidden Landscapes

Archaeologists at the University of Birmingham are heading to Stonehenge to lead Britain's biggest-ever virtual excavation, a far from superficial look at the Stonehenge landscape. The Stonehenge Hidden Landscapes Project will use the latest geophysical imaging techniques to visually recreate the iconic prehistoric monument and its surroundings as they were more than 4,000 years ago. The project begins midway through one of Stonehenge's busiest tourist seasons for years. With more than 750,000 visitors annually, the site is one of the UK's most popular tourist hotspots.


The Stonehenge Hidden Landscapes Project, which started in early July, aims to bring together the most sophisticated geophysics team ever engaged in a single archaeological project in Britain, working alongside specialists in British prehistory and landscape archaeology in a three-year collaboration. The scientists will map the Wiltshire terrain as well as virtually excavate it, accurately pinpointing its buried archaeological remains. When processed, the millions of measurements will be analysed and even incorporated into gaming technology to produce 2D and 3D images.

More information:

http://heritage-key.com/blogs/ann/stonehenge-hidden-landscapes-project-virtual-excavation-digital-recreation

25 November 2011

Contact Lens Displays Pixels on Eyes

The future of augmented-reality technology is here - as long as you're a rabbit. Bioengineers have placed the first contact lenses containing electronic displays into the eyes of rabbits, as a first step towards proving they are safe for humans. The bunnies suffered no ill effects, the researchers say. The first version may only have one pixel, but higher-resolution lens displays - like those seen in Terminator - could one day be used as satnav enhancers, showing you directional arrows for example, or to flash up texts and emails - perhaps even video. In the shorter term, the breakthrough also means people suffering from conditions like diabetes and glaucoma may find they have a novel way to monitor their conditions. The test lens was powered remotely using a 5-millimetre-long antenna printed on the lens to receive gigahertz-range radio-frequency energy from a transmitter placed ten centimetres from the rabbit's eye.


To focus the light on the rabbit's retina, the contact lens itself was fabricated as a Fresnel lens, in which a series of concentric annular sections is used to generate the ultrashort focal length needed. The researchers found their lens LED glowed brightly up to a metre away from the radio source in free space, but needed to be within 2 centimetres when the lens was placed in a rabbit's eye and the wireless reception was affected by body fluids. All the 40-minute-long tests on live rabbits were performed under general anaesthetic and showed that the display worked well - and fluorescence tests showed no damage or abrasions to the rabbits' eyes after the lenses were removed. While making a higher-resolution display is next on their agenda, there are uses even for this small one: a display with a single controllable pixel could be used in gaming, training, or giving warnings to the hearing impaired.

More information:

http://www.newscientist.com/blogs/onepercent/2011/11/electronic-contact-lens-displa.html

22 November 2011

Mimicking the Brain in Silicon

For decades, scientists have dreamed of building computer systems that could replicate the human brain’s talent for learning new tasks. MIT researchers have now taken a major step toward that goal by designing a computer chip that mimics how the brain’s neurons adapt in response to new information. This phenomenon, known as plasticity, is believed to underlie many brain functions, including learning and memory. With about 400 transistors, the silicon chip can simulate the activity of a single brain synapse — a connection between two neurons that allows information to flow from one to the other. The researchers anticipate this chip will help neuroscientists learn much more about how the brain works, and could also be used in neural prosthetic devices such as artificial retinas, says Chi-Sang Poon, a principal research scientist in the Harvard-MIT Division of Health Sciences and Technology. There are about 100 billion neurons in the brain, each of which forms synapses with many other neurons. A synapse is the gap between two neurons (known as the presynaptic and postsynaptic neurons). The presynaptic neuron releases neurotransmitters, such as glutamate and GABA, which bind to receptors on the postsynaptic cell membrane, activating ion channels.


Opening and closing those channels changes the cell’s electrical potential. If the potential changes dramatically enough, the cell fires an electrical impulse called an action potential. All of this synaptic activity depends on the ion channels, which control the flow of charged atoms such as sodium, potassium and calcium. Those channels are also key to two processes known as long-term potentiation (LTP) and long-term depression (LTD), which strengthen and weaken synapses, respectively. The MIT researchers designed their computer chip so that the transistors could mimic the activity of different ion channels. While most chips operate in a binary, on/off mode, current flows through the transistors on the new brain chip in analog, not digital, fashion. A gradient of electrical potential drives current to flow through the transistors just as ions flow through ion channels in a cell. The MIT researchers plan to use their chip to build systems to model specific neural functions, such as the visual processing system. Such systems could be much faster than digital computers. Even on high-capacity computer systems, it takes hours or days to simulate a simple brain circuit. With the analog chip system, the simulation is even faster than the biological system itself. Another potential application is building chips that can interface with biological systems. This could be useful in enabling communication between neural prosthetic devices such as artificial retinas and the brain.
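
The sketch below is a heavily simplified software analogue of that behaviour, assuming a leaky integrate-and-fire cell driven through a single synapse whose weight is nudged up (LTP) or down (LTD) depending on spike timing. All constants are invented for illustration and do not describe the MIT chip's analog circuitry.

    # Simplified plasticity toy model: a leaky integrate-and-fire cell driven
    # by one synapse; the weight grows (LTP) when a presynaptic spike shortly
    # precedes a postsynaptic spike, and shrinks (LTD) when it follows one.
    import numpy as np

    rng = np.random.default_rng(1)
    dt, steps = 1e-3, 1000                  # 1 ms steps, 1 s of simulated time
    v, v_rest, v_thresh = -70.0, -70.0, -60.0
    tau_m = 20e-3                           # membrane time constant, s
    w = 8.0                                 # synaptic weight: mV added per presynaptic spike
    last_pre, last_post = -1.0, -1.0        # most recent spike times

    for i in range(steps):
        t = i * dt
        pre_spike = rng.random() < 0.05     # ~50 Hz presynaptic input
        if pre_spike:
            v += w
            last_pre = t
        v += (v_rest - v) * dt / tau_m      # leak back toward rest
        if v >= v_thresh:                   # postsynaptic action potential
            v = v_rest
            last_post = t
            if last_pre >= 0 and (t - last_pre) < 20e-3:
                w += 0.5                    # LTP: pre shortly before post
        elif pre_spike and last_post >= 0 and (t - last_post) < 20e-3:
            w -= 0.5                        # LTD: pre shortly after post

    print(f"final synaptic weight: {w:.2f}")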

More information:

http://web.mit.edu/newsoffice/2011/brain-chip-1115.html

19 November 2011

Archeovirtual 2011 Seminar

Yesterday, I gave an invited talk at the 'V-Must Workshop 2: Virtual Heritage, Games and Movie', Archeovirtual 2011, held at Paestum, Italy. The title of my talk was 'Serious Games'.


My talk focused on the main technologies used for serious games. In addition, I presented two projects currently running at iWARG: crowd modeling and procedural modeling.

More information:

http://www.vhlab.itabc.cnr.it/archeovirtual/workshop.htm

13 November 2011

Tracking Multiple Athletes

EPFL's Computer Vision Laboratory (CVLab) now has a new tool that makes it possible to follow multiple players at once on a field or court, even when they're buried under a pile of bodies in a rugby match or crouched behind another player. The athletes are represented on a screen with a superimposed image bearing their jersey color and number, so spectators, referees, and coaches can easily follow individuals without mixing them up. And there's no need for the players to wear extra gear or RFID chips. The system is made up of eight standard cameras - two on each side of the field or court, two that film from above and two that zoom - and three algorithms. After a tackle, goal, basket, or pileup, the system re-attributes the jersey number to each player automatically. No more getting lost in the crowd.


Three algorithms make the system work. The first detects individuals at a specific moment in time, independently of where they were the moment before or after. To do this, it slices the playing area into small 25 cm2 squares, removes the background in all the images simultaneously, and from this deduces the probability of a player being present in each of the small squares. The other two algorithms connect the results obtained for each moment in order to establish individual trajectories. All three use global optimization methods, resulting in a very robust system capable of tracking people reliably in real time. The researchers are also working on other applications, such as tracking pedestrians to monitor traffic in an area, or following the movement of customers in a store for marketing purposes.
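
A rough sketch of that first step, under stated assumptions: each camera's background-subtracted silhouette votes for the ground-plane cells it covers, and cells supported by most cameras are taken to contain a player. Grid size, threshold and the stand-in silhouette function are illustrative, not the CVLab implementation.

    # Rough occupancy-voting sketch: for each camera, a background-subtracted
    # silhouette votes for ground-plane cells; cells most cameras agree on are
    # declared occupied. Sizes and thresholds are illustrative.
    import numpy as np

    n_cameras, grid = 8, (40, 20)                    # 40 x 20 ground-plane cells

    def silhouette_votes(camera_id: int) -> np.ndarray:
        """Stand-in for background subtraction + projection onto the ground grid."""
        rng = np.random.default_rng(camera_id)
        votes = (rng.random(grid) < 0.02).astype(float)  # noisy spurious votes
        votes[12, 7] = 1.0                               # all cameras agree on this cell
        return votes

    votes = sum(silhouette_votes(c) for c in range(n_cameras))
    occupancy = votes / n_cameras                    # fraction of cameras voting "occupied"
    players = np.argwhere(occupancy > 0.8)           # cells very likely to hold a player
    print(players)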

More information:

http://actu.epfl.ch/news/new-technology-tracks-multiple-athletes-at-once/

10 November 2011

Robots and Avatars Seminar

Yesterday, I gave an invited talk at the 'Robots and Avatars' workshop, held at Coventry University, Faculty of Engineering and Computing, Department of Computing. The title of my talk was 'Human-machine interfaces'.


My talk covered human-machine interfaces for control and communication between humans and machines, focusing on the use of Brain-Computer Interfaces (BCIs). In particular, I discussed the NeuroSky and Emotiv interfaces for robot control.

More information:

http://iwarg.blogspot.com/2011/11/robots-and-avatars.html

06 November 2011

Controlling An Avatar With Your Brain

In the 2011 movie 'Source Code', US Army Captain Colter Stevens has to stop a dangerous terrorist from detonating a bomb on a train. But because he is paralyzed in real life, Stevens is sent on the mission through an avatar he guides with his mind. Sounds far-fetched? Too sci-fi? One Israeli professor is taking technology along those lines further than you might imagine with his latest project: controlling your very own clone avatar.


Researchers at the Advanced Virtuality Lab (AVL) at the Interdisciplinary Center (IDC) in Herzliya, Israel, have been studying and experimenting with the next generation of human-computer interfaces and their impact on individuals and society for the last three years, along with an international team of experts. The AVL's main activity is to build the virtual worlds and interfaces that will be used in the future, and to investigate human behavior and the human mind in virtual reality settings.

More information:

http://nocamels.com/2011/10/controlling-an-avatar-with-your-brain-israeli-lab-is-trying/

04 November 2011

A Versatile Touch Sensor

We live in an increasingly touchy-feely tech world, with various ways for smart phones and tablet computers to sense our finger taps and gestures. Now a new type of touch technology, developed by researchers at the University of Munich and the Hasso Plattner Institute, could lead to touch sensitivity being added to everyday items such as clothing, headphone wires, coffee tables, and even pieces of paper. The new touch technology relies on something called Time Domain Reflectometry (TDR), which has been used for decades to find damage in underwater cables. TDR is simple in theory: send a short electrical pulse down a cable and wait until a reflection of the pulse comes back. Based on the known speed of the pulse and the time it takes to come back, software can determine the position of the problem—damage in the line or some sort of change in electrical conductance.


The TDR implementation is straightforward. For one demonstration, researchers taped two parallel strips of copper to a piece of paper. Metal clips connect the copper strips to a pulse generator and detector. Picosecond-long electrical pulses are sent out, and if there's any change in capacitance between the two strips of copper - produced by a finger close to or touching the wires, for instance - part of the pulse is reflected back. An oscilloscope shows the changing waveform produced by the reflected pulse, and software on a connected computer analyzes the waveform to determine the position of the touch. To make a surface touch-sensitive requires only two wires (or metal traces of conductive ink), which can be configured in various patterns to get the necessary coverage. In contrast, a capacitive touch screen like the one in the iPhone uses a matrix of wires coming out of two sides of the screen.
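
A worked example of the time-to-position arithmetic, assuming a typical velocity factor of about 0.66 for the pulse in the copper strips (the real value depends on the materials):

    # Time-to-position step for TDR. The velocity factor is an assumption.
    c = 2.998e8                 # speed of light, m/s
    velocity_factor = 0.66      # assumed propagation speed relative to c
    v = velocity_factor * c

    round_trip = 3.0e-9         # seconds between sending the pulse and the reflection
    distance = v * round_trip / 2   # halve it: the pulse travels out and back
    print(f"touch is ~{distance * 100:.1f} cm along the strip")   # ~29.7 cm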

More information:

http://www.technologyreview.com/computing/39036/

31 October 2011

Computer Inspired Creativity

Constraints on creativity imposed by computer-aided design (CAD) tools are being overcome, thanks to a novel system that incorporates eye-tracking technology. 'Designing with Vision', a system devised by researchers at The Open University and the University of Leeds, is breaking down rigid distinctions between human and machine. This should help designers to recover intuitive elements of the design process that are otherwise suppressed when working with CAD. Traditional design tools, such as pen and paper, are increasingly being replaced by 2D and 3D computerised drawing packages. The uptake of CAD is helping to increase productivity and improve the quality of designs, reducing errors and unnecessary wastage when the goods are made. However, the switch to CAD may have a downside too. The introduction of digital technologies often forces people to change how they work so they fit with the technology, rather than the other way around. In creative disciplines, this inevitably constrains the results produced - a scenario that would be a disaster for designers, according to researchers at The Open University.


Researchers focused on an early stage in the design process that involves drawing, viewing, selecting and manipulating shapes. This process is common to designers working in areas such as fashion, graphics and consumer goods packaging. Designers who work with shapes tend to intuitively home in on certain areas in initial sketches, using these as a starting point to move forward. However, this element of subconscious selection is difficult to replicate with CAD, because the software package is unable to 'see' what might be catching the designer's eye. To redress this, researchers added eye-tracking technology to a CAD system, giving the digital technology a more fluid human-machine interface. This produced a design system that could identify and select shapes of interest automatically within a drawn sketch, according to the designer's gaze. The system was put through its paces by groups of professional and student designers to check that it worked in practice. The tests confirmed that the combination of eye-tracking technology and conventional mouse-based input allowed initial design sketches to be manipulated and developed according to the user's subconscious visual cues.

More information:

http://www.leeds.ac.uk/news/article/2558/the_eyes_have_it_computer-inspired_creativity

18 October 2011

Visualizing the Future

'It appears that we really can be in two places at once. We call these ubiquitous displays,' said researchers from the California Institute for Telecommunications and Information Technology (Calit2). As the term implies, ubiquitous displays may soon be used just about everywhere, from huge domes to small cell phones, from amusement parks to doctors' exam rooms.


While amusement parks, flight training operations and others have long created virtual reality environments, the UCI group's software is compatible with new digital equipment and allows the use of everyday cameras and far cheaper projectors. Perhaps most important, the calibration process between the camera and the projectors - key to image quality - is completely automated.

More information:

http://www.uci.edu/features/2011/10/feature_panorama_111010.php

17 October 2011

Robot Biologist

Now computers are at it again, but this time they are trying to automate the scientific process itself. An interdisciplinary team of scientists at Vanderbilt University, Cornell University and CFD Research Corporation, Inc., has taken a major step toward this goal by demonstrating that a computer can analyze raw experimental data from a biological system and derive the basic mathematical equations that describe the way the system operates. According to the researchers, it is one of the most complex scientific modeling problems that a computer has solved completely from scratch. The biological system the researchers used to test their software, the Automated Biology Explorer (ABE), is glycolysis, the primary process that produces energy in a living cell.


Specifically, they focused on the manner in which yeast cells control fluctuations in the chemical compounds produced by the process. The researchers chose this particular system, called glycolytic oscillations, to perform a virtual test of the software because it is one of the most extensively studied biological control systems. They used one of the detailed mathematical models of the process to generate a data set corresponding to the measurements a scientist would make under various conditions. To increase the realism of the test, the researchers salted the data with 10 percent random error. When they fed the data into Eureqa, the equation-discovery engine behind ABE, it derived a series of equations that were nearly identical to the known ones.
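
A small sketch of the 'salting' step, under stated assumptions: data generated from a stand-in model is perturbed with 10 per cent relative random error before being handed to the equation-discovery software. The signal below is not the actual glycolysis model.

    # Illustrative "salting" of model-generated data with 10% relative error.
    import numpy as np

    rng = np.random.default_rng(42)
    t = np.linspace(0, 10, 200)
    clean = 2.0 + 0.5 * np.sin(1.3 * t)          # stand-in for a metabolite oscillation

    noisy = clean * (1 + 0.10 * rng.standard_normal(clean.shape))  # 10% relative error
    print(f"mean absolute error introduced: {np.mean(np.abs(noisy - clean)):.3f}")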

More information:

http://news.vanderbilt.edu/2011/10/robot-biologist/

16 October 2011

Kinect Merges Real and Virtual Worlds

Microsoft's Kinect Xbox controller, which lets gamers control on-screen action with their body movements, has been adapted in hundreds of interesting, useful, and occasionally bizarre ways since its release in November 2010. It's been used for robotic vision and automated home lighting. It's helped wheelchair users with their shopping. Yet these uses could look like child's play compared to the new 3D modeling capabilities Microsoft has developed for the Kinect. KinectFusion is a research project that lets users generate high-quality 3D models in real time using a standard $100 Kinect.


KinectFusion also includes a realistic physics engine that allows scanned objects to be manipulated in realistic ways. The technology allows objects, people, and entire rooms to be scanned in 3D at a fraction of the normal cost. Imagine true-to-life avatars and objects being imported into virtual environments. Or a crime scene that can be re-created within seconds. Visualizing a new sofa in your living room and other virtual interior design tricks could become remarkably simple. 3D scanners already exist, but none of them approach KinectFusion in ease of use and speed, and even desktop versions cost around $3,000.

More information:

http://www.technologyreview.com/computing/38731/

http://research.microsoft.com/en-us/projects/surfacerecon/

14 October 2011

Games May Not Boost Cognition

Over the past decade, many studies and news media reports have suggested that action video games such as Medal of Honor or Unreal Tournament improve a variety of perceptual and cognitive abilities. But in a paper published this week in the journal Frontiers in Psychology, Walter Boot, an assistant professor in Florida State University's Department of Psychology, critically re-evaluates those claims. He argues that much of the work done over the past decade demonstrating the benefits of video game play is fundamentally flawed. Many of those studies compared the cognitive skills of frequent gamers to non-gamers and found gamers to be superior.


However, the new research points out that this doesn't necessarily mean that gaming experience caused better perceptual and cognitive abilities. It could be that individuals who already have the abilities required to be successful gamers are simply drawn to gaming. Researchers looking for cognitive differences between expert and novice gamers often recruit participants by circulating ads on college campuses seeking 'expert' video game players - an approach that can prime those participants to expect to perform well. Media reports on the superior skills of gamers heighten gamers' awareness of these expectations. Even studies in which non-gamers are trained to play action video games have their own problems, often in the form of weak control groups.

More information:

http://www.sciencedaily.com/releases/2011/09/110915131637.htm

12 October 2011

It's All About the Hair

Researchers from the Jacobs School of Engineering at UC San Diego got to rub shoulders with Hollywood celebrities. They have developed a new way to light and animate characters’ hair. It is now part of Disney’s production pipeline and will be used in the company’s upcoming movies.


The researchers surveyed the available research on improving the appearance of animated hair. The new software they developed allows artists to control the sheen, color and highlights of characters' hair. It models a phenomenon called light scattering - and blonde hair exhibits a lot more of it than brunette hair.

More information:

http://www.jacobsschool.ucsd.edu/news/news_releases/release.sfe?id=1122

11 October 2011

Robot Revolution?

From performing household chores, to entertaining and educating our children, to looking after the elderly, roboticists say we will soon be welcoming their creations into our homes and workplaces. Researchers believe we are on the cusp of a robot revolution that will mirror the explosive growth of the computer revolution from the 1980s onwards. They are developing new laws for robot behaviour, and designing new ways for humans and robots to interact.


Commercially available robots are already beginning to perform everyday tasks like vacuuming our floors. The latest prototypes from Japan are able to help the elderly to get out of bed or get up after a fall. They can also remind them when to take medication, or even help wash their hair. Researchers found that people react well to a robot gym instructor, and seem to get less frustrated with it than with instructions given on a computer screen. The robot can act as a perfect trainer, with infinite patience.

More information:

http://www.bbc.co.uk/news/technology-15146053

10 October 2011

Mind-Reading Car

One of the world's largest motor manufacturers is working with scientists based in Switzerland to design a car that can read its driver's mind and predict his or her next move. The collaboration, between Nissan and the École Polytechnique Fédérale de Lausanne (EPFL), is intended to balance the necessities of road safety with demands for personal transport. Scientists at the EPFL have already developed brain-machine interface (BMI) systems that allow wheelchair users to manoeuvre their chairs by thought alone. Their next step will be finding a way to incorporate that technology into the way motorists interact with their cars.


If the endeavour proves successful, the vehicles of the future may be able to prepare themselves for a left or right turn by gauging that their drivers are thinking about making such a turn. However, although BMI technology is well established, the levels of human concentration needed to make it work are extremely high, so the research team is working on systems that will use statistical analysis to predict a driver's next move and to evaluate a driver's cognitive state relevant to the driving environment. By measuring brain activity, monitoring patterns of eye movement and scanning the environment around the car, the team thinks the car will be able to predict what a driver is planning to do and help him or her complete the manoeuvre safely.

More information:

http://www.guardian.co.uk/technology/2011/sep/28/nissan-car-reads-drivers-mind?newsfeed=true

30 September 2011

Virtual Monkeys Write Shakespeare

A few million virtual monkeys are close to re-creating the complete works of Shakespeare by randomly mashing keys on virtual typewriters. A running total of how well they are doing shows that the re-creation is 99.990% complete. The first single work to be completed was the poem A Lover's Complaint. It is also a practical test of the thought experiment that wonders whether an infinite number of monkeys pounding on an infinite number of typewriters would be able to produce Shakespeare's works by accident. The virtual monkeys are small computer programs uploaded to Amazon servers.


These coded apes regularly pump out random sequences of text. Each sequence is nine characters long and each is checked to see if that string of characters appears anywhere in the works of Shakespeare. If not, it is discarded. If it does match then progress has been made towards re-creating the works of the Bard. To get a sense of the scale of the project, there are about 5.5 trillion different combinations of any nine characters from the English alphabet. The monkeys are generating random nine-character strings to try to produce all these strings and thereby find those that appear in Shakespeare's works.
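
A toy-scale sketch of that procedure: random strings are generated and kept only if they appear in the target text. Three-letter strings and a one-sentence corpus stand in for the real nine-character strings and Shakespeare's complete works, so the loop finishes quickly.

    # Toy version of the monkeys' job: generate random strings, keep the ones
    # that occur in the target text. Shorter strings are used so it terminates.
    import random
    import string

    print(f"nine-letter combinations over a-z: {26 ** 9:,}")   # the trillions quoted above

    corpus = "to be or not to be that is the question".replace(" ", "")
    k = 3                                                      # toy length, not 9
    targets = {corpus[i:i + k] for i in range(len(corpus) - k + 1)}

    found, attempts = set(), 0
    while found != targets:
        attempts += 1
        s = "".join(random.choices(string.ascii_lowercase, k=k))
        if s in targets:
            found.add(s)
    print(f"reproduced all {len(targets)} {k}-letter chunks after {attempts:,} attempts")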

More information:

http://www.bbc.co.uk/news/technology-15060310

26 September 2011

The Cyborg in Us All

Within the next decade there is likely to emerge a new kind of brain implant for healthy people who want to interact with and control machines by thought. One technology under development is the electrocorticographic (ECoG) implant, which is less invasive than other devices and rides on the surface of the brain without penetrating it, sensing the activity of neuron populations and transmitting their communications to the outside world as software commands. Research to study the potential of ECoG implants is being funded by the U.S. Defense Department as part of a $6.3 million Army project to create devices for telepathic communication.


Carnegie Mellon University researchers are most eager to see a 'two-way direct-brain interface' that would revolutionize human experience. They took advantage of the implant to see if patients could control the action in a video game called Galaga using only their thoughts. Patients flick the spaceship back and forth by imagining that they are moving their tongue. This creates a pulse in the brain that travels through the wires into a computer; thus, a thought becomes a software command. An even less invasive brain-machine interface than the ECoG implant is being researched at Dartmouth College, where scientists are creating an iPhone linked to an electroencephalography headset.

More information:

http://www.nytimes.com/2011/09/18/magazine/the-cyborg-in-us-all.html?_r=1

24 September 2011

RePro3D

Lonely gamers who have felt the pain of being separated by a screen from their favorite personalities now have a way to reach out and touch their game characters, and that new way is RePro3D. A group of researchers from Keio University in Japan has come up with a 3D screen that lets the user, glasses-free, see and touch characters on the screen. The technology combines a 3D parallax display with an infrared camera that recognizes the movements of the user's hand, so the character on the screen reacts to those movements instantly.


The researchers use retro-reflective projection technology, based on materials with special retro-reflective characteristics. This kind of material reflects light back at the same angle it entered, which enables a display to show images at a different place from the light source. A tactile device worn on the user's fingers is designed to enhance the sensation of touching objects on the 3D screen. In the future, the team plans to build a touchable 3D display system that expands the size of the visible image, so that multiple people can be in the same space and share the same image.

More information:

http://www.physorg.com/news/2011-09-lonely-gamers-repro3d-characters-video.html

23 September 2011

AR Gesture Recognition

To make its business software more effective, HP recently paid $10 billion for Autonomy, a U.K. software company that specializes in machine learning. But it turns out that Autonomy has developed image-processing techniques for gesture-recognizing augmented reality (AR). AR involves layering computer-generated imagery on top of a view of the real world as seen through the camera of a smart phone or tablet computer. So someone looking at a city scene through a device could see tourist information on top of the view. Autonomy's new AR technology, called Aurasma, recognizes a user's hand gestures. This means a person using the app can reach out in front of the device to interact with the virtual content. Previously, interacting with AR content involved tapping the screen. One demonstration released by Autonomy creates a virtual air hockey game on top of an empty tabletop—users play by waving their hands.


Autonomy's core technology lets businesses index and search data that conventional, text-based search engines struggle with. Examples are audio recordings of sales calls, or video from surveillance cameras. Aurasma's closest competitor is Layar, a Netherlands company that offers an AR platform that others can add content to. However, Layar has so far largely relied on GPS location to position content, and only recently made it possible to position virtual objects more precisely, using image recognition. And Layar does not recognize users' gestures. Although mobile phones and tablets are the best interfaces available for AR today, the experience is still somewhat clunky, since a person must hold up a device with one hand at all times. Sci-fi writers and technologists have long forecast that the technology would eventually be delivered through glasses. Recognizing hand movements would be useful for such a design, since there wouldn't be the option of using a touch screen or physical buttons.

More information:

http://www.technologyreview.com/communications/38568/

19 September 2011

Caring, Empathetic Robots

Robots may one day learn to care for and nurture one another, according to research by an OU professor. Computer scientists in the OU Robotic Intelligence and Machine Learning Lab are investigating whether robots can learn to care for one another and, eventually, humans.


The researchers note that most organisms are born with instincts that tell them how to survive, but if an organism is in a rapidly changing environment, those instincts may not be applicable and it will have to learn new skills. From there, the idea of having a nurturer seems the most logical.

More information:

http://www.oudaily.com/news/2011/sep/14/ou-professor-conducts-research-robot-care-nurturin/

18 September 2011

3D 'Daddy Long Legs'

Two ancient types of harvestmen, or 'daddy long legs,' which skittered around forests more than 300 million years ago, are revealed in new three-dimensional virtual fossil models published in the journal Nature Communications. An international team, led by researchers from Imperial College London, have created 3D models of two fossilised species of harvestmen, from the Dyspnoi and Eupnoi suborders. The ancient creatures lived on Earth before the dinosaurs, in the Carboniferous period. The 3D models are providing fresh insights into how these ancient eight-legged creatures, whose 1cm bodies were the size of small buttons, survived in Earth's ancient forests and how harvestmen as a group have evolved. Other scientists have previously suggested that harvestmen were among the first groups on land whose bodies evolved into their modern-day form at a time when other land animals such as spiders and scorpions were still at an early stage in their evolution. The researchers say comparing the 3D fossils of the Dyspnoi and Eupnoi species to modern members of these harvestmen groups provides further proof that ancient and modern species were very similar in appearance, suggesting little change over millions of years. The 3D virtual fossil models have also provided the researchers with further proof that the Dyspnoi and Eupnoi lineages had evolved from a common harvestman ancestor around 305 million years ago. The researchers say their work supports earlier DNA-based studies and is important because it provides a clearer picture of the early evolution of these creatures.


The researchers also found clues as to how both creatures may have lived hundreds of millions of years ago. The team believes that the Eupnoi probably lived in foliage just above the forest floor, which may have helped it to hide from predators lurking on the ground. The 3D model of the Eupnoi revealed that it had long legs with a curvature at the end that are similar to the legs of its modern relatives, who use the curved leg parts to grip onto vegetation while moving from leaf to leaf. The researchers also determined that the Eupnoi's body had a very thin and soft outer shell or exoskeleton by analysing a section of the 3D fossil showing a part of its abdomen that had been crushed during the fossilisation process. This indicated to the team the fragility of the Eupnoi's exoskeleton. It is rare to find fossilised remains of harvestmen because their soft, tiny, fragile bodies are difficult to preserve during the fossilisation process. Only around 33 fossilised species have been discovered so far. Currently, most palaeontologists analyse fossils by splitting open a rock and looking at the creatures encased inside. This means that they can often only see part of a three-dimensional fossil and cannot explore all of the fossil's features. The method used in today's study is called 'computed tomography' and it enables researchers to produce highly detailed virtual models using a CT scanning device based at the Natural History Museum in London. In this study, scientists took 3,142 X-rays of the fossils and compiled the images into accurate 3D models, using specially designed computer software. This research follows on from previous modelling studies carried out by Imperial researchers on other prehistoric creatures, including ancient spiders called Anthracomartus hindi and Eophrynus prestivicii, and an early ancestor of the cockroach called Archimylacris eggintoni.

More information:

http://www.sciencedaily.com/releases/2011/08/110823115149.htm

16 September 2011

Will OnLive Kill the Console?

OnLive is a fairly simple idea. Instead of using a console or a computer to run a game for you, the system uses a server over the internet. It's the implications of that idea that, if they work, are nothing short of revolutionary. Your controller or keyboard sends your input over the internet to an OnLive server, which then bounces back to you the result of your action onscreen. There's no physical disc, and not even any download time - you can start a 30-minute game demo in seconds, for free. Or rent or buy games that are linked to your account (UK pricing hasn't been announced yet; US pricing is typically around $5 for a three-day rental and $50 for a new game). And that means you can take them anywhere, play them on anything.


The same game, with progress tracked, can be played on a PC, Mac, big-screen TV with a ‘micro-console’ and controller, Android tablet or iPad (from this autumn) and even, in the future, on an internet-enabled TV or Blu-ray player. So you can start a game at work in your lunchtime, continue it on a tablet on wi-fi on the way home and finish it on your big TV. For games companies, that means no piracy, and no physical distribution hassles. For gamers, as well as portability and instant availability, it also means you can watch anyone else's game (even talk to them while they play), from a megalomania-inducing bank of screens of games happening right that second.

More information:

http://games.uk.msn.com/previews/will-onlive-kill-the-console-14092011

05 September 2011

Robot Teaches English

Say 'How do you do' to Mike and Michelle, face-to-face tutors for English learners. They'll correct your grammar, answer questions, converse on a variety of topics, be there 24/7, and won't charge a dime. And they're doing very well, thank you. The on-screen 'English Tutor' interactive robots and their creator (from Pasadena City College) are heading to England's Exeter University in October as one of four finalists in the 2011 Loebner Prize for Artificial Intelligence.

Over the years, the program has grown more sophisticated, and the robots can now chat on 25 topics in 2,000 available conversations. The robots can detect the 800 most common errors that English learners make, Lee said, and know all the irregular verbs, provide different tenses, explain grammatical terms and give advice on how to learn English. Users still have to type in their questions rather than speak, although he said users with speech recognition software can talk into the microphone.

More information:

http://www.pasadenastarnews.com/news/ci_18767575

28 August 2011

Build Music With Blocks

Researchers at the University of Southampton have developed a new way to generate music and control computers. Audio d-touch is based on tangible user interfaces, or TUIs, which give physical control in the immaterial world of computers. It uses a standard computer and a web cam. Using simple computer vision techniques, physical blocks are tracked on a printed board. The position of the blocks then determines how the computer samples and reproduces sound.


Audio d-touch is more than just for play: TUIs are an alternative to virtual worlds. Human-Computer Interaction researchers are investigating ways to move away from the online, purely digital world and rediscover the richness of our sense of touch. All that is needed is a regular computer equipped with a web-cam and a printer. The user creates physical interactive objects and attaches printed visual markers recognized by Audio d-touch. The software platform is open and can be extended for applications beyond music synthesis.

More information:

http://www.soton.ac.uk/mediacentre/news/2011/aug/11_83.shtml

27 August 2011

Virtual Touch Feels Tumours

Tactile feedback technology could give keyhole surgeons a virtual sense of feeling tumours while operating. A Leeds University study has combined computer virtualisation with a device that simulates pressure on a surgeon's hand when touching human tissue remotely. This could enable a medic to handle a tumour robotically, and judge if it is malignant or benign. Cancer specialists hope the new system will help to improve future treatment. In current keyhole procedures, a surgeon operates through a tiny incision in the patient's body, guided only by video images. Using keyhole techniques, as opposed to major invasive surgery, helps improve healing and patient recovery. However, surgeons can't feel the tissue they are operating on - something which might help them to find and categorise tumours.

The team of undergraduates at Leeds University has devised a solution that combines a computer-generated virtual simulation with a hand-held haptic feedback device. The system works by varying feedback pressure on the user's hand when the density of the tissue being examined changes. In tests, team members simulated tumours in a human liver using a soft block of silicone embedded with ball bearings. The user was able to locate these lumps using haptic feedback. Engineers hope this will one day allow a surgeon to feel for lumps in tissue during surgery. The project has just been declared one of four top student designs in a global competition run by US technology firm National Instruments.

More information:

http://www.bbc.co.uk/news/technology-14540581

22 August 2011

Zoobotics

Until recently, most robots could be thought of as belonging to one of two phyla. The Widgetophora, equipped with claws, grabs and wheels, stuck to the essentials and did not try too hard to look like anything other than machines. The Anthropoidea, by contrast, did their best to look like their creators—sporting arms with proper hands, legs with real feet, and faces. The few animal-like robots that fell between these extremes were usually built to resemble pets and were, in truth, not much more than just amusing toys. They are toys no longer, though, for it has belatedly dawned on robot engineers that they are missing a trick. The great natural designer, evolution, has come up with solutions to problems that neither the Widgetophora nor the Anthropoidea can manage. Why not copy these proven models, the engineers wondered, rather than trying to outguess 4 billion years of natural selection? The result has been a flourishing of animal-like robots. It is not just dogs that engineers are copying now, but shrews complete with whiskers, swimming lampreys, grasping octopuses, climbing lizards and burrowing clams.


They are even trying to mimic insects, by making robots that take off when they flap their wings. As a consequence, the Widgetophora and the Anthropoidea are being pushed aside. The phylum Zoomorpha is on the march. Researchers at the Sant'Anna School of Advanced Studies in Pisa are a good example of this trend. They lead an international consortium that is building a robotic octopus. To create their artificial cephalopod they started with the animal's literal and metaphorical killer app: its flexible, pliable arms. In a vertebrate's arm, muscles do the moving and bones carry the weight. An octopus arm, though, has no bones, so its muscles must do both jobs. Its advantage is that, besides grasping things tightly, it can also squeeze into nooks and crannies that are inaccessible to vertebrate arms of similar dimensions. After studying how octopus arms work, the researchers have come up with an artificial version that behaves the same way. Its outer casing is made of silicone and is fitted with pressure sensors so that it knows what it is touching. Inside this casing are cables and springs made of a specially elastic nickel-titanium alloy. The result can wrap itself around an object with a movement that strikingly resembles that of the original.

More information:

http://www.economist.com/node/18925855

21 August 2011

Chips That Behave Like Brains

Computers, like humans, can learn. But when Google tries to fill in your search box based on only a few keystrokes, or your iPhone predicts words as you type a text message, it's only a narrow mimicry of what the human brain is capable of. The challenge in training a computer to behave like a human brain is technological and physiological, testing the limits of computer and brain science. But researchers from IBM Corp. say they've made a key step toward combining the two worlds. The company announced Thursday that it has built two prototype chips that it says process data more like the way humans digest information than the chips that now power PCs and supercomputers.


The chips represent a significant milestone in a six-year-long project that has involved 100 researchers and some $41 million in funding from the government's Defense Advanced Research Projects Agency, or DARPA. IBM has also committed an undisclosed amount of money. The prototypes offer further evidence of the growing importance of "parallel processing," or computers doing multiple tasks simultaneously. That is important for rendering graphics and crunching large amounts of data. The uses of the IBM chips so far are prosaic, such as steering a simulated car through a maze, or playing Pong. It may be a decade or longer before the chips make their way out of the lab and into actual products.
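The article does not describe the chips' internals, but the flavour of "neuromorphic" computation they aim at can be illustrated in software with a textbook leaky integrate-and-fire neuron, which accumulates input, leaks charge over time and emits discrete spikes. The parameters in this Python sketch are generic illustrative values, not IBM's design.

# A minimal leaky integrate-and-fire neuron in software, to illustrate the kind of
# spiking, event-driven computation neuromorphic chips emulate in silicon.

def simulate_lif(input_current, steps=50, leak=0.9, threshold=1.0):
    """Integrate input current with leak; emit a spike and reset when the
    membrane potential crosses the threshold."""
    v, spikes = 0.0, []
    for t in range(steps):
        v = leak * v + input_current
        if v >= threshold:
            spikes.append(t)
            v = 0.0  # reset after spiking
    return spikes

print(simulate_lif(0.15))  # stronger input current -> more frequent spikes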

More information:

http://www.physorg.com/news/2011-08-ibm-pursues-chips-brains.html

17 August 2011

Virtual People Get ID Checks

Using both an avatar's appearance and its behaviour, researchers hope to develop techniques for checking whether digital characters are who they claim to be. Such information could be used in situations where login details are not visible or for law enforcement. Impersonation of avatars is expected to become a growing problem as real life and cyberspace increasingly merge. Avatars are typically used to represent players in online games such as World of Warcraft and in virtual communities like Second Life. As their numbers grow, it will become important to find ways to identify those we meet regularly, according to researchers from the University of Louisville. Working out whether an avatar's controller is male or female has an obvious commercial benefit.


But discovering that the same person controlled different avatars in separate spaces would be even more useful. As avatars proliferate, we will need ways of telling one from the other. The technology may also have implications for security if a game account is hacked and stolen: behavioural analysis could help prove whether an avatar is under the control of its usual owner by watching to see whether it acts out of character. The research looked at monitoring for signature gestures, movements and other distinguishing characteristics. The researchers discovered that the lack of possible variations in an avatar's digital face, compared with a real human face, made identification tricky. However, those limited options are relatively simple to measure, because of the straightforward geometries involved in computer-generated images.
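As a hedged illustration of what behavioural matching might involve, one could summarise a play session as a vector of gesture frequencies and compare it against the account owner's stored profile using cosine similarity. The gesture names, numbers and threshold in this Python sketch are invented and are not taken from the Louisville research.

# Hypothetical behavioural matching: compare a session's gesture-frequency vector
# against the account owner's stored profile with cosine similarity.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

owner_profile = {"wave": 0.40, "jump": 0.15, "dance": 0.30, "salute": 0.15}
session       = {"wave": 0.10, "jump": 0.55, "dance": 0.05, "salute": 0.30}

keys = sorted(owner_profile)
score = cosine([owner_profile[k] for k in keys], [session[k] for k in keys])
print(f"similarity {score:.2f} ->",
      "consistent" if score > 0.9 else "acting out of character")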

More information:

http://www.bbc.co.uk/news/technology-14277728

16 August 2011

Computers Synthesize Sounds

Computer-generated imagery usually relies on recorded sound to complete the illusion. Recordings can, however, limit the range of sounds you can produce, especially in future virtual reality environments where you can't always know ahead of time what the action will be. Researchers developed computer algorithms to synthesize sound on the fly based on simulated physics models. Now they have devised methods for synthesizing more realistic sounds of hard objects colliding and the roar of fire. To synthesize collision sounds, the computer calculates the forces computer-generated objects would exert if they were real, how those forces would make the objects vibrate and how those vibrations transfer to the air to make sound. Previous efforts often assumed that the contacting objects were rigid, but in reality there is no such thing as a rigid object, researchers say. Objects vibrate when they collide, which can produce further chattering and squeaking sounds.
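A much-simplified picture of this force-to-vibration-to-sound pipeline is modal synthesis: an impact excites a handful of damped vibration modes of an object, and the summed modes form the audio signal. In the Python sketch below the mode frequencies, dampings and gains are made up for illustration; the Cornell work derives such quantities from physical simulation of the colliding objects.

# Stripped-down modal synthesis: an impulse excites a few damped sinusoidal modes
# whose sum is the output audio. Mode parameters here are placeholders.

import math

SAMPLE_RATE = 44100
MODES = [(880.0, 6.0, 1.0), (1320.0, 9.0, 0.5), (2640.0, 14.0, 0.25)]  # (Hz, damping, gain)

def impact_sound(impulse=1.0, duration=0.5):
    samples = []
    for n in range(int(SAMPLE_RATE * duration)):
        t = n / SAMPLE_RATE
        s = sum(g * impulse * math.exp(-d * t) * math.sin(2 * math.pi * f * t)
                for f, d, g in MODES)
        samples.append(s)
    return samples

audio = impact_sound()
print(len(audio), "samples; peak amplitude", round(max(abs(s) for s in audio), 3))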


Resolving all the frictional contact events between rapidly vibrating objects is computationally expensive. To speed things up, their algorithm simulates only the fraction of contacts and vibrations needed to synthesize the sound. Demonstrations include the sound of a ruler overhanging the edge of a table and buzzing when plucked, pounding on a table to make dishes clatter and ring, and the varied sounds of a Rube Goldberg machine that rolls marbles into a cup that moves a lever that pushes a bunny into a shopping cart that rolls downhill. Fire is animated by mimicking the chemical reactions and fluid-like flow of burning gases. Flame sounds, however, come from events that happen very rapidly in the expanding gases, and computer animators do not need to model those costly details to get good-looking flames. The researchers demonstrated the method with a fire-breathing dragon statue, a candle in the wind, a torch swinging through the air, a jet of flame injected into a small chamber and a burning brick. The last simulation was run with several variations of the sound-synthesis method, and the results were compared with a high-speed video and sound recording of a real burning brick.

More information:

http://www.news.cornell.edu/stories/Aug11/FireContactSound.html

06 August 2011

Robots With Ability to Learn

Researchers with the Hasegawa Group at the Tokyo Institute of Technology have created a robot that is capable of applying learned concepts to perform new tasks. Using a type of self-replicating neural technology they call the Self-Organizing Incremental Neural Network (SOINN), the team has released a video demonstrating the robot’s ability to understand its environment and to carry out instructions that it previously did not know how to perform. The robot, which apparently has no name because the demonstration is about the neural technology rather than the robot itself, is capable of figuring out what to do next in a given situation by storing information in a network constructed to mimic the human brain. For example, the team demonstrates its technology by asking the robot to fill a cup with water from a bottle, which it does quickly and ably. This part is nothing new; the robot is simply following predefined instructions. On the next go-round, however, the robot is asked to cool the beverage while in the middle of carrying out the same instructions as before. This time, the robot has to pause to consider what it must do to carry out the new request. It immediately sees that it cannot do so under the current circumstances, because both of its hands are already in use (one holding the cup, the other the bottle), so it sets the bottle down, reaches over to retrieve an ice cube and promptly deposits it in the cup.
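The SOINN algorithm itself is considerably more sophisticated, but the spirit of an incrementally growing memory can be conveyed with a toy Python sketch: experiences arrive as feature vectors, a new node is created whenever nothing stored is similar enough, and otherwise the nearest node drifts toward the input. The distance threshold and learning rate below are illustrative assumptions, not parameters of the Hasegawa Group's system.

# Toy incremental memory, far simpler than SOINN, to illustrate a network that
# grows a node for novel experiences and refines existing nodes for familiar ones.

import math

class IncrementalMemory:
    def __init__(self, new_node_threshold=1.0, learning_rate=0.2):
        self.nodes = []
        self.threshold = new_node_threshold
        self.rate = learning_rate

    def observe(self, x):
        if not self.nodes:
            self.nodes.append(list(x))
            return
        nearest = min(self.nodes, key=lambda n: math.dist(n, x))
        if math.dist(nearest, x) > self.threshold:
            self.nodes.append(list(x))          # novel situation: grow the network
        else:
            for i, xi in enumerate(x):          # familiar situation: refine the node
                nearest[i] += self.rate * (xi - nearest[i])

memory = IncrementalMemory()
for experience in ([0.0, 0.1], [0.1, 0.0], [3.0, 3.1], [0.05, 0.05]):
    memory.observe(experience)
print(len(memory.nodes), "stored prototypes")   # two distinct situations -> two nodes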


This little demonstration, while not all that exciting to watch, represents a true leap forward in robotics technology and programming. Being able to learn means that the robot can be programmed with just a very basic set of pre-knowledge that is then built upon for as long as the robot exists, without additional programming; not unlike how human beings start out with very little information at birth and build upon what they know and can do over a lifetime. The robot has an advantage, though, because it can learn not only from its own experiences but also from those of others all over the world. It can be connected to the internet, where it can research how to do things, just as humans already do. In addition, it could conceivably learn from other robots just like it that have already worked out how to do the thing that needs doing. As an example, one of the research team members describes a situation in which a robot given to an elderly man as a nurse is asked to make him some tea. If the robot does not know how, it could simply ask another robot online that does. Remarkably, the first robot could do so even if it is trying to make English tea and the robot answering the query has made only Japanese tea before. The lessons the first robot has learned over time would allow it to adapt, and that is why this breakthrough is so important: given enough time and experience, robots may finally be able to do all those things we have been watching them do in science fiction movies, and likely more.

More information:

http://www.physorg.com/news/2011-08-robot-ability.html

01 August 2011

Turning Thought into Motion

Brain cap technology being developed at the University of Maryland allows users to turn their thoughts into motion. Researchers have created a non-invasive, sensor-lined cap with neural interface software that soon could be used to control computers, robotic prosthetic limbs, motorized wheelchairs and even digital avatars. The potential and rapid progression of the UMD brain cap technology can be seen in a host of recent developments, including a just published study in the Journal of Neurophysiology, new grants from the National Science Foundation (NSF) and National Institutes of Health, and a growing list of partners that includes the University of Maryland School of Medicine, the Veterans Affairs Maryland Health Care System, the Johns Hopkins University Applied Physics Laboratory, Rice University and Walter Reed Army Medical Center's Integrated Department of Orthopaedics & Rehabilitation.


Researchers use EEG to non-invasively read brain waves and translate them into movement commands for computers and other devices. They are also collaborating on a rapidly growing cadre of projects with researchers at other institutions to develop thought-controlled robotic prosthetics that can assist victims of injury and stroke. They have tracked the neural activity of people on a treadmill doing precise tasks such as stepping over dotted lines, matching specific brain activity recorded in real time with exact lower-limb movements. This data could help stroke victims in several ways. People who are less mobile commonly suffer from other health issues such as obesity, diabetes or cardiovascular problems, so getting patients moving again by whatever means possible is valuable in itself. A second use of the EEG data in stroke victims offers exciting possibilities: decoding the motion of a normal gait.
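In many non-invasive brain-computer interface studies, the decoding step amounts to a linear model that maps per-channel EEG features, such as band power, to a movement command. The Python sketch below is a generic example with placeholder weights and feature values; it is not the Maryland group's actual pipeline.

# Generic linear EEG decoder: a vector of per-channel features is mapped to a
# 2-D velocity command by a weight matrix learned offline. All numbers are
# placeholders for illustration.

def decode_velocity(eeg_features, weights, bias):
    """Map EEG features (e.g. band power per channel) to a 2-D cursor/limb velocity."""
    vx = sum(w * f for w, f in zip(weights[0], eeg_features)) + bias[0]
    vy = sum(w * f for w, f in zip(weights[1], eeg_features)) + bias[1]
    return vx, vy

features = [0.8, 0.2, 0.5, 0.1]                                  # hypothetical band powers, 4 channels
weights = [[0.5, -0.2, 0.1, 0.0], [0.0, 0.3, -0.1, 0.4]]          # hypothetical trained weights
print(decode_velocity(features, weights, bias=[0.0, 0.0]))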

More information:

http://www.newsdesk.umd.edu/scitech/release.cfm?ArticleID=2475

25 July 2011

Humanlike Computer Vision

Two new techniques for computer vision mimic how humans perceive three-dimensional shapes by instantly recognizing objects no matter how they are twisted or bent, an advance that could help machines see more like people. The techniques, called heat mapping and heat distribution, apply mathematical methods to enable machines to perceive three-dimensional objects, say researchers at Purdue. Both techniques build on the basic physics and mathematical equations describing how heat diffuses over surfaces. As heat diffuses over a surface, it follows and captures the precise contours of a shape. The system takes advantage of this "intelligence" of heat, simulating heat flowing from one point to another and, in the process, characterizing the shape of an object. A major limitation of existing methods is that they require prior information about a shape for it to be analyzed. The researchers tested their method on complex shapes, including the human form and a centaur – a mythical half-human, half-horse creature. Heat mapping allows a computer to recognize the objects no matter how the figures are bent or twisted and is able to ignore noise introduced by imperfect laser scanning or other erroneous data.


The new methods mimic the human ability to properly perceive objects because they do not require a preconceived idea of how many segments an object has. The methods have many potential applications: a 3D search engine to find mechanical parts such as automotive components in a database; robot vision and navigation; 3D medical imaging; military drones; multimedia gaming; creating and manipulating animated characters in film production; helping 3D cameras to understand human gestures for interactive games; and progress in areas of science and engineering related to pattern recognition, machine learning and computer vision. The heat-mapping method works by first breaking an object into a mesh of triangles, the simplest shape that can characterize a surface, and then calculating the flow of heat over the meshed object. The method does not involve actually tracking heat; it simulates the flow of heat using well-established mathematical principles. This lets a computer recognize an object, such as a hand or a nose, no matter how the fingers are bent or the nose is deformed, while ignoring scanning noise and other erroneous data.
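A heavily simplified way to see what "simulating heat flow over a mesh" means is to treat the mesh vertices as a graph, build the graph Laplacian L, and step the heat equation u' = -Lu forward from a point source; the heat values after a short time reflect how the shape hangs together. The tiny connectivity, time step and vertex count in this Python sketch are arbitrary, and the actual Purdue method works with the mesh geometry rather than bare connectivity.

# Toy heat diffusion on a graph standing in for a triangle mesh: build the graph
# Laplacian from connectivity, then take explicit Euler steps of u' = -L u.

import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # tiny example connectivity
n = 4
adjacency = np.zeros((n, n))
for i, j in edges:
    adjacency[i, j] = adjacency[j, i] = 1.0
laplacian = np.diag(adjacency.sum(axis=1)) - adjacency

heat = np.zeros(n)
heat[0] = 1.0                                      # unit of heat placed at vertex 0
dt = 0.05
for _ in range(100):                               # heat spreads toward a uniform state
    heat = heat - dt * laplacian.dot(heat)

print(np.round(heat, 3))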

More information:

http://www.purdue.edu/newsroom/research/2011/110620RamaniHeat.html