30 November 2010

Sensors Monitor Elderly at Home

The sensors know when an elderly user wakes up to go to the bathroom. They know how much time he spends in bed. They watch him do jigsaw puzzles in the den. They tattle when he opens the refrigerator. Sensor networks, which made their debut in hospitals and assisted living centers, have been creeping into the homes of some older Americans in recent years. The systems -- which can monitor a host of things, from motion in particular rooms to whether a person has taken his or her medicine -- collect information about a person's daily habits and condition, and then relay that information in real time to doctors or family members. If the user opens an exterior door at night, for example, an alert goes out to his doctor, a monitoring company and two of his closest friends, since there is no family nearby.

The monitoring network, made by a company called GrandCare Systems, features motion sensors in every room as well as sensors on every exterior door. A sensor beneath the mattress pad on his bed tells health care professionals if he's sleeping regularly. All of this connects wirelessly with vital sign monitors, which send his doctor daily reports about his blood-sugar levels, blood pressure and weight. He can see charts about how he's doing on a touch-screen monitor that sits on a desk in his home office. University researchers are testing robots that help take care of older people, keep them company -- and even give them sponge baths. Meanwhile, some younger people have taken to collecting information on their own, often going to extremes to document exercise routines, caffeine intake and the like and posting the data online.
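
As a rough illustration of how such a night-time door alert might be wired up, the sketch below assumes an invented event format, night-time window and recipient list; it is not GrandCare's actual interface.

# Hypothetical sketch of the kind of alert rule such a system might run;
# the event fields and recipients are illustrative, not GrandCare's API.
from datetime import datetime, time

NIGHT_START, NIGHT_END = time(22, 0), time(6, 0)

def is_night(ts: datetime) -> bool:
    t = ts.time()
    return t >= NIGHT_START or t <= NIGHT_END

def handle_event(event: dict, notify) -> None:
    """Send an alert if an exterior door opens during the night."""
    if event["sensor"] == "exterior_door" and event["state"] == "open":
        if is_night(event["timestamp"]):
            for contact in ("doctor", "monitoring_company", "friend_1", "friend_2"):
                notify(contact, f"Exterior door opened at {event['timestamp']:%H:%M}")

# Example:
# handle_event({"sensor": "exterior_door", "state": "open",
#               "timestamp": datetime(2010, 11, 30, 2, 14)}, print)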

More information:

http://www.cnn.com/2010/TECH/innovation/11/19/sensors.aging/

28 November 2010

When the Playroom is the Computer

For all the work that’s gone into developing educational media, even the most stimulating TV shows and video games leave kids stationary. Researchers at the MIT Media Laboratory are hoping to change that with a system called Playtime Computing, which gives new meaning to the term ‘computing environment’. The prototype of the Playtime Computing system consists mainly of three door-high panels with projectors behind them; a set of ceiling-mounted projectors that cast images onto the floor; and a cube-shaped, remote-controlled robot, called the Alphabot, with infrared emitters at its corners that are tracked by cameras mounted on the ceiling. But the system is designed to make the distinctions between its technical components disappear. The three panels together offer a window on a virtual world that, courtesy of the overhead projectors, appears to spill into the space in front of them. And most remarkably, when the Alphabot heads toward the screen, it slips into a box, some robotic foliage closes behind it, and it seems to simply continue rolling, at the same speed, up the side of a virtual hill. The system also includes a set of symbols that children can attach to the Alphabot; among them are letters of the Roman alphabet, Japanese characters, a heart and a pair of musical notes. When children attach the notes to the Alphabot, music begins to play from the system’s speakers, illustrating the principle that symbolic reasoning can cut across sensory modalities.
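
A toy sketch of that symbol behaviour: attaching a recognised symbol to the Alphabot triggers a matching response, such as music for the notes. The tag names and actions below are invented for the example; the article does not describe the MIT prototype's internals.

# Illustrative dispatch table mapping attached symbol tags to responses,
# in the spirit of the notes-trigger-music behaviour described above.
def play_music():
    print("playing a melody from the system's speakers")

def show_letter(letter):
    print(f"projecting the letter {letter} into the virtual world")

SYMBOL_ACTIONS = {
    "musical_notes": play_music,
    "letter_A": lambda: show_letter("A"),
    "heart": lambda: print("Alphabot shows a heart animation"),
}

def on_symbol_attached(tag_id: str) -> None:
    action = SYMBOL_ACTIONS.get(tag_id)
    if action:
        action()

on_symbol_attached("musical_notes")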

Another vital element of the system is what the researchers call the Creation Station, a tabletop computer on which children can arrange existing objects or draw their own pictures. Whatever’s on the tabletop can be displayed by the projectors, giving children direct control over their environment. To make the Playtime Computing system even more interactive, the researchers have outfitted baseball caps with infrared light emitters, so that the same system that tracks the Alphabot could also track playing children. That would make it possible for on-screen characters — or a future, autonomous version of the Alphabot — to engage directly with the children. The researchers are eager, however, to begin experimenting with the new Microsoft Kinect, a gaming system that, unlike the Nintendo Wii, uses cameras rather than sensor-studded controllers to track gamers’ gestures. Kinect could offer an affordable means of tracking motion in the Playtime Computing environment without requiring kids to wear hats. The prototype of the Alphabot, the researchers say, uses a few hundred dollars’ worth of off-the-shelf parts, and its price would fall considerably if the robot were mass-produced. The researchers believe that simple, affordable versions of the Playtime Computing system could be designed for home use, while more elaborate versions, with multiple, multifunctional robots, could be used in classrooms or museums.
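
For the overhead tracking, one simple way to turn what a ceiling camera sees into floor coordinates for the projectors is a calibrated homography. The sketch below assumes a placeholder calibration matrix and made-up blob positions; it is a generic illustration, not the Media Lab setup.

# A ceiling camera sees the Alphabot's infrared emitters as bright blobs; a
# precomputed homography maps their image position to floor coordinates so
# the floor projectors can draw around the robot. The matrix is a placeholder.
import numpy as np

H = np.array([[0.01, 0.0, -3.2],      # image (pixels) -> floor (metres)
              [0.0, 0.01, -2.4],
              [0.0, 0.0, 1.0]])

def image_to_floor(u: float, v: float):
    """Project an image-space blob centroid onto floor coordinates."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

def track(blobs):
    """Average the visible corner emitters to estimate the robot's position."""
    pts = [image_to_floor(u, v) for u, v in blobs]
    return tuple(np.mean(pts, axis=0))

print(track([(410, 300), (430, 300), (410, 320), (430, 320)]))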

More information:

http://web.mit.edu/newsoffice/2010/rolling-robot-1122.html

22 November 2010

Robot That Learns Via Touch

Researchers in Europe have created a robot that uses its body to learn how to think. It is able to learn how to interact with objects by touching them, without needing to rely on a massive database of instructions for every object it might encounter. The robot, known as AMAR, is a product of the Europe-wide PACO-PLUS research project and operates on the principle of “embodied cognition,” which relies on two-way communication between the robot’s processor and the sensors in its hands and “eyes.” Embodied cognition enables AMAR to solve problems that were unforeseen by its programmers: when faced with a new task, it investigates ways of moving or looking at things until the processor makes the required connections.
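
As a conceptual illustration only (not PACO-PLUS code), the loop below captures the flavour of that exploration: the robot probes an unfamiliar object with its available actions and keeps whichever one its sensors confirm produces the effect the task needs. Actions, effects and their probabilities are invented.

import random

ACTIONS = ["push", "grasp", "lift", "rotate"]

def try_action(obj, action):
    """Stand-in for executing an action and sensing its effect."""
    effects = {"push": "moved", "grasp": "held", "lift": "raised", "rotate": "turned"}
    return effects[action] if random.random() > 0.2 else "no_change"

def explore(obj, desired_effect, max_tries=50):
    """Probe the object until some action produces the effect the task needs."""
    for _ in range(max_tries):
        action = random.choice(ACTIONS)
        if try_action(obj, action) == desired_effect:
            return action          # the robot now knows how to achieve the effect
    return None

print(explore("unknown_cup", "held"))   # e.g. 'grasp'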

AMAR has learned to recognize common kitchen objects, such as cups of various colors, plates, and boxes of cereal, and it responds to commands to interact with these objects, for example by fetching them or placing them in a dishwasher. One task AMAR has learned to carry out is setting a table, and it can do this even if a cup is placed in its way. The robot worked out that the cup was in its path, that it was movable, and that it would be knocked over if left there, so it moved the cup aside before continuing with its task. The kind of thinking demonstrated by AMAR mimics the way humans perceive their environment in terms that depend on their ability to interact with it physically.
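
A toy version of that table-setting behaviour might look like the sketch below, where the object properties and scene layout are invented for the example: before placing an item, check whether a movable object blocks the target and clear it first.

def plan_place(item, target, scene):
    """Return a plan that clears movable obstacles before placing the item."""
    plan = []
    for obstacle in scene.get(target, []):
        if obstacle["movable"]:
            plan.append(f"move {obstacle['name']} aside")
        else:
            return None                      # cannot complete the task safely
    plan.append(f"place {item} on {target}")
    return plan

scene = {"table_edge": [{"name": "cup", "movable": True}]}
print(plan_place("plate", "table_edge", scene))
# ['move cup aside', 'place plate on table_edge']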

More information:

http://www.physorg.com/news/2010-11-armar-iii-robot-video.html

19 November 2010

Mouse Brain Visualisation

The most detailed magnetic resonance images ever obtained of a mammalian brain are now available to researchers in a free, online atlas of an ultra-high-resolution mouse brain, thanks to work at the Duke Center for In Vivo Microscopy. In a typical clinical MRI scan, each pixel in the image represents a cube of tissue, called a voxel, which is typically 1x1x3 millimeters. The atlas images, however, have more than 300,000 times the resolution of a clinical MRI scan, with voxels that are about 20 micrometers on a side. The interactive images in the atlas will allow researchers worldwide to evaluate the brain from all angles and to assess and share their mouse studies in genetics, toxicology and drug discovery against this reference brain. The atlas's detail reaches a resolution of 21 microns; a micron is a millionth of a meter, or 0.00003937 of an inch.
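
The "more than 300,000 times" figure can be checked by comparing voxel volumes rather than edge lengths, using the quoted 1 x 1 x 3 mm and 20-micrometre numbers:

# Quick check of the resolution figure, comparing voxel volumes.
clinical_voxel_um3 = 1000 * 1000 * 3000      # 1 x 1 x 3 mm clinical voxel, in cubic micrometres
atlas_voxel_um3 = 20 ** 3                    # 20-micrometre isotropic atlas voxel
print(clinical_voxel_um3 / atlas_voxel_um3)  # 375000.0, i.e. "more than 300,000 times"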

The atlas used three different magnetic resonance microscopy protocols on the intact brain, followed by conventional histology, to highlight different structures in the reference brain. The brains were scanned using an MR system operating at a magnetic field more than 6 times higher than is routinely used in the clinic. The images were acquired on fixed tissue, with the brain left in the cranium to avoid the distortion that occurs when tissues are thinly sliced for conventional histology. The new Waxholm Space brain can be digitally sliced along any plane or angle, so researchers can precisely visualize any region of the brain, along any axis, without loss of spatial resolution. The team was also able to digitally segment 37 unique brain structures using the three different data acquisition strategies.
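
Digital reslicing of this kind is conceptually simple: sample the stored volume along any chosen plane with interpolation instead of cutting tissue. The sketch below shows the general idea on a synthetic volume; it is not the Duke pipeline, and the plane parameters are arbitrary.

# Sample a 3-D image volume along an arbitrary plane defined by an origin and
# two in-plane direction vectors, using trilinear interpolation.
import numpy as np
from scipy.ndimage import map_coordinates

def oblique_slice(volume, origin, u, v, size=256, spacing=1.0):
    """Return a size x size image sampled on the plane origin + i*u + j*v."""
    i, j = np.meshgrid(np.arange(size) - size / 2,
                       np.arange(size) - size / 2, indexing="ij")
    coords = (np.asarray(origin)[:, None, None]
              + np.asarray(u)[:, None, None] * i * spacing
              + np.asarray(v)[:, None, None] * j * spacing)
    return map_coordinates(volume, coords, order=1)

# Example with a synthetic volume:
vol = np.random.rand(128, 128, 128)
img = oblique_slice(vol, origin=(64, 64, 64), u=(1, 0, 0), v=(0, 0.7, 0.7))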

More information:

http://mouse.brain-map.org/

http://www.civm.duhs.duke.edu/neuro201001/

http://www.sciencedaily.com/releases/2010/10/101025123906.htm

16 November 2010

3D Maps of Brain Wiring

A new tool makes it possible to view a complete picture of the brain’s winding neural pathways and their connections without having to operate: doctors can virtually browse along the spaghetti-like ‘wiring’ of the brain. Knowing accurately where the main nerve bundles in the brain are located is of immense importance for neurosurgeons. One example is ‘deep brain stimulation’, with which the tremors of patients with Parkinson’s disease can be suppressed.

With this new tool, it is possible to determine exactly where to place the stimulation electrode in the brain. The guiding map has been improved: now that the roads on the map are visible, it is much clearer where to stick the needle. The technique may also yield many new insights into neurological and psychiatric disorders. And it is important for brain surgeons to know in advance where the critical nerve bundles are, so that they can avoid damaging them.
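
One way such a wiring map can feed into planning is a simple trajectory check: step along the proposed electrode path through a labelled volume and flag any voxel marked as a critical bundle. The labels and geometry below are invented for illustration and do not come from the TU/e tool.

import numpy as np

def trajectory_hits_bundle(label_volume, entry, target, critical_label=1, steps=200):
    """Return True if the straight entry->target path crosses a critical bundle."""
    entry, target = np.asarray(entry, float), np.asarray(target, float)
    for t in np.linspace(0.0, 1.0, steps):
        x, y, z = np.round(entry + t * (target - entry)).astype(int)
        if label_volume[x, y, z] == critical_label:
            return True
    return False

labels = np.zeros((64, 64, 64), dtype=int)
labels[30:34, 20:50, 32] = 1                     # a labelled fibre bundle
print(trajectory_hits_bundle(labels, (10, 10, 32), (50, 40, 32)))   # True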

More information:

http://w3.tue.nl/en/news/news_article/?tx_ttnews[tt_news]=10122&tx_ttnews[backPid]=361&cHash=e497383d04

15 November 2010

Taking Movies Beyond Avatar

A new virtual-camera development at the University of Abertay Dundee builds on the pioneering work of James Cameron’s blockbuster Avatar using a Nintendo Wii-like motion controller – all for less than £100. Avatar, the highest-grossing film of all time, used several completely new filming techniques to bring its ultra-realistic 3D action to life. Now computer games researchers have found a way of taking those techniques further using home computers and motion controllers. James Cameron invented a new way of filming called Simul-cam, in which the recorded image is processed in real time before it reaches the director’s monitor screen. This allows actors in motion-capture suits to be seen instantly as the blue Na’vi characters, without days spent creating computer-generated images. The Abertay researchers have linked the power of a virtual camera – where a computer dramatically enhances what a film camera could achieve – to a motion sensor.

This allows completely intuitive, immediately responsive camera actions within any computer-generated world. The applications of the project are substantial. Complex films and animations could be produced at very low cost, giving new creative tools to small studios or artists at home. Computer environments can be manipulated in the same way as a camera, opening new opportunities for games and for education. The tool uses electromagnetic sensors to capture the controller’s position to single-millimetre accuracy, and unlike other controllers it still works even when an object is in the way. It will work on any home PC, and is expected to retail for under £100 from early 2011. A patent application for the invention and unique applications of the technology has recently been filed in the UK.
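
The core idea of driving a virtual camera from a tracked controller can be sketched in a few lines: read the sensor's position and orientation each frame and use them directly as the camera pose in the computer-generated scene. This is a generic illustration, not Abertay's implementation.

import numpy as np

def view_matrix(position, rotation):
    """Build a 4x4 world-to-camera matrix from a tracked controller pose.

    position: (x, y, z) in metres; rotation: 3x3 orientation matrix from the sensor.
    """
    R = np.asarray(rotation)
    t = np.asarray(position)
    view = np.eye(4)
    view[:3, :3] = R.T                 # the inverse of a rotation is its transpose
    view[:3, 3] = -R.T @ t
    return view

# Example: controller held 1.5 m up and 2 m back, aligned with the world axes.
pose = view_matrix((0.0, 1.5, 2.0), np.eye(3))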

More information:

http://www.abertay.ac.uk/about/news/newsarchive/2010/name,6983,en.html

11 November 2010

Robotic Limbs that Plug into the Brain

Most of the robotic arms now in use by some amputees are of limited practicality; they have only two to three degrees of freedom, so the user can make just a single movement at a time. And they are controlled with conscious effort, meaning the user can do little else while moving the limb. A new generation of much more sophisticated and lifelike prosthetic arms, sponsored by the Department of Defense's Defense Advanced Research Projects Agency (DARPA), may be available within the next five to 10 years. Two different prototypes that move with the dexterity of a natural limb and can theoretically be controlled just as intuitively--with electrical signals recorded directly from the brain--are now beginning human tests. The new designs have about 20 degrees of independent motion, a significant leap over existing prostheses, and they can be operated via a variety of interfaces. One device, developed by DEKA Research and Development, can be consciously controlled using a system of levers in a shoe.

In a more invasive but also more intuitive approach, amputees undergo surgery to have the remaining nerves from their lost limbs moved to the muscles of the chest. Thinking about moving the arm contracts the chest muscles, which in turn moves the prosthesis. But this approach only works in those with enough remaining nerve capacity, and it provides a limited level of control. To take full advantage of the dexterity of these prostheses, and make them function like a real arm, scientists want to control them with brain signals. Limited testing of neural implants in severely paralyzed patients has been underway for the last five years. About five people have been implanted with chips to date, and they have been able to control cursors on a computer screen, drive a wheelchair, and even open and close a gripper on a very simple robotic arm. More extensive testing in monkeys implanted with a cortical chip shows the animals can learn to control a relatively simple prosthetic arm in a useful way, using it to grab and eat a piece of marshmallow.
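
A common baseline for this kind of neural decoding (not necessarily what these trials used) is a linear map from recorded firing rates to cursor or hand velocity, fitted on a short calibration block and then applied in real time. The channel count, data and weights below are synthetic.

import numpy as np

rng = np.random.default_rng(0)
rates = rng.poisson(5.0, size=(500, 96)).astype(float)   # 500 samples, 96 channels
true_W = rng.normal(size=(96, 2))
velocity = rates @ true_W + rng.normal(scale=0.5, size=(500, 2))

# Least-squares fit of the decoder weights on the calibration block.
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

def decode(firing_rates):
    """Map a vector of instantaneous firing rates to a 2-D velocity command."""
    return firing_rates @ W

print(decode(rates[0]))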

More information:

http://www.technologyreview.com/biomedicine/26622/

08 November 2010

Moving Holograms

A team of optical sciences researchers has developed a new type of holographic telepresence that allows the projection of a three-dimensional, moving image without the need for special eyewear such as 3D glasses or other auxiliary devices. The technology is likely to take applications such as telemedicine, advertising, updatable 3D maps and entertainment to a new level. Holographic telepresence means that a three-dimensional image can be recorded in one location and shown in another location, in real time, anywhere in the world. The prototype device uses a 10-inch screen, but researchers are already successfully testing a much larger version with a 17-inch screen. The image is recorded using an array of regular cameras, each of which views the object from a different perspective. The more cameras that are used, the more refined the final holographic presentation will appear.

That information is then encoded onto a fast-pulsed laser beam, which interferes with another beam that serves as a reference. The resulting interference pattern is written into the photorefractive polymer, creating and storing the image. Each laser pulse records an individual hogel (holographic pixel) in the polymer; a hogel is the three-dimensional equivalent of a pixel, the basic unit that makes up a picture. The hologram fades away by natural dark decay after a couple of minutes or seconds, depending on experimental parameters, or it can be erased by recording a new 3D image, which creates a new diffraction structure and deletes the old pattern. Because of the short pulse duration, the overall recording setup is insensitive to vibration, and it is therefore suited to industrial environments without any special need for vibration, noise or temperature control.
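
For background, the pattern being written is the standard two-beam hologram intensity; the exact beam geometry and polymer response of the Arizona system are not given in the article:

    I(r) = |E_o + E_r|^2 = |E_o|^2 + |E_r|^2 + 2 |E_o| |E_r| cos(Δφ(r))

where E_o is the object beam carrying the camera data, E_r is the reference beam, and Δφ(r) is their phase difference at each point. It is this cosine term, stored as a refractive-index modulation in the photorefractive polymer, that constitutes each hogel.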

More information:

http://uanews.org/node/35220