30 January 2012

Life-Like Robot

Canadian scientists are developing a robot that mimics the expressions of the human face and the tactile capabilities of the human hand, which they say will be useful in areas like nursing, nuclear plant maintenance, and explosive device disposal. A key part of the technology is a new biology-inspired touch-sensitive artificial skin that is able to sense contact, as well as the profile, temperature and elasticity of object surfaces, ultimately raising the tactile sensitivity of robots to the human level. The artificial skin is made of elastic silicone and embedded with tactile and temperature sensors. Researchers are using a robot as their test subject, methodically replacing its mechanical parts with more life-like parts they are designing.
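
The article does not spell out how the skin computes elasticity, but a toy calculation shows the idea. The sketch below is our own illustration, not the researchers' algorithm: it treats the touched surface as a linear spring and estimates stiffness from a contact-force reading and the measured indentation depth.

```python
# Toy illustration (not the researchers' algorithm): treat the touched
# surface as a linear spring, so stiffness k = F / x (Hooke's law).
def estimate_stiffness(force_n: float, indentation_m: float) -> float:
    """Return surface stiffness in N/m from force and indentation readings."""
    return force_n / indentation_m

# Example: a soft surface that indents 2 mm under 0.5 N of contact force.
print(estimate_stiffness(0.5, 0.002))  # 250.0 N/m, i.e. reads as "soft"
```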


They will start with the head and then the hands. They are designing some of the mechanical and electronic sensor elements for devices, such as intricate prosthetic limbs, that can convey large amounts of information through a sense of touch. Researchers are also mounting a set of actuators on various parts of a newly acquired, anatomically correct model of a human skull — complete with a spring-loaded jaw that replicates the movement of the lower face. The actuators will then be covered with an elastic skin. The aim is to produce a highly life-like face, capable of representing complex human expressions ranging from surprise to anger.

More information:

http://www.cbc.ca/news/canada/ottawa/story/2012/01/20/tech-robot-ottawa.html

21 January 2012

Robots for Brain Surgery

An EU-funded team of researchers has developed a robot able to help neurosurgeons perform keyhole brain surgery. The robot is accurate and repeatable in its movements, offering 13 types of movement compared to the 4 available to human hands, as well as 'haptic' feedback - physical cues allowing physicians to assess tissue and perceive the amount of force applied during surgery. The ROBOCAST ('Robot and sensors integration as guidance for enhanced computer assisted surgery and therapy') project received EUR 3.45 million under the 'Information and communication technologies' (ICT) Theme of the EU's Seventh Framework Programme (FP7). Led by the Politecnico di Milano in Italy, the ROBOCAST partners targeted the development of ICT scientific methods and techniques for support in keyhole brain surgery. They developed hardware, what experts call mechatronics, that makes up the robot's body and nervous system, as well as software that provides its intelligence. The software comprises a multiple-robot system, an independent trajectory planner, an advanced controller and a set of field sensors. The ROBOCAST consortium developed the mechatronic phase of the project as a modular system with two robots and one active biomimetic probe. These were integrated into a sensory motor framework to run as one unit.


The first robot positions its miniature companion robot through six degrees of freedom (DOF): three translational movements (left-right, up-down and backward-forward) and three rotational movements (tilting forward and backward, rolling side to side, and turning left to right). Together these allow it to place its companion anywhere in three-dimensional space. The robot, say the researchers, can also reduce the tremor of a surgeon's hands by a factor of up to 10. The miniature robot holds the probe that is used through the keyhole. The partners say optical trackers are located at the end of the probe, as well as on the patient. The robot manages the force applied and controls the probe's position using a combination of sensors, which together determine the trajectory of the surgical work. The robot's accuracy was tested during keyhole surgery trials on dummies. The team believes this robot can be used to help physicians treat their patients for epilepsy, Tourette's syndrome and Parkinson's disease. The researchers say the path the robot follows inside the brain is determined on the basis of a risk atlas as well as the evaluation of preoperative diagnostic information. The ROBOCAST team, which presented a robot model earlier this year, comprises experts from Germany, Israel, Italy and the United Kingdom. Future research plans include investigating robotic neurosurgery for patients who would remain conscious during their surgery.
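
Those six degrees of freedom can be pictured as a rigid-body pose: three translations and three rotations combined into a single transform. The following Python/numpy sketch is our own illustration, not ROBOCAST code; the function name and the roll-pitch-yaw convention are assumptions.

```python
import numpy as np

def pose(tx, ty, tz, roll, pitch, yaw):
    """Build a 4x4 rigid transform from 3 translations and 3 rotations (radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])    # roll
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])    # pitch
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])    # yaw
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx    # combined rotation
    T[:3, 3] = [tx, ty, tz]     # translation
    return T

# e.g. place the miniature robot 5 cm forward, tilted 10 degrees
print(pose(0.05, 0.0, 0.0, 0.0, np.radians(10), 0.0))
```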

More information:

http://www.robocast.eu/

http://cordis.europa.eu/fetch?CALLER=EN_NEWS_FP7&ACTION=D&DOC=9&CAT=NEWS&QUERY=0134fb57071f:ca0b:2083cf71&RCN=34211

20 January 2012

Faster FFT

The Fourier transform is one of the most fundamental concepts in the information sciences. It’s a method for representing an irregular signal — such as the voltage fluctuations in the wire that connects an MP3 player to a loudspeaker — as a combination of pure frequencies. It’s universal in signal processing, but it can also be used to compress image and audio files, solve differential equations and price stock options, among other things. The reason the Fourier transform is so prevalent is an algorithm called the fast Fourier transform (FFT), devised in the mid-1960s, which made it practical to calculate Fourier transforms on the fly. Ever since the FFT was proposed, however, people have wondered whether an even faster algorithm could be found. At the Association for Computing Machinery’s Symposium on Discrete Algorithms (SODA) this week, a group of MIT researchers will present a new algorithm that, in a large range of practically important cases, improves on the fast Fourier transform. Under some circumstances, the improvement can be dramatic — a tenfold increase in speed. The new algorithm could be particularly useful for image compression, enabling, say, smartphones to wirelessly transmit large video files without draining their batteries or consuming their monthly bandwidth allotments. Like the FFT, the new algorithm works on digital signals. A digital signal is just a series of numbers — discrete samples of an analog signal, such as the sound of a musical instrument. The FFT takes a digital signal containing a certain number of samples and expresses it as the weighted sum of an equivalent number of frequencies. ‘Weighted’ means that some of those frequencies count more toward the total than others.
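
To make "weighted sum of frequencies" concrete, here is a minimal Python/numpy sketch using the ordinary FFT (not the new MIT algorithm; the signal and variable names are our own): a 64-sample signal built from a strong tone and a weak one, whose transform puts nearly all of its weight in just a few frequency bins.

```python
import numpy as np

n = 64
t = np.arange(n) / n
# signal: a strong 4 Hz tone plus a weak 9 Hz tone
signal = 3.0 * np.sin(2 * np.pi * 4 * t) + 0.5 * np.sin(2 * np.pi * 9 * t)

weights = np.fft.fft(signal)        # one complex weight per frequency
magnitudes = np.abs(weights) / n    # how much each frequency "counts"

# the two tones dominate; almost every other weight is ~0 (a sparse signal)
for k in np.argsort(magnitudes)[::-1][:4]:
    print(f"frequency bin {k}: weight {magnitudes[k]:.3f}")
```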


Indeed, many of the frequencies may have such low weights that they can be safely disregarded. That’s why the Fourier transform is useful for compression. An eight-by-eight block of pixels can be thought of as a 64-sample signal, and thus as the sum of 64 different frequencies. But as the researchers point out in their new paper, empirical studies show that on average, 57 of those frequencies can be discarded with minimal loss of image quality. Signals whose Fourier transforms include a relatively small number of heavily weighted frequencies are called ‘sparse’. The new algorithm determines the weights of a signal’s most heavily weighted frequencies; the sparser the signal, the greater the speedup the algorithm provides. Indeed, if the signal is sparse enough, the algorithm can simply sample it randomly rather than reading it in its entirety. The new algorithm relies on two key ideas. The first is to divide a signal into narrower slices of bandwidth, sized so that a slice will generally contain only one frequency with a heavy weight. In signal processing, the basic tool for isolating particular frequencies is a filter. But filters tend to have blurry boundaries: One range of frequencies will pass through the filter more or less intact; frequencies just outside that range will be somewhat attenuated; frequencies still further out will be attenuated more; and so on, until you reach the frequencies that are filtered out almost perfectly. If it so happens that the one frequency with a heavy weight is at the edge of the filter, however, it could end up so attenuated that it can’t be identified. So the researchers’ first contribution was to find a computationally efficient way to combine filters so that they overlap, ensuring that no frequencies inside the target range will be unduly attenuated, but that the boundaries between slices of spectrum are still fairly sharp.
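
The eight-by-eight example can be played out directly. The sketch below is our own JPEG-style illustration rather than the paper's setup, and assumes scipy is available: it transforms a smooth 8x8 block using the DCT (a close cousin of the Fourier transform that JPEG actually uses), keeps only the 7 most heavily weighted of its 64 frequencies, and measures how little the reconstruction changes.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):  return dct(dct(b, norm='ortho', axis=0), norm='ortho', axis=1)
def idct2(c): return idct(idct(c, norm='ortho', axis=0), norm='ortho', axis=1)

# a smooth 8x8 "image" block; most natural blocks look roughly like this
x, y = np.meshgrid(np.arange(8), np.arange(8))
block = 128 + 10 * x + 5 * y

coeffs = dct2(block.astype(float))

# keep only the 7 most heavily weighted of the 64 frequencies, zero the rest
cutoff = np.sort(np.abs(coeffs).ravel())[-7]
compressed = np.where(np.abs(coeffs) >= cutoff, coeffs, 0.0)

print("max pixel error:", np.abs(idct2(compressed) - block).max())
```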

More information:

http://web.mit.edu/newsoffice/2012/faster-fourier-transforms-0118.html

17 January 2012

What Are Memories Made Of?

Neuroscientists have discovered that memories migrate between different regions of the brain, but what do they actually consist of? Imagine being unable to remember the past. Like a fading dream, your current consciousness is lost to eternity. This is the experience of someone suffering from amnesia. Despite otherwise being healthy, they are unable to commit new experiences to memory. Studying the brains of amnesic patients has revealed that, while most regions of the brain play a role in memory, some areas are more crucial than others. There appears to be no single memory store, but instead a diverse taxonomy of memory systems, each with its own special circuitry evolved to package and retrieve that type of memory. Memories are not static entities; over time they shift and migrate between different territories of the brain. At the top of the taxonomical tree, a split occurs between declarative and non-declarative memories. Declarative memories are those you can state as true or false, such as remembering whether you rode a bicycle to work. Non-declarative memories are those that cannot be described as true or false, such as knowing how to ride a bicycle. A central hub in the declarative memory system is a brain region called the hippocampus. This undulating, twisted structure gets its name from its resemblance to a sea horse. Destruction of the hippocampus, through injury, neurosurgery or the ravages of Alzheimer's disease, can result in an amnesia so severe that no events experienced after the damage can be remembered.


However, amnesic patients can show an astounding array of mnemonic abilities, such as learning new skills and habits. For example, a particular route to work, followed repeatedly, can slowly be learned. Such ingrained habits appear to rely on a brain region called the striatum. Amnesic patients can also show an impressive short-term memory. For example, if they concentrate on one piece of information, such as a phone number, they can hold it in mind for many minutes. This ability relies on regions in the neocortex (the convoluted grey matter you see looking at a brain from the outside). Despite being unable to form new long-term memories, many amnesic patients can still access long-term memories formed before the brain damage was inflicted. The further back in time the memory was created, the more likely it is to survive, which results in the uncanny situation where patients cannot remember what they have just done, but are able to reminisce at length about their distant past. It is thought this occurs because the brain doesn't just create, store and retrieve memories; it restructures them. A popular view is that during sleep your hippocampus "broadcasts" its recently captured memories to the neocortex, which updates your long-term store of past experience and knowledge. Eventually the neocortex alone is sufficient to support recall, without relying on the hippocampus. However, there is evidence that vividly picturing a scene in your mind appears to require the hippocampus, no matter how old the memory. We have recently discovered that the hippocampus is not only needed to reimagine the past, but also to imagine the future.

More information:

http://www.guardian.co.uk/lifeandstyle/2012/jan/14/what-are-memories-made-of

13 January 2012

Touchy-Feely Technology

Human-computer interaction, a field of computer science, has seen a spike in consumer demand thanks to a new, seemingly ubiquitous technology: touch. According to the technology, media and telecommunications research firm IHS iSuppli, global shipments of touch-screen cellphones and tablets have gone from 244 million units to 630 million units in just two years. This year, iPad sales nearly quadrupled compared to 2010. The touch explosion has been long in the making; it illustrates a theory researcher Bill Buxton calls The Long Nose of Innovation, which says that much of the innovation behind any technological breakthrough actually takes place over a long period of time.


According to Apple, more than 2,300 school districts in the U.S. have iPad programs for students or teachers. But the benefits of having iPads in the classroom don't come free. Teachers say you have to invest time in the technology in order to get something out of it, which means much of the iPad's usefulness will depend on the applications both teachers and publishers discover as adoption grows. Hospitals are also exploring the usefulness of iPads. At the University of California, San Diego Hospital, physician assistants use the iPad 2 to update a patient who has just received a new kidney on the progress of his recovery.

More information:

http://www.npr.org/2011/12/26/144146395/the-touchy-feely-future-of-technology

11 January 2012

Learning, Teaching via Video Games

In one game, a snake slithers across the screen, eating arithmetic symbols until they equal a desired number. In another, an adventuring character wanders around a virtual world, encountering problem-solving exercises in basic logic. For 17 seniors in a computer science class at the University of Delaware, these probably wouldn't be the games they'd choose to play, but that isn't the point. Instead, the video games created by five teams will help teach middle school students at Chester (Pa.) Community Charter School. A $400,000 National Science Foundation grant has funded the project over the past three years. Education experts called the initiative an innovative approach to expanding access to educational technology.


And the idea could inspire other universities looking to support schools in their local community. The UD students created the games for a special type of laptop geared toward classroom use, but they hope to make the games available as free downloads on a website or an app-store platform. The basis for this work: since so many children spend so much time gaming anyway, why not make it educational? Many educators latched on to the idea of using technology in education several years ago. The question now is how to deliver it properly. As with any resource in education, wealthier students tend to gain easier and more effective access to educational gaming. Education researcher James Paul Gee and other experts worry students in struggling schools will fall behind unless video games are combined with smaller class sizes and more parental involvement.

More information:

http://www.delawareonline.com/article/20120109/NEWS03/201090316/Learning-teaching-via-video-games?odyssey=modnewswelltextp

10 January 2012

Simulating Firefighting Operations

Firefighters often put their lives at risk during operations, so it is essential they have reliable tools to help them do their job. Now, a modular simulation kit is set to help develop new information and communication technologies – and ensure they are tailored to firefighters’ needs from the outset. It takes the highest levels of concentration for emergency workers to fight their way through smoke-filled buildings wearing breathing apparatus and protective suits. Where are the casualties? Where is the nearest exit, in case the crews need to get to safety? Up to now, they have used ropes to retrace their steps, but these can get caught up or wrap themselves around obstacles. Chalk is used to mark which rooms have already been searched, but these markings are often difficult to see through the smoke. What is needed is new technology, such as sensor-based systems, to support the emergency crews during operations where visibility is limited. But such systems, too, carry their own risks: having too much information to hand might confuse crews and be a hindrance. That is why researchers at the Fraunhofer Institute for Applied Information Technology FIT in Sankt Augustin have now developed a set of special simulation methods and tools. These will allow emergency services to test technologies in a realistic environment while they are still in the development phase, so they can tailor them to their specific requirements long before they are needed in earnest. It also gives crews the chance to get used to unfamiliar sources of information while on safe ground. The FireSim method kit is made up of four simulation modules.


The first comprises a role-playing board game which emergency workers can use to play out operations. Players move around on a map of the emergency scene, and the new technologies are represented by special tokens. This allows crews to try out new ideas with a minimum of effort. The second module is like a computer game. Various firefighters each sit at a PC, and on the screen they see the emergency scene from a first-person perspective. The players move through virtual space, opening doors and rescuing the injured, and trying out virtual prototypes of novel support systems – such as sensor nodes that mark out the paths that have already been followed and the rooms that have been searched. According to the researchers, these simulations allow rapid changes to prototypes, which can then be put to the test in complex deployment scenarios; since the whole command hierarchy needs to be taken into account, all communication and coordination processes are recreated in the simulation as far as possible. The third simulation module blends the virtual and the real, with emergency crews playing out a scenario in a real environment, for instance rescuing someone from a smoke-filled building. They carry a system that is integrated into their suit, such as a display in their helmet or on their arm, which provides details of their location and bearings. Meanwhile, a virtual simulation runs in parallel, with helpers re-enacting all the emergency workers’ real actions. New technologies such as the sensor nodes are simulated and the results sent by radio to the firefighters’ displays. In this way, systems of which no physical prototype has yet been built can already be tested in a real environment.
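
As a rough picture of what the simulated sensor nodes in the second module might track, here is a toy Python sketch; the room names, function names and data layout are our own invented stand-ins, not FIT's software.

```python
# Toy sketch of the sensor-node idea (invented names, not FIT's software):
# nodes dropped along the way record which rooms were entered and searched,
# so a helmet display could show the trail back out.
dropped_nodes = []                       # breadcrumb trail, in drop order

def drop_node(room, searched=False):
    """Record a sensor node left in the given room."""
    dropped_nodes.append({"room": room, "searched": searched})

drop_node("entry")
drop_node("hallway")
drop_node("office", searched=True)       # this room has been fully searched

# retrace: follow the nodes in reverse to find the way back to the entry
print("way back:", " -> ".join(n["room"] for n in reversed(dropped_nodes)))
print("searched:", [n["room"] for n in dropped_nodes if n["searched"]])
```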

More information:

http://www.fraunhofer.de/en/press/research-news/2012/january/simulating-firefighting-op.html

09 January 2012

3D Cameras for Cellphones

When Microsoft’s Kinect — a device that lets Xbox users control games with physical gestures — hit the market, computer scientists immediately began hacking it. A black plastic bar about 11 inches wide with an infrared rangefinder and a camera built in, the Kinect produces a visual map of the scene before it, with information about the distance to individual objects. At MIT alone, researchers have used the Kinect to create a “Minority Report”-style computer interface, a navigation system for miniature robotic helicopters and a holographic-video transmitter, among other things. Now imagine a device that provides more-accurate depth information than the Kinect, has a greater range and works under all lighting conditions — but is so small, cheap and power-efficient that it could be incorporated into a cellphone at very little extra cost. That’s the promise of recent work by researchers at MIT’s Research Lab of Electronics. Like other sophisticated depth-sensing devices, the MIT researchers’ system uses the “time of flight” of light particles to gauge depth: A pulse of infrared laser light is fired at a scene, and the camera measures the time it takes the light to return from objects at different distances.
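
The distance arithmetic behind time-of-flight sensing is simple enough to show directly; a minimal sketch of our own, in Python:

```python
# Time-of-flight ranging: a pulse travels to the object and back, so the
# distance is the speed of light times half the round-trip time.
C = 299_792_458.0  # speed of light in m/s

def distance_m(round_trip_s: float) -> float:
    """Distance to an object from the round-trip time of a light pulse."""
    return C * round_trip_s / 2.0

# e.g. a pulse that returns after 20 nanoseconds came from ~3 m away
print(distance_m(20e-9))  # ~2.998 m
```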


Traditional time-of-flight systems use one of two approaches to build up a “depth map” of a scene. LIDAR (for light detection and ranging) uses a scanning laser beam that fires a series of pulses, each corresponding to a point in a grid, and separately measures their time of return. But that makes data acquisition slower, and it requires a mechanical system to continually redirect the laser. The alternative, employed by so-called time-of-flight cameras, is to illuminate the whole scene with laser pulses and use a bank of sensors to register the returned light. But sensors able to distinguish small groups of light particles — photons — are expensive: A typical time-of-flight camera costs thousands of dollars. The MIT researchers’ system, by contrast, uses only a single light detector — a one-pixel camera. But by using some clever mathematical tricks, passing each laser flash through a different checkerboard-like pattern of light and dark squares and inferring the depth map from the combined returns, it can get away with firing the laser a limited number of times. In experiments, the researchers found that the number of laser flashes — and, roughly, the number of checkerboard patterns — that they needed to build an adequate depth map was about 5 percent of the number of pixels in the final image.
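
To make those "clever mathematical tricks" concrete, here is a toy in the spirit of compressed sensing, our own illustration and not the team's algorithm: it assumes the depth map is sparse, substitutes random plus/minus-one patterns for the checkerboards, and recovers the scene from about 5 percent as many measurements as pixels using iterative soft-thresholding (ISTA).

```python
import numpy as np

# Toy compressed-sensing sketch (our illustration, not the MIT method):
# recover a sparse 1,024-pixel "depth map" from ~5% as many
# single-detector measurements, one number per patterned flash.
rng = np.random.default_rng(0)
n, m, k = 1024, 51, 3                        # pixels, flashes, occupied pixels

depth = np.zeros(n)                          # mostly empty scene
depth[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 5.0, size=k)

# random +/-1 patterns stand in for the checkerboards; each flash yields
# one measurement: the pattern-weighted sum seen by the single pixel
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y = A @ depth

# ISTA: iterative soft-thresholding for the l1-regularised inverse problem
L = np.linalg.norm(A, 2) ** 2                # step size from spectral norm
lam = 0.01
x = np.zeros(n)
for _ in range(2000):
    g = x + (A.T @ (y - A @ x)) / L          # gradient step on the data fit
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # sparsify

print("relative error:", np.linalg.norm(x - depth) / np.linalg.norm(depth))
```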

More information:

http://web.mit.edu/newsoffice/2011/lidar-3d-camera-cellphones-0105.html