28 August 2011

Build Music With Blocks

Researchers at the University of Southampton have developed a new way to generate music and control computers. Audio d-touch is based on tangible user interfaces, or TUIs, which give physical control over the immaterial world of computers. It uses a standard computer and a web cam. Using simple computer vision techniques, physical blocks are tracked on a printed board. The position of the blocks then determines how the computer samples and reproduces sound.
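The mapping step can be sketched in a few lines. The following is a hypothetical illustration, not the actual Audio d-touch code: it assumes the vision stage has already reported each block's position on the board, and maps those coordinates onto the parameters of a simple step sequencer (all names and constants are invented).

```python
# Toy sketch: board position -> sampler parameters (invented constants).

BOARD_W, BOARD_H = 640, 480   # assumed webcam frame size in pixels
STEPS = 16                    # assumed 16-step loop sequencer

def block_to_event(x, y):
    """Map a tracked block's board position to a sequencer event.

    Horizontal position selects the time step in the loop;
    vertical position sets playback volume (top of board = loud).
    """
    step = min(STEPS - 1, int(x / BOARD_W * STEPS))
    volume = round(1.0 - y / BOARD_H, 2)
    return {"step": step, "volume": volume}

# Three blocks reported by the (not shown) vision stage:
events = [block_to_event(x, y) for x, y in [(20, 40), (320, 240), (620, 460)]]
```

Moving a block to the right would thus delay its sample within the loop, while sliding it downward would make it quieter.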

Audio d-touch is not just for play: TUIs offer an alternative to purely virtual interaction. Human-Computer Interaction researchers are investigating ways to move away from the online, purely digital world and rediscover the richness of our sense of touch. All that is needed is a regular computer equipped with a web cam and a printer. The user creates physical interactive objects and attaches printed visual markers recognized by Audio d-touch. The software platform is open and can be extended to applications beyond music synthesis.

More information:


27 August 2011

Virtual Touch Feels Tumours

Tactile feedback technology could give keyhole surgeons a virtual sense of feeling tumours while operating. A Leeds University study has combined computer virtualisation with a device that simulates pressure on a surgeon's hand when touching human tissue remotely. This could enable a medic to handle a tumour robotically and judge whether it is malignant or benign. Cancer specialists hope the new system will help to improve future treatment. In current keyhole procedures, a surgeon operates through a tiny incision in the patient's body, guided only by video images. Using keyhole techniques, as opposed to major invasive surgery, helps improve healing and patient recovery. However, surgeons can't feel the tissue they are operating on - a sense that could help them find and categorise tumours.

The team of undergraduates at Leeds University has devised a solution that combines a computer-generated virtual simulation with a hand-held haptic feedback device. The system works by varying feedback pressure on the user's hand when the density of the tissue being examined changes. In tests, team members simulated tumours in a human liver using a soft block of silicone embedded with ball bearings. The user was able to locate these lumps using haptic feedback. Engineers hope this will one day allow a surgeon to feel for lumps in tissue during surgery. The project has just been declared one of four top student designs in a global competition run by US technology firm National Instruments.
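The core idea, feedback pressure that rises with the density of the tissue under the probe, can be illustrated with a toy spring model. This is a deliberate simplification with assumed stiffness values, not the Leeds team's actual model:

```python
# Toy haptic model: a denser lump pushes back harder (Hooke's law).
# The stiffness constants below are invented for illustration.

SOFT_K = 50.0    # N/m, assumed stiffness of healthy tissue
LUMP_K = 400.0   # N/m, assumed stiffness of an embedded lump

def feedback_force(depth_m, over_lump):
    """Spring-like force on the user's hand for a probe pressed
    depth_m metres into the tissue."""
    k = LUMP_K if over_lump else SOFT_K
    return k * depth_m

# Pressing 5 mm into the tissue: the lump feels eight times harder.
soft = feedback_force(0.005, over_lump=False)   # 0.25 N
hard = feedback_force(0.005, over_lump=True)    # 2.0 N
```

Sweeping such a probe across the simulated liver and noting where the resistance jumps is, in essence, how a user locates the hidden lumps.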

More information:


22 August 2011

Robots Mimic Animals

Until recently, most robots could be thought of as belonging to one of two phyla. The Widgetophora, equipped with claws, grabs and wheels, stuck to the essentials and did not try too hard to look like anything other than machines. The Anthropoidea, by contrast, did their best to look like their creators—sporting arms with proper hands, legs with real feet, and faces. The few animal-like robots that fell between these extremes were usually built to resemble pets and were, in truth, not much more than just amusing toys. They are toys no longer, though, for it has belatedly dawned on robot engineers that they are missing a trick. The great natural designer, evolution, has come up with solutions to problems that neither the Widgetophora nor the Anthropoidea can manage. Why not copy these proven models, the engineers wondered, rather than trying to outguess 4 billion years of natural selection? The result has been a flourishing of animal-like robots. It is not just dogs that engineers are copying now, but shrews complete with whiskers, swimming lampreys, grasping octopuses, climbing lizards and burrowing clams.

They are even trying to mimic insects, making robots that take off when they flap their wings. As a consequence, the Widgetophora and the Anthropoidea are being pushed aside: the phylum Zoomorpha is on the march. Researchers at the Sant’Anna School of Advanced Studies in Pisa are a good example of this trend. They lead an international consortium that is building a robotic octopus. To create their artificial cephalopod they started with the animal’s literal and metaphorical killer app: its flexible, pliable arms. In a vertebrate’s arm, muscles do the moving and bones carry the weight. An octopus arm, though, has no bones, so its muscles must do both jobs. Its advantage is that, besides grasping things tightly, it can also squeeze into nooks and crannies that are inaccessible to vertebrate arms of similar dimensions. After studying how octopus arms work, the researchers came up with an artificial version that behaves the same way. Its outer casing is made of silicone and is fitted with pressure sensors so that it knows what it is touching. Inside this casing are cables and springs made of a specially elastic nickel-titanium alloy. The result can wrap itself around an object with a movement that strikingly resembles that of the original.

More information:


21 August 2011

Chips That Behave Like Brains

Computers, like humans, can learn. But when Google tries to fill in your search box based on only a few keystrokes, or your iPhone predicts words as you type a text message, it's only a narrow mimicry of what the human brain is capable of. The challenge in training a computer to behave like a human brain is both technological and physiological, testing the limits of computer and brain science. But researchers from IBM Corp. say they've made a key step toward combining the two worlds. The company announced Thursday that it has built two prototype chips that it says process data more like the way humans digest information than the chips that now power PCs and supercomputers.

The chips represent a significant milestone in a six-year-long project that has involved 100 researchers and some $41 million in funding from the government's Defense Advanced Research Projects Agency, or DARPA. IBM has also committed an undisclosed amount of money. The prototypes offer further evidence of the growing importance of "parallel processing," or computers doing multiple tasks simultaneously. That is important for rendering graphics and crunching large amounts of data. The uses of the IBM chips so far are prosaic, such as steering a simulated car through a maze, or playing Pong. It may be a decade or longer before the chips make their way out of the lab and into actual products.

More information:


17 August 2011

Virtual People Get ID Checks

Using both visual and behavioural characteristics, researchers hope to develop techniques for checking whether digital characters are who they claim to be. Such information could be used in situations where login details are not visible, or for law enforcement. Impersonation of avatars is expected to become a growing problem as real life and cyberspace increasingly merge. Avatars are typically used to represent players in online games such as World of Warcraft and in virtual communities like Second Life. As their numbers grow, it will become important to find ways to identify those we meet regularly, according to researchers from the University of Louisville. Working out whether an avatar's controller is male or female has an obvious commercial benefit.

But discovering that the same person controlled different avatars in separate spaces would be even more useful. As avatars proliferate, we will need ways of telling one from another. The technology may also have implications for security if a game account is hacked and stolen. Behavioural analysis could help prove whether an avatar is under the control of its usual owner by watching to see if it acts out of character. The research looked at monitoring for signature gestures, movements and other distinguishing characteristics. Researchers discovered that the lack of possible variations in an avatar's digital face, when compared to a real human's, made identification tricky. However, those limited options are relatively simple to measure, because of the straightforward geometries involved in computer-generated images.
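A behavioural check of the kind described could, in its simplest form, compare observed gesture timings against a stored profile. The sketch below is purely illustrative; the feature, threshold and function names are invented, not the Louisville researchers' method:

```python
# Toy behavioural check: flag an avatar whose signature-gesture timing
# drifts too far from its stored profile (all values invented).

def out_of_character(profile, observed, tol=0.2):
    """Compare observed intervals (seconds between repetitions of a
    signature gesture) with the stored behavioural profile; flag the
    avatar if the mean absolute deviation exceeds tol."""
    dev = sum(abs(p - o) for p, o in zip(profile, observed)) / len(profile)
    return dev > tol

stored = [1.2, 0.8, 1.5]                    # hypothetical stored profile
usual = out_of_character(stored, [1.25, 0.75, 1.5])   # small drift: False
impostor = out_of_character(stored, [2.0, 1.5, 0.5])  # large drift: True
```

A real system would track many such features at once (gesture shape, movement paths, facial geometry) rather than a single timing vector.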

More information:


16 August 2011

Computers Synthesize Sounds

Computer-generated imagery usually relies on recorded sound to complete the illusion. Recordings can, however, limit the range of sounds you can produce, especially in future virtual reality environments where you can't always know ahead of time what the action will be. Researchers developed computer algorithms to synthesize sound on-the-fly based on simulated physics models. Now they have devised methods for synthesizing more realistic sounds of hard objects colliding and the roar of fire. To synthesize collision sounds, the computer calculates the forces computer-generated objects would exert if they were real, how those forces would make the objects vibrate and how those vibrations transfer to the air to make sound. Previous efforts often assumed that the contacting objects were rigid, but in reality, there is no such thing as a rigid object, researchers say. Objects vibrate when they collide, which can produce further chattering and squeaking sounds.

Resolving all the frictional contact events between rapidly vibrating objects is computationally expensive. To speed things up, their algorithm simulates only the fraction of contacts and vibrations needed to synthesize the sound. Demonstrations include the sound of a ruler overhanging the edge of a table and buzzing when plucked; pounding on a table to make dishes clatter and ring; and the varied sounds of a Rube Goldberg machine that rolls marbles into a cup, which moves a lever, which pushes a bunny into a shopping cart that rolls downhill. Fire is animated by mimicking the chemical reactions and fluid-like flow of burning gases. But flame sounds come from events that happen very rapidly in the expanding gases, and computer animators do not need to model those costly details to get good-looking flames. They demonstrated the method with a fire-breathing dragon statue, a candle in the wind, a torch swinging through the air, a jet of flame injected into a small chamber and a burning brick. The last simulation was run with several variations of the sound-synthesis method, and the results were compared with a high-speed video and sound recording of a real burning brick.
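Collision sounds of this kind are commonly built on modal models: a struck object rings as a sum of exponentially damped sinusoids, one per vibration mode. The sketch below shows that general technique with made-up mode parameters; it is a textbook illustration, not the researchers' implementation:

```python
import math

def modal_impact(modes, duration=0.05, sr=8000):
    """Synthesize an impact as a sum of damped sinusoids, one per mode:
    s(t) = sum_i a_i * exp(-d_i * t) * sin(2*pi*f_i * t)."""
    n = int(duration * sr)
    out = []
    for i in range(n):
        t = i / sr
        s = sum(a * math.exp(-d * t) * math.sin(2 * math.pi * f * t)
                for f, d, a in modes)
        out.append(s)
    return out

# Two hypothetical modes of a struck dish: (frequency Hz, damping, amplitude)
samples = modal_impact([(1200.0, 80.0, 1.0), (3100.0, 120.0, 0.4)])
```

The contact forces computed by the physics simulation decide when and how strongly each object's modes are excited; the modal sum then turns those excitations into audible ringing.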

More information:


06 August 2011

Robots With Ability to Learn

Researchers with the Hasegawa Group at the Tokyo Institute of Technology have created a robot that is capable of applying learned concepts to perform new tasks. Using a type of self-replicating neural technology they call the Self-Organizing Incremental Neural Network (SOINN), the team has released a video demonstrating the robot’s ability to understand its environment and to carry out instructions that it previously didn’t know how to perform. The robot, apparently unnamed because it is the neural technology, not the robot itself, that is being demonstrated, figures out what to do next in a given situation by storing information in a network constructed to mimic the human brain. For example, the team demonstrates the technology by asking the robot to fill a cup with water from a bottle, which it does quickly and ably. This part is nothing new: the robot is simply following predefined instructions. On the next go-round, however, the robot is asked to cool the beverage while in the middle of carrying out the same instructions as before. This time, the robot has to pause to consider what it must do to carry out the new request. It immediately sees that it cannot do so under the current circumstances, because both of its hands are already in use (one holding the cup, the other the bottle). So it sets the bottle down, then reaches over to retrieve an ice cube, which it promptly deposits in the cup.
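The precondition check in that demonstration can be caricatured in a few lines. This toy planner is not SOINN itself, only an illustration of the behaviour described; every name in it is invented:

```python
# Toy planner: before a new action, check whether a hand is free
# and, if not, free one first (as in the ice-cube demonstration).

def plan(new_action, hands):
    """hands maps hand name -> held object (None if free). Returns the
    ordered list of primitive steps needed to perform new_action."""
    steps = []
    if all(obj is not None for obj in hands.values()):
        # Both hands full: set the bottle down to free one.
        hand = next(h for h, obj in hands.items() if obj == "bottle")
        steps.append(("put_down", "bottle"))
        hands[hand] = None
    steps.append(new_action)
    return steps

steps = plan(("add", "ice"), {"left": "cup", "right": "bottle"})
```

The interesting part of the real system is that this ordering is not hand-coded: the network accumulates such situation-action knowledge incrementally as the robot gains experience.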

This little demonstration, while not all that exciting to watch, represents a true leap forward in robotics technology and programming. Being able to learn means that the robot can be programmed with just a very basic set of pre-knowledge that is then built upon for as long as the robot exists, without additional programming; not unlike how human beings start out with very little information at birth and build on what they know over a lifetime. The robot has an advantage, though, because it is able to learn not only from its own experiences but also from those of other robots all over the world. This is because it can be connected to the internet, where it can research how to do things, just as we humans already do. In addition, it could conceivably learn from other robots just like it that have already mastered the task at hand. As an example, one of the research team members describes a situation where a robot given to an elderly man as a nurse is asked to make him some tea. If the robot doesn’t know how, it could simply ask another robot online that does. Remarkably, the first robot could do so even if it is trying to make English tea and the robot answering the query has only ever made Japanese tea. The lessons the first robot has learned over time would allow it to adapt. That is why this breakthrough is so important: given enough time and experience, robots may finally be able to do all those things we’ve been watching them do in science fiction movies, and likely more.

More information:


01 August 2011

Turning Thought into Motion

Brain cap technology being developed at the University of Maryland allows users to turn their thoughts into motion. Researchers have created a non-invasive, sensor-lined cap with neural interface software that soon could be used to control computers, robotic prosthetic limbs, motorized wheelchairs and even digital avatars. The potential and rapid progression of the UMD brain cap technology can be seen in a host of recent developments, including a just published study in the Journal of Neurophysiology, new grants from the National Science Foundation (NSF) and National Institutes of Health, and a growing list of partners that includes the University of Maryland School of Medicine, the Veterans Affairs Maryland Health Care System, the Johns Hopkins University Applied Physics Laboratory, Rice University and Walter Reed Army Medical Center's Integrated Department of Orthopaedics & Rehabilitation.

Researchers use EEG to non-invasively read brain waves and translate them into movement commands for computers and other devices. They are also collaborating on a rapidly growing cadre of projects with researchers at other institutions to develop thought-controlled robotic prosthetics that can assist victims of injury and stroke. They have tracked the neural activity of people on a treadmill doing precise tasks like stepping over dotted lines. The researchers are matching specific brain activity recorded in real time with exact lower-limb movements. This data could help stroke victims in several ways. People who are less mobile commonly suffer from other health issues such as obesity, diabetes or cardiovascular problems, so getting them moving again, by whatever means possible, is important. A second use of the EEG data, decoding the motion of a normal gait, offers exciting possibilities for stroke victims.
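In spirit, this kind of brain-computer interface reduces to decoding a command from measured signal features. The toy linear decoder below illustrates that idea with invented weights and thresholds; real systems use many electrode channels and carefully trained models:

```python
# Toy EEG decoder: weighted sum of signal features -> movement command.
# Weights and threshold are invented for illustration.

THRESHOLD = 0.5  # assumed decision threshold

def decode_command(features, weights):
    """Score per-channel features; the score's sign and size pick a command."""
    score = sum(f * w for f, w in zip(features, weights))
    if score > THRESHOLD:
        return "step"
    if score < -THRESHOLD:
        return "stop"
    return "hold"

weights = [0.8, -0.3, 0.5]                       # hypothetical trained weights
command = decode_command([1.0, 0.2, 0.1], weights)  # score 0.79 -> "step"
```

Matching recorded brain activity against known lower-limb movements, as the treadmill experiments do, is essentially how such weights would be learned.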

More information: