29 October 2013

Effective Motion Tracking Technology

Researchers at Carnegie Mellon University and Disney Research Pittsburgh have devised a motion tracking technology that could eliminate much of the annoying lag that occurs in existing video game systems that use motion tracking, while also being extremely precise and highly affordable. Called Lumitrack, the technology has two components: projectors and sensors. A structured pattern, which looks something like a very large barcode, is projected over the area to be tracked.


Sensor units, either near the projector or on the person or object being tracked, can then quickly locate movements anywhere in that area. Lumitrack is extremely precise, with sub-millimeter accuracy. Moreover, this performance is achieved at low cost: the sensors require little power and would be inexpensive to assemble in volume. The components could even be integrated into mobile devices, such as smartphones.
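
One way such a pattern can work is to make every short window of the "barcode" unique, so that a sensor reading any few consecutive bits immediately knows where it sits along the pattern. The Python sketch below is a minimal illustration of that idea using a De Bruijn sequence; the window length and lookup scheme are assumptions made for illustration, not the authors' actual encoding.

# Toy position decoding for a window-unique "barcode" pattern.
def de_bruijn_binary(n: int) -> str:
    """Binary De Bruijn sequence: every n-bit window occurs at one position."""
    a = [0] * (2 * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, 2):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return "".join(str(bit) for bit in seq)

WINDOW = 8
pattern = de_bruijn_binary(WINDOW)            # 256-bit pattern
position_of = {pattern[i:i + WINDOW]: i       # window -> position table
               for i in range(len(pattern) - WINDOW + 1)}

def locate(window_bits: str) -> int:
    """A sensor that reads any 8 consecutive bits knows exactly where it is."""
    return position_of[window_bits]

print(locate(pattern[100:108]))               # -> 100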

More information:

21 October 2013

Automatic Speaker Tracking

A central topic in spoken-language-systems research is what’s called speaker diarization, or computationally determining how many speakers feature in a recording and which of them speaks when. Speaker diarization would be an essential function of any program that automatically annotated audio or video recordings. To date, the best diarization systems have used what’s called supervised machine learning: They’re trained on sample recordings that a human has indexed, indicating which speaker enters when. However, MIT researchers describe a new speaker-diarization system that achieves comparable results without supervision: No prior indexing is necessary.


Moreover, one of the MIT researchers’ innovations was a new, compact way to represent the differences between individual speakers’ voices, which could be of use in other spoken-language computational tasks. To create a sonic portrait of a single speaker, the researchers explain, a computer system will generally have to analyze more than 2,000 different speech sounds; many of those may correspond to familiar consonants and vowels, but many may not. To characterize each of those sounds, the system might need about 60 variables, which describe properties such as the strength of the acoustic signal in different frequency bands.
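
As a rough illustration of the pipeline's shape, the Python sketch below computes a crude per-frame feature vector (log energy in 60 frequency bands, a stand-in for the variables described above) and then clusters the frames with no labeled training data at all. Everything here is simplified: "meeting.wav" is a hypothetical mono recording, the number of speakers is assumed rather than inferred, and plain k-means stands in for the researchers' far more sophisticated unsupervised model.

# Toy unsupervised diarization: featurize frames, then cluster them.
import numpy as np
from scipy.io import wavfile
from sklearn.cluster import KMeans

def frame_features(signal, rate, frame_ms=25, hop_ms=10, n_bands=60):
    """Log energy in n_bands frequency bands for each 25 ms frame."""
    frame = int(rate * frame_ms / 1000)
    hop = int(rate * hop_ms / 1000)
    window = np.hanning(frame)
    feats = []
    for start in range(0, len(signal) - frame, hop):
        spectrum = np.abs(np.fft.rfft(signal[start:start + frame] * window))
        bands = np.array_split(spectrum, n_bands)
        feats.append([np.log(np.sum(b ** 2) + 1e-10) for b in bands])
    return np.array(feats)

rate, audio = wavfile.read("meeting.wav")     # hypothetical recording
X = frame_features(audio.astype(float), rate)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)  # 2 speakers assumed
# labels[i] is the speaker hypothesis for the frame starting at i * 10 ms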

More information:

20 October 2013

Kinect of the Future

Massachusetts Institute of Technology researchers have developed a device that can see through walls and pinpoint a person with incredible accuracy. They call it the ‘Kinect of the future’, after Microsoft's Xbox 360 motion-sensing camera. The project from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) used three radio antennas spaced about a meter apart and pointed at a wall. A desk cluttered with wires and circuits generated and interpreted the radio waves. On the other side of the wall, a single person walked around the room, and the system represented that person as a red dot on a computer screen. The system tracked the movements with an accuracy of plus or minus 10 centimeters, which is about the width of an adult hand.
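
The geometry behind such a system can be sketched in a few lines: one antenna transmits, the wave bounces off the person, and each antenna measures the echo's total path length, which places the person on an ellipse; intersecting the ellipses recovers the position. The Python toy below assumes this setup, with a flat 2-D layout, invented coordinates and noise, and none of the hard signal processing (such as cancelling the wall's far stronger reflection).

# 2-D multilateration toy: solve for the person's position from path lengths.
import numpy as np
from scipy.optimize import least_squares

antennas = [np.array([-1.0, 0.0]),   # three antennas, about a meter apart
            np.array([0.0, 0.0]),    # this one also transmits
            np.array([1.0, 0.0])]
tx = antennas[1]

def path_lengths(p):
    """Total distance tx -> person at p -> each antenna."""
    return np.array([np.linalg.norm(p - tx) + np.linalg.norm(p - a)
                     for a in antennas])

true_pos = np.array([2.0, 3.5])      # the person, behind the wall
rng = np.random.default_rng(0)
measured = path_lengths(true_pos) + rng.normal(0.0, 0.05, 3)  # ~5 cm noise

# The collinear antennas leave a mirror-image solution below their axis;
# the initial guess picks the correct side of the wall.
fit = least_squares(lambda p: path_lengths(p) - measured, x0=[1.0, 1.0])
print(fit.x)  # close to (2.0, 3.5), roughly the +/- 10 cm the article cites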


In the room where users walked around, there was white tape on the floor in a circular design. The tape also appeared in the virtual representation of the room on the computer screen. It wasn't being used as an aid to the technology; rather, it showed onlookers just how accurate the system was: as testers walked on the floor design, their movements were mirrored on the computer screen. One drawback of the system is that it can only track one moving person at a time, and the area around the setup needs to be completely free of movement. That meant that when the group wanted to test the system, everyone had to leave the room with the transmitters as well as the surrounding area; only the person being tracked could be nearby.

More information:

13 October 2013

The Human Brain Project

Six months after its selection by the EU as one of its FET Flagships, this project of unprecedented complexity, co-funded by the EU with an estimated budget of €1.2 billion, has now been set in motion. With more than 130 research institutions from Europe and around the world on board and hundreds of scientists from a myriad of fields participating, the Human Brain Project is the most ambitious neuroscience project ever launched. Its goal: to develop methods that will enable a deep understanding of how the human brain operates. The knowledge gained will be a key element in developing new medical and information technologies.

The Human Brain Project’s initial mission is to launch its six research platforms, each composed of technological tools and methods that ensure the project’s objectives will be met: neuroinformatics, brain simulation, high-performance computing, medical informatics, neuromorphic computing and neurorobotics. Over the next 30 months, scientists will set up and test the platforms. Then, starting in 2016, the platforms will be ready for use by Human Brain Project scientists as well as researchers from around the world. These resources (simulations, high-performance computing, neuromorphic hardware, databases) will be available on a competitive basis, in a manner similar to that of other major research infrastructures, such as the large telescopes used in astronomy.

In the field of neuroscience, researchers will have to manage an enormous amount of data, in particular the data published in thousands of scientific articles every year.


The mission of the neuroinformatics platform will be to extract the maximum amount of information possible from these sources and integrate it into a cartography that encompasses all the brain’s organizational levels, from the individual cell all the way up to the entire brain. This information will be used to develop the brain simulation platform. The high-performance computing platform must ultimately be capable of deploying the necessary computational power to bring these ambitious developments about. Medical doctors associated with the project are charged with developing the best possible methods for diagnosing neurological disease. Being able to detect and identify pathologies very rapidly will allow patients to benefit from personalized treatment before potentially irreversible neurological damage occurs. This is the mission of the medical informatics platform, which will initially concentrate on compiling and analyzing anonymized clinical data from hundreds of patients in collaboration with hospitals and pharmaceutical companies.

The Human Brain Project also includes an important component whose objective is to create neuro-inspired technologies. Microchips are being developed that imitate how networks of neurons function, the idea being to take advantage of the extraordinary learning ability and resiliency of neuronal circuits in a variety of specific applications. This is the mission of the neuromorphic computing platform.
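
To give a flavour of what "imitating how networks of neurons function" means at the level of a single circuit element, the Python sketch below simulates a leaky integrate-and-fire neuron, the kind of simple spiking unit that neuromorphic hardware typically implements directly in silicon. The parameters are illustrative and are not taken from the project's designs.

# Leaky integrate-and-fire neuron: integrate input, spike, reset.
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=0.02,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Integrate dv/dt = (-(v - v_rest) + I) / tau; spike and reset at threshold."""
    v, spike_times = v_rest, []
    for step, current in enumerate(input_current):
        v += dt * (-(v - v_rest) + current) / tau
        if v >= v_thresh:
            spike_times.append(step * dt)   # spike time in seconds
            v = v_reset
    return spike_times

# A constant drive above threshold produces regular, rhythmic spiking
spikes = simulate_lif(np.full(1000, 1.5))   # one simulated second
print(len(spikes), "spikes")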

More information:

10 October 2013

UltraHaptics

Multi-touch surfaces offer easy interaction in public spaces, with people being able to walk up and use them. However, people cannot feel what they have touched. A team from the University of Bristol’s Interaction and Graphics (BIG) research group has developed a solution that not only allows people to feel what is on the screen, but also to receive invisible information before they touch it. UltraHaptics is a system designed to provide multipoint, mid-air haptic feedback above a touch surface. It uses the principle of acoustic radiation force, in which a phased array of ultrasonic transducers exerts forces on a target in mid-air. Haptic sensations are projected through a screen and directly onto the user’s hands. The use of ultrasonic vibrations is a new technique for delivering tactile sensations to the user: a series of ultrasonic transducers emits very high-frequency sound waves.


When all of the sound waves meet at the same location at the same time, they create a sensation on the skin. Through technical evaluations, the team has shown that the system is capable of creating individual points of feedback that are far beyond the perception threshold of the human hand. The researchers have also established the necessary properties of a display surface that is transparent to 40 kHz ultrasound. Two user studies demonstrated that feedback points with different tactile properties can be distinguished even at small separations, and that users are able to identify different tactile properties with training. Finally, the research team explored three new areas of interaction that UltraHaptics makes possible (mid-air gestures, tactile information layers, and visually restricted displays) and created an application for each.
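
The focusing trick behind this is simple to state: drive each transducer with a phase offset that compensates for its distance to the desired focal point, so that every wave arrives there in phase. A minimal Python sketch of that computation follows; the array geometry and focal point are invented for illustration, though the 40 kHz operating frequency matches the article.

# Phase delays that focus a phased ultrasound array on one point in mid-air.
import numpy as np

SPEED_OF_SOUND = 343.0              # m/s in air
FREQ = 40e3                         # 40 kHz ultrasound
WAVELENGTH = SPEED_OF_SOUND / FREQ  # about 8.6 mm

# A 4 x 4 grid of transducers at half-wavelength pitch in the z = 0 plane
pitch = WAVELENGTH / 2
xs, ys = np.meshgrid(np.arange(4) * pitch, np.arange(4) * pitch)
transducers = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(16)])

def focus_phases(focal_point):
    """Per-transducer drive phase (radians) so all waves arrive in phase."""
    dists = np.linalg.norm(transducers - focal_point, axis=1)
    return (2 * np.pi * dists / WAVELENGTH) % (2 * np.pi)

# Focus a tactile point 15 cm above the centre of the array
phases = focus_phases(np.array([1.5 * pitch, 1.5 * pitch, 0.15]))
print(np.round(phases, 2))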

More information:

07 October 2013

Putting a Face on Robots

A new study from the Georgia Institute of Technology finds that older and younger people have varying preferences about what they would want a personal robot to look like. And they change their minds based on what the robot is supposed to do. Participants were shown a series of photos portraying either robotic, human or mixed human-robot faces and were asked to select the one that they would prefer for their robot’s appearance. 


Most college-aged adults in the study preferred a robotic appearance, although they were also generally open to the others. However, nearly 60 percent of older adults said they would want a robot with a human face, and only 6 percent of them chose one with a mixed human-robot appearance. But the preferences in both age groups wavered a bit when participants were told the robot would be assisting with personal care, chores, social interaction or decision-making.

More information:

06 October 2013

Smart Cities

An old port city on Spain's Bay of Biscay has emerged as a prototype for high-tech smart cities worldwide. Blanketed with sensors, it's changing the way its residents live. Apart from the occasional ferry from Britain, this picturesque town doesn't attract many foreign visitors. It turned quite a few heads, then, when delegations from Google, Microsoft and the Japanese government all landed there recently to walk the city streets.


What they've been coming to see, though, is mostly invisible: 12,000 sensors buried under the asphalt, affixed to street lamps and atop city buses. Silently, they survey parking availability and whether the surf's up at local beaches. They can even tell garbage collectors which bins are full, and automatically dim street lights when no one's around. Santander is one of four cities (the others are in Britain, Germany and Serbia) where the sensors are being tested.

More information:

04 October 2013

Self-Assembling Robots

Known as M-Blocks, the robots are cubes with no external moving parts. Nonetheless, they’re able to climb over and around one another, leap through the air, roll across the ground, and even move while suspended upside down from metallic surfaces. Inside each M-Block is a flywheel that can reach speeds of 20,000 revolutions per minute; when the flywheel is braked, it imparts its angular momentum to the cube. On each edge of an M-Block, and on every face, are cleverly arranged permanent magnets that allow any two cubes to attach to each other. Researchers studying reconfigurable robots have long used an abstraction called the sliding-cube model. In this model, if two cubes are face to face, one of them can slide up the side of the other and, without changing orientation, slide across its top. The sliding-cube model simplifies the development of self-assembly algorithms, but the robots that implement them tend to be much more complex devices. To compensate for its static instability, the researchers’ robot relies on some ingenious engineering: on each edge of a cube are two cylindrical magnets, mounted like rolling pins.


When two cubes approach each other, the magnets naturally rotate so that north poles align with south, and vice versa. Any face of any cube can thus attach to any face of any other. The cubes’ edges are also beveled, so when two cubes are face to face, there’s a slight gap between their magnets. When one cube begins to flip on top of another, the bevels, and thus the magnets, touch. The connection between the cubes becomes much stronger, anchoring the pivot. On each face of a cube are four more pairs of smaller magnets, arranged symmetrically, which help snap a moving cube into place when it lands on top of another.

But the researchers believe that a more refined version of their system could prove useful even at something like its current scale. Armies of mobile cubes could temporarily repair bridges or buildings during emergencies, or raise and reconfigure scaffolding for building projects. They could assemble into different types of furniture or heavy equipment as needed. And they could swarm into environments hostile or inaccessible to humans, diagnose problems, and reorganize themselves to provide solutions.
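
A back-of-the-envelope check shows why the flywheel trick is plausible. The Python sketch below compares the energy a braked 20,000 rpm flywheel hands to a cube against the energy needed to tip that cube over one of its edges; every mass and dimension is a guess made for illustration, and only the 20,000 rpm figure comes from the article.

# Rough feasibility check: can the braked flywheel pivot the cube?
import math

a = 0.05          # cube edge length: 5 cm (assumed)
m_cube = 0.15     # cube mass: 150 g (assumed)
m_fly = 0.04      # flywheel mass: 40 g (assumed)
r_fly = 0.015     # flywheel radius: 1.5 cm (assumed)

omega_fly = 20000 * 2 * math.pi / 60        # 20,000 rpm in rad/s
L = 0.5 * m_fly * r_fly**2 * omega_fly      # flywheel angular momentum

# Cube pivoting about one edge: I = (2/3) m a^2 for a solid cube,
# treating the braked flywheel's momentum as transferred to the cube.
I_cube = (2.0 / 3.0) * m_cube * a**2
omega_cube = L / I_cube
E_rot = 0.5 * I_cube * omega_cube**2

# Energy needed to raise the centre of mass from a/2 to a*sqrt(2)/2
E_tip = m_cube * 9.81 * (a / 2) * (math.sqrt(2) - 1)

print(f"rotational energy {E_rot * 1e3:.0f} mJ "
      f"vs tipping barrier {E_tip * 1e3:.0f} mJ")

With these guessed numbers the braked flywheel delivers roughly ten times the energy of the tipping barrier, which is consistent with cubes that can not only roll over one another but leap through the air.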

More information: