30 December 2013

BCI Pong Game

Few video games are more basic than Pong, but Cornell University researchers built a custom electroencephalography (EEG) device so they could control the game's on-screen paddle with their minds. The alpha waves that EEG machines read are faint electrical signals. 

 
They ran the EEG readings through an amplification circuit to filter and boost the signals. Spiking alpha waves produced during relaxation move a player's paddle up, and smaller waves, indicating concentration, move it down. The size of the waves determines how much the paddle moves.
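
For readers who want a feel for the signal processing involved, here is a minimal Python sketch of how amplified alpha-band power might be mapped to paddle movement. The sampling rate, band limits, baseline and gain are illustrative assumptions, not the Cornell team's actual values.

# Hypothetical sketch: map alpha-band power from an amplified EEG signal
# to Pong paddle movement (relaxation -> up, concentration -> down).
# All constants are illustrative, not the Cornell team's values.
import numpy as np

FS = 256                # sampling rate in Hz (assumed)
ALPHA_BAND = (8, 12)    # alpha rhythm frequency range in Hz

def alpha_power(eeg_window):
    """Estimate alpha-band power of a 1-D EEG window via the FFT."""
    spectrum = np.abs(np.fft.rfft(eeg_window)) ** 2
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / FS)
    band = (freqs >= ALPHA_BAND[0]) & (freqs <= ALPHA_BAND[1])
    return spectrum[band].mean()

def paddle_velocity(eeg_window, baseline, gain=0.05):
    """Large alpha (relaxation) moves the paddle up, small alpha (concentration)
    moves it down; the size of the deviation sets how far it moves."""
    deviation = alpha_power(eeg_window) - baseline
    return gain * deviation   # positive -> up, negative -> down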

More information:

27 December 2013

Never Forget A Face

Do you have a forgettable face? Many of us go to great lengths to make our faces more memorable, using makeup and hairstyles to give ourselves a more distinctive look. Now your face could be instantly transformed into a more memorable one without the need for an expensive makeover, thanks to an algorithm developed by researchers in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). The algorithm, which makes subtle changes to various points on the face to make it more memorable without changing a person’s overall appearance, was unveiled earlier this month at the International Conference on Computer Vision in Sydney. The system could ultimately be used in a smartphone app to allow people to modify a digital image of their face before uploading it to their social networking pages. It could also be used for job applications, to create a digital version of an applicant’s face that will more readily stick in the minds of potential employers. Conversely, it could also be used to make faces appear less memorable, so that actors in the background of a television program or film do not distract viewers’ attention from the main actors, for example. To develop the memorability algorithm, the team first fed the software a database of more than 2,000 images.
 

Each of these images had been awarded a ‘memorability score’, based on the ability of human volunteers to remember the pictures. In this way the software was able to analyze the information to detect subtle trends in the features of these faces that made them more or less memorable to people. The researchers then programmed the algorithm to make the face as memorable as possible, but without changing the identity of the person or altering their facial attributes, such as their age, gender, or overall attractiveness. Changing the width of a nose may make a face look much more distinctive, for example, but it could also completely alter how attractive the person is, and so would fail to meet the algorithm’s objectives. When the system has a new face to modify, it first takes the image and generates thousands of copies. Each of these copies contains tiny modifications to different parts of the face. The algorithm then analyzes how well each of these samples meets its objectives. Once the algorithm finds a copy that succeeds in making the face look more memorable without significantly altering the person’s appearance, it makes yet more copies of this new image, with each containing further alterations. It then keeps repeating this process until it finds a version that best meets its objectives.
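
The generate-score-select loop the article describes can be sketched in a few lines of Python. The memorability_score and attribute_distance functions below are placeholders standing in for CSAIL's learned models; everything here is illustrative rather than the published algorithm.

# Illustrative sketch of the hill-climbing loop described above: generate many
# slightly perturbed copies of a face, keep the copy that raises a memorability
# score without drifting too far from the original identity and attributes.
# `memorability_score` and `attribute_distance` are placeholder models.
import random

def perturb(face, scale=0.01):
    """Return a copy of the face with tiny random changes to its landmarks."""
    return [x + random.uniform(-scale, scale) for x in face]

def optimize_memorability(face, memorability_score, attribute_distance,
                          n_copies=1000, n_rounds=10, max_drift=0.1):
    best = face
    for _ in range(n_rounds):
        candidates = [perturb(best) for _ in range(n_copies)]
        # keep only candidates that preserve identity and attributes
        valid = [c for c in candidates if attribute_distance(face, c) < max_drift]
        if not valid:
            break
        champion = max(valid, key=memorability_score)
        if memorability_score(champion) > memorability_score(best):
            best = champion
    return best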

More information:

22 December 2013

Meta Augmented Reality Glasses

Meta Augmented Reality Glasses are now available for pre-order on the crowdfunding site Kickstarter. A wearable computing device that combines a dual-screen 3D augmented reality display with super-low-latency gestural input, this technology allows for full mapping of the user’s environment and control of the augmented reality display. Meta is an amalgamation of a Glass-type user interface with Xbox Kinect-type spatial tracking. This combination, while not unobtrusive, allows the wearer to use her hands to interact with virtual objects layered over reality in real time. While the first generation of Meta glasses is presented as a usable developer kit for programmers and early-adopter technophiles, the concept Meta 2 shrinks the cameras to negligible size, resulting in augmented reality glasses that are roughly the same size as present-day Google Glass.


At this time, only the Windows platform is compatible with the Meta augmented reality glasses. However, the company assures us that support for other platforms, such as OS X and Linux, is currently in development. The Meta glasses include two individual cameras projecting at a respectable resolution of 960×540 for each eye. For comparison, Google Glass’s single-eye resolution is listed at 640×360. And, unlike Google Glass, which simply provides a data-filled pop-up in the corner of one eye, the Meta immerses the user in 46 degrees (23 degrees for each eye) of augmented reality and virtual objects. The current Meta glasses developer’s kit is tethered and requires a wired connection to a Windows computer. However, the Meta 2 consumer version is expected to be wireless, which we believe is a necessity for commercial success.

More information:

18 December 2013

Leaner Fourier Transforms

The fast Fourier transform (FFT), one of the most important algorithms of the 20th century, revolutionized signal processing. The algorithm allowed computers to quickly perform Fourier transforms (fundamental operations that separate signals into their individual frequencies) leading to developments in audio and video engineering and digital data compression. But ever since its development in the 1960s, computer scientists have been searching for an algorithm to better it.


Last year, MIT researchers did just that, unveiling an algorithm that in some circumstances can perform Fourier transforms hundreds of times more quickly than the FFT. Recently, researchers within the Computer Science and Artificial Intelligence Laboratory (CSAIL) have gone a step further, significantly reducing the number of samples that must be taken from a given signal in order to perform a Fourier transform operation.
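
As a point of reference, here is a minimal NumPy illustration of the classic Fourier transform the article refers to, applied to a signal dominated by just two frequencies; this spectral sparsity is exactly what the newer MIT algorithms exploit, although the sparse FFT itself is not shown here.

# Minimal illustration (not MIT's sparse FFT): a signal dominated by a few
# frequencies, the kind of spectral sparsity the new algorithms exploit.
import numpy as np

fs = 1000                      # sample rate in Hz
t = np.arange(0, 1, 1 / fs)    # one second of samples
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.rfft(signal)                # classic fast Fourier transform
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
dominant = freqs[np.argsort(np.abs(spectrum))[-2:]]
print(sorted(dominant.tolist()))              # -> [50.0, 120.0]: only two frequencies matter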

More information:

17 December 2013

New WAVE Display Technology

The University of California, San Diego’s new WAVE display, true to its name, is shaped like an ocean wave, with a curved wall array of thirty-five 55-inch LG commercial LCD monitors that ends in a ‘crest’ above the viewer’s head and a trough at his or her feet. The WAVE (Wide-Angle Virtual Environment), a 5x7 array of HDTVs, is now 20’ long by nearly 12’ high. Under the leadership of researchers at the UC San Diego division of the California Institute for Telecommunications and Information Technology (Calit2) – known as the Qualcomm Institute (QI) – high-resolution computerized displays have evolved over the past decade from 2D to 3D panels and from one monitor to arrays of many monitors. They’ve transitioned from stationary structures to structures on wheels, and from thick bezels (the rim that holds the glass display) to ultra-narrow bezels. Such technology is now widely used in television newsrooms, airports and even retail stores, but not in 3D like the WAVE.

 
The WAVE was designed as part of the SCOPE project, or Scalable Omnipresent Environment, which serves as both a microscope and a telescope, enabling users to explore data from the nano to micro to macro to mega scale. Earlier projector-based technologies, such as the QI StarCAVE, provide the feeling of being surrounded by an image and make it possible to ‘walk through’ a model of a protein or a building, for example, but the StarCAVE requires a huge room, and is not movable or replicable. By contrast, the WAVE can be erected against a standing wall and can be moved and replicated. WAVE content can be clearly viewed by 20 or more people at once, something not possible with earlier immersive displays at UCSD. Its curved aluminum structure is also a technical ‘fix’ for the problem of images on 3D passively polarized screens appearing as double images when placed in a large, flat array. With a curved array, the viewer can stand anywhere in front of the WAVE and experience excellent 3D with no visual distortion.

More information:

08 December 2013

Tongue Navigation System

Researchers proposed a wearable system that allows paralyzed people to navigate their worlds with just flicks of their pierced tongues. The technology, still under development, could help patients disabled from the neck down access their worlds with far greater ease than current assistive systems offer – and with a tongue piercing, to boot. The Tongue Drive System (TDS) works like this: a magnetic tongue stud relays the wearer’s tongue movements to a headset, which then sends the commands to a smartphone or another WiFi-connected device. The user can control almost anything that a smartphone can – and a smartphone can do a lot, including drive a wheelchair, surf the web, and adjust the thermostat.


TDS is just one of a new crop of innovative assistive technologies for paralyzed patients, along with equipment that tracks eye movements, responds to voice commands, or follows neck movements. Still, these systems have distinct limitations: the neck can tire from prolonged use, background noise muddles voice commands, and eye-tracking headsets are cumbersome. Electrodes implanted in the brain have produced some good results, but they require brain surgery. In their lab tests, researchers compared TDS to one popular assistive system known as sip-and-puff. Users of that system sip or puff air into a straw connected to their wheelchair. The airflow relays commands that move the chair either forward or backward, or to either side.
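
A hypothetical sketch of the decoding step described above: the headset's magnetic sensors report the tongue stud's position, which is matched against calibrated positions to produce a command. The positions, threshold and command names are invented for illustration and are not the TDS implementation.

# Hypothetical sketch of a tongue-position decoder: match the sensed stud
# position to the nearest calibrated command. Values are illustrative only.
import math

# calibration: average sensor reading recorded while the user holds each command
CALIBRATED_COMMANDS = {
    "forward":  (0.0,  1.0),
    "backward": (0.0, -1.0),
    "left":     (-1.0, 0.0),
    "right":    (1.0,  0.0),
    "neutral":  (0.0,  0.0),
}

def decode_command(sensor_xy, max_distance=0.5):
    """Return the calibrated command closest to the current tongue position."""
    best, best_d = "neutral", float("inf")
    for command, reference in CALIBRATED_COMMANDS.items():
        d = math.dist(sensor_xy, reference)
        if d < best_d:
            best, best_d = command, d
    return best if best_d <= max_distance else "neutral"

print(decode_command((0.1, 0.9)))   # -> "forward"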

More information:

07 December 2013

The Social Robot

An increasingly important part of daily life is dealing with so-called user interfaces. Whether it's a smartphone or an airport check-in system, the user's ability to get what they want out of the machine relies on their own adaptability to unfamiliar interfaces. But what if you could simply talk to a machine the way you talk to a human being? And what if the machine could also ask you questions, or even address two different people at once? These kinds of interactive abilities are being developed at KTH Royal Institute of Technology with the help of an award-winning robotic head that takes its name from the fur hat it wears. With a computer-generated, animated face that is rear-projected on a 3D mask, Furhat is actually a platform for testing various interactive technologies, such as speech synthesis, speech recognition and eye-tracking. 


The robot can conduct conversations with multiple people, turning its head and looking each person straight in the eye, while moving its animated lips in synch with its words. The project represents the third generation of spoken dialogue systems that has been in development at KTH's Department for Speech, Music and Hearing during the last 15 years. The Furhat team aims to develop its technology for commercial use, with the help of funding from Sweden's Vinnova, a government agency that supports innovation projects. Furhat is becoming a popular research platform for scientists around the world who study human interaction with machines. It's very simple, it's potentially very cheap to make, and people want to use it in their own research areas. Furhat also has attracted attention from researchers at Microsoft and Disney.

More information:

03 December 2013

Personalised Virtual Birth Simulator

Computer scientists from the University of East Anglia are working to create a virtual birthing simulator that will help doctors and midwives prepare for unusual or dangerous births. The new programme will take into account factors such as the shape of the mother’s body and the positioning of the baby to provide patient-specific birth predictions.


The simulation software will use ultrasound data to re-create a geometric 3D model of the baby’s skull and body, as well as the mother’s body and pelvis. Programmers are also taking into account the force from the mother pushing during labour and are even modelling a ‘virtual’ midwife’s hands, which can interact with the baby’s head.

More information:

27 November 2013

3D Imaging Using Nash's Theorem

UT Dallas computer scientists have developed a technique for making 3D images that puts a theory created by a famous mathematician to practical use. This technique uses anisotropic triangles – triangles with sides that vary in length depending on their direction – to create 3D mesh computer graphics that more accurately approximate the shapes of the original objects, and in a shorter amount of time than current techniques. These types of images are used in movies, video games and computer modeling of various phenomena, such as the flow of water or air across the Earth, the deformation and wrinkles of clothes on the human body, or in mechanical and other types of engineering designs. Researchers hope this technique will also lead to greater accuracy in models of human organs to more effectively treat human diseases, such as cancer. The technique finds a practical application of the Nash embedding theorem, which was named after mathematician John Forbes Nash Jr. The computer graphics field represents shapes in the virtual world through triangle meshes.

Traditionally, it is believed that isotropic triangles – where each side of the triangle has the same length regardless of direction – are the best representation of shapes. However, the aggregate of these uniform triangles can create edges or bumps that are not on the original objects. Because triangle sides can differ in anisotropic images, creating images with this technique gives the user the flexibility to represent object edges or folds more accurately. Researchers found that replacing isotropic triangles with anisotropic triangles in the particle-based method of creating images resulted in smoother representations of objects. Depending on the curvature of the objects, the technique can generate the image up to 125 times faster than common approaches. Objects built from anisotropic triangles are more accurate, and the difference is most noticeable to the human eye in the wrinkles and movement of clothes on human figures. The next step of this research is moving from representing the surface of 3D objects to representing 3D volumes.
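
The core idea behind anisotropic meshing can be illustrated with a small sketch (this shows the general principle, not the UT Dallas particle-based algorithm): edge lengths are measured under a direction-dependent metric, so triangles may stretch along directions of low curvature while staying short across sharp features.

# Sketch of the general anisotropic-meshing idea: measure edge lengths under a
# direction-dependent metric tensor M, so triangles can stretch where the
# surface is flat and stay short where it bends. Illustrative values only.
import numpy as np

def anisotropic_length(p, q, M):
    """Length of edge pq under metric M: sqrt((q - p)^T M (q - p))."""
    e = np.asarray(q, float) - np.asarray(p, float)
    return float(np.sqrt(e @ M @ e))

# A metric that penalizes the y-direction 16x more than x: triangles built to
# unit anisotropic edge length end up four times longer in x than in y.
M = np.array([[1.0, 0.0],
              [0.0, 16.0]])
print(anisotropic_length((0, 0), (1, 0), M))   # 1.0  (may be long in x)
print(anisotropic_length((0, 0), (0, 1), M))   # 4.0  (must stay short in y)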

More information:

23 November 2013

New Algorithms Improve Animations

A team led by Disney Research, Zürich has developed a method to more efficiently render animated scenes that involve fog, smoke or other substances that affect the travel of light, significantly reducing the time necessary to produce high-quality images or animations without grain or noise. The method, called joint importance sampling, helps identify potential paths that light can take through a foggy or underwater scene that are most likely to contribute to what the camera – and the viewer – ultimately sees. In this way, less time is wasted computing paths that aren't necessary to the final look of an animated sequence. Light rays are deflected or scattered not only when they bounce off a solid object, but also as they pass through aerosols and liquids. The effect of clear air is negligible for rendering algorithms used to produce animated films, but realistically producing scenes including fog, smoke, smog, rain, underwater scenes, or even a glass of milk requires computational methods that account for these participating media. So-called Monte Carlo algorithms are increasingly being used to render such phenomena in animated films and special effects. These methods operate by analyzing a random sampling of possible paths that light might take through a scene and then averaging the results to create the overall effect. 


But researchers explained that not all paths are created equal. Some paths end up being blocked by an object or surface in the scene; in other cases, a light source may simply be too far from the camera to have much chance of being seen. Calculating those paths can be a waste of computing time or, worse, averaging them may introduce error, or noise, that creates unwanted effects in the animation. Computer graphics researchers have tried various ‘importance sampling’ techniques to increase the probability that the random light paths calculated will ultimately contribute to the final scene and keep noise to a minimum. Some techniques trace the light from its source to the camera; others from the camera back to the source. Some are bidirectional – tracing the light from both the camera and the source before connecting them together. Unfortunately, even such sophisticated bidirectional techniques compute the light and camera portions of the paths independently, without knowledge of each other, before connecting them together, so they are unlikely to construct full light paths that ultimately have a strong contribution to the final image. By contrast, the joint importance sampling method developed by the Disney Research team chooses the locations along the random paths with mutual knowledge of the camera and light source locations.
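
For readers unfamiliar with the underlying machinery, here is a generic Monte Carlo importance-sampling estimator, the building block the article discusses rather than Disney's joint method itself: samples are drawn from a proposal density that concentrates effort where the integrand contributes most, and each sample is weighted by the inverse of its probability.

# Generic Monte Carlo importance sampling (not Disney's joint method):
# draw samples from a proposal density p that concentrates effort where the
# integrand is large, and weight each sample by 1 / p(x).
import math
import random

def estimate(f, sample_p, pdf_p, n=100_000):
    """Unbiased Monte Carlo estimate of the integral of f using proposal density p."""
    total = 0.0
    for _ in range(n):
        x = sample_p()
        total += f(x) / pdf_p(x)
    return total / n

# Integrand peaked near 0 on [0, 1]; an exponential proposal puts most samples
# where the integrand matters, so the estimate has far less noise than uniform
# sampling would give.
integrand = lambda x: math.exp(-10 * x) if x <= 1.0 else 0.0
lam = 10.0
sample_p = lambda: random.expovariate(lam)      # draws concentrated near 0
pdf_p = lambda x: lam * math.exp(-lam * x)
print(estimate(integrand, sample_p, pdf_p))     # ~0.0999 = (1 - e**-10) / 10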

More information:

21 November 2013

Brain's Crowdsourcing Software

Over the past decade, popular science has been suffering from neuromania. The enthusiasm came from studies showing that particular areas of the brain ‘light up’ when you have certain thoughts and experiences. It's mystifying why so many people thought this explained the mind. What have you learned when you say that someone's visual areas light up when they see things? People still seem to be astonished at the very idea that the brain is responsible for the mind—a bunch of gray goo makes us see! It is astonishing. But scientists knew that a century ago; the really interesting question now is how the gray goo lets us see, think and act intelligently. New techniques are letting scientists understand the brain as a complex, dynamic, computational system, not just a collection of individual bits of meat associated with individual experiences. These new studies come much closer to answering the ‘how’ question. Fifty years ago researchers made a great Nobel Prize-winning discovery. They recorded the signals from particular neurons in cats' brains as the animals looked at different patterns. The neurons responded selectively to some images rather than others. One neuron might only respond to lines that slanted right, another only to those slanting left. But many neurons don't respond in this neatly selective way. This is especially true for the neurons in the parts of the brain that are associated with complex cognition and problem-solving, like the prefrontal cortex. Instead, these cells were a mysterious mess—they respond idiosyncratically to different complex collections of features. What were these neurons doing?


In a new study, researchers at Columbia University and the Massachusetts Institute of Technology taught monkeys to remember and respond to one shape rather than another while they recorded their brain activity. But instead of just looking at one neuron at a time, they recorded the activity of many prefrontal neurons at once. A number of them showed weird, messy ‘mixed selectivity’ patterns. One neuron might respond when the monkey remembered just one shape or only when it recognized the shape but not when it recalled it, while a neighboring cell showed a different pattern. To analyze how the whole group of cells worked, the researchers turned to the techniques of computer scientists who are trying to design machines that can learn. Computers aren't made of carbon, of course, let alone neurons. But they have to solve some of the same problems, like identifying and remembering patterns. The techniques that work best for computers turn out to be remarkably similar to the techniques that brains use. Essentially, they found the brain was using the same general sort of technique that Google uses for its search algorithm. You might think that the best way to rank search results would be to pick out a few features of each Web page like ‘relevance’ or ‘trustworthiness’. With neurons that detect just a few features, you can capture those features and combinations of features, but not much more. To capture more complex patterns, the brain does better by amalgamating and integrating information from many different neurons with very different response patterns.

More information:

18 November 2013

Computational Creativity

IBM has built a computational creativity machine that creates entirely new and useful stuff from its knowledge of existing stuff. But can computers be creative? That’s a question likely to generate controversial answers. It also raises some important issues, like how to define creativity. Seemingly unafraid of the controversy, IBM has darted into the fray by answering this poser with a resounding ‘yes’. Computers can be creative, they say, and to prove it they have built a computational creativity machine that produces results that a knowledgeable human would consider novel, useful and even valuable—the hallmarks of genuine creativity. IBM’s chosen field for this endeavour is cooking. The company’s creativity machine produces recipes based on chosen ingredients or cooking styles. And they’ve asked professional chefs to evaluate the results and say the feedback is promising. Computational machines have evolved a great deal since they were first used in war for code-cracking and gun-aiming and in business for storing, tabulating and processing data. But it has taken some time for these machines to match human capabilities. In 1997, for instance, IBM’s Deep Blue machine used deductive reasoning to beat the world chess champion for the first time. Its successor, a computer called Watson, went a step further in 2011 by applying inductive reasoning to huge datasets to beat human experts on the TV game show Jeopardy!.
Their first problem, of course, is to define creativity. The choice of problem, to create new recipes, is clearly a human decision. The team then gathered information by downloading a large corpus of recipes that includes dishes from all over the world and covers a wide variety of ingredients, combinations of flavours, serving suggestions and so on. They also downloaded related information such as descriptions of regional cuisines from Wikipedia, the concentration of flavour ingredients in different foodstuffs from the ‘Volatile Compounds in Food’ database and Fenaroli’s Handbook of Flavor Ingredients. So big data lies at the heart of this approach. They then developed a method for combining ingredients in ways that have never been attempted, using a ‘novelty algorithm’ that determines how surprising the resulting recipe will appear to an expert observer. This relies on factors such as ‘flavour pleasantness’. The computer assesses this using a training set of flavours that people find pleasant as well as the molecular properties of the food that produce these flavours, such as its surface area, heavy atom count, complexity, rotatable bond count, hydrogen bond acceptor count and so on. The last stage is an interface that allows a human expert to enter some starting ingredients such as pork belly or salmon fillet and perhaps a choice of cuisine. The computer generates a number of novel dishes, explaining its reasoning for each. Of these, the expert chooses one and then makes it.
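
A toy sketch of the two scores the article mentions, a novelty score and a pleasantness score, might look like the following; the data and formulas are invented for illustration and are not IBM's actual system.

# Toy sketch (not IBM's system): a novelty score based on how rarely an
# ingredient pair appears in a recipe corpus, and a pleasantness score built
# from per-ingredient flavour values learned from a training set.
from itertools import combinations

def novelty(recipe, corpus_pair_counts, total_recipes):
    """Average 'surprise' of the recipe's ingredient pairs: rare pairs score high."""
    pairs = list(combinations(sorted(recipe), 2))
    surprise = [1.0 - corpus_pair_counts.get(p, 0) / total_recipes for p in pairs]
    return sum(surprise) / len(surprise)

def pleasantness(recipe, flavour_scores):
    """Mean of per-ingredient pleasantness values."""
    return sum(flavour_scores[i] for i in recipe) / len(recipe)

# Hypothetical data (pair keys stored in sorted order)
corpus_pair_counts = {("apple", "cinnamon"): 800, ("apple", "pork"): 150}
flavour_scores = {"pork": 0.7, "apple": 0.8, "cinnamon": 0.9, "anchovy": 0.4}

candidate = ["pork", "apple", "cinnamon"]
print(novelty(candidate, corpus_pair_counts, total_recipes=1000))   # ~0.68
print(pleasantness(candidate, flavour_scores))                      # 0.8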

More information:

16 November 2013

Human Touch Makes Robots Defter

Cornell engineers are helping humans and robots work together to find the best way to do a job, an approach called ‘coactive learning’. Modern industrial robots, like those on automobile assembly lines, have no brains, just memory. An operator programs the robot to move through the desired action; the robot can then repeat the exact same action every time a car goes by. But off the assembly line, things get complicated: A personal robot working in a home has to handle tomatoes more gently than canned goods. If it needs to pick up and use a sharp kitchen knife, it should be smart enough to keep the blade away from humans. Researchers set out to teach a robot to work on a supermarket checkout line, modifying a Baxter robot from Rethink Robotics in Boston, designed for assembly line work. It can be programmed by moving its arms through an action, but also offers a mode where a human can make adjustments while an action is in progress. The Baxter’s arms have two elbows and a rotating wrist, so it’s not always obvious to a human operator how best to move the arms to accomplish a particular task. So the researchers, drawing on previous work, added programming that lets the robot plan its own motions. It displays three possible trajectories on a touch screen where the operator can select the one that looks best.


Then humans can give corrective feedback. As the robot executes its movements, the operator can intervene, guiding the arms to fine-tune the trajectory. The robot has what the researchers call a ‘zero-G’ mode, where the robot's arms hold their position against gravity but allow the operator to move them. The first correction may not be the best one, but it may be slightly better. The learning algorithm the researchers provided allows the robot to learn incrementally, refining its trajectory a little more each time the human operator makes adjustments. Even with weak but incrementally correct feedback from the user, the robot arrives at an optimal movement. The robot learns to associate a particular trajectory with each type of object. A quick flip over might be the fastest way to move a cereal box, but that wouldn’t work with a carton of eggs. Also, since eggs are fragile, the robot is taught that they shouldn’t be lifted far above the counter. Likewise, the robot learns that sharp objects shouldn’t be moved in a wide swing; they are held in close, away from people. In tests with users who were not part of the research team, most users were able to train the robot successfully on a particular task with just five corrective feedbacks. The robots also were able to generalize what they learned, adjusting when the object, the environment or both were changed.
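
The incremental update at the heart of coactive learning can be sketched very compactly. The following is a minimal version of the general technique (a perceptron-style step toward the user's correction), not necessarily the exact Cornell implementation; the feature names are assumptions.

# Minimal coactive-learning style sketch: the robot ranks candidate
# trajectories by w . phi(trajectory); each user correction nudges the
# weights toward the corrected trajectory's features.
import numpy as np

class CoactiveLearner:
    def __init__(self, n_features, lr=1.0):
        self.w = np.zeros(n_features)
        self.lr = lr

    def best_of(self, candidate_features):
        """Pick the candidate trajectory with the highest current score."""
        scores = [self.w @ np.asarray(phi) for phi in candidate_features]
        return int(np.argmax(scores))

    def update(self, proposed_phi, corrected_phi):
        """Perceptron-style step toward the user's (slightly better) correction."""
        self.w += self.lr * (np.asarray(corrected_phi) - np.asarray(proposed_phi))

# Usage: features might encode distance of a blade from the user, height of
# fragile objects above the counter, smoothness of the motion, and so on.
learner = CoactiveLearner(n_features=3)
learner.update(proposed_phi=[0.2, 0.9, 0.5], corrected_phi=[0.1, 0.4, 0.6])
print(learner.w)   # weights now prefer trajectories like the corrected one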

More information:

15 November 2013

Holograms Set for Greatness

A new technique that combines optical plates to manipulate laser light improves the quality of holograms. Holography makes use of the peculiar properties of laser light to record and later recreate three-dimensional images, adding depth to conventionally flat pictures. Researchers at the A*STAR Data Storage Institute, Singapore, have now developed a method for increasing the number of pixels that constitute a hologram, thus enabling larger and more realistic three-dimensional (3D) images. Holographic imaging works by passing a laser beam through a plate on which an encoded pattern, known as a hologram, is stored or recorded. The laser light scatters from features on the plate in a way that gives the impression of a real three-dimensional object. With the help of a scanning mirror, the system built by the researchers combines 24 of these plates to generate a hologram consisting of 377.5 million pixels. A previous approach by a different team only managed to achieve approximately 100 million pixels.


The researchers patterned the plates, made of a liquid-crystal material on a silicon substrate, with a computer-generated hologram. Each plate, also called a spatial light modulator (SLM), consisted of an array of 1,280 by 1,024 pixels. Simply stacking the plates to increase the total number of pixels, however, created ‘optical gaps’ between them. As a workaround, the researchers tiled 24 SLMs into an 8 by 3 array on two perpendicular mounting plates separated by an optical beam splitter. They then utilized a scanning mirror to direct the laser light from the combined SLM array to several predetermined positions. The team demonstrated that by shining green laser light onto this composite holographic plate, they could create 3D objects that replayed at a rate of 60 FPS in a 10 by 3-inch display window. This simple approach for increasing the pixel count of holograms should help researchers develop 3D holographic displays that are much more realistic than those commercially available.
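
The quoted pixel count is consistent with the scanning mirror time-multiplexing the tiled SLMs over about a dozen positions; the factor of 12 below is an inference from the arithmetic, not a figure stated in the article.

# Back-of-the-envelope check of the pixel count. The factor of 12 scan
# positions is inferred from the arithmetic, not taken from the article.
pixels_per_slm = 1280 * 1024          # one spatial light modulator
tiled = 24 * pixels_per_slm           # 8 x 3 array on two mounting plates
print(tiled)                          # 31,457,280 physical pixels
print(tiled * 12)                     # 377,487,360, roughly the quoted 377.5 million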

More information:

12 November 2013

Gestural Interface for Smart Watches

If just thinking about using a tiny touch screen on a smart watch has your fingers cramping up, researchers at the University of California at Berkeley and Davis may soon offer some relief: they’re developing a tiny chip that uses ultrasound waves to detect a slew of gestures in three dimensions. The chip could be implanted in wearable gadgets.


The technology, called Chirp, is slated to be spun out into its own company, Chirp Microsystems, to produce the chips and sell them to hardware manufacturers. They hope that Chirp will eventually be used in everything from helmet cams to smart watches—basically any electronic device you want to control but don’t have a convenient way to do so.

More information:

11 November 2013

Monkeys Use Minds to Control Avatar Arms

Most of us don’t think twice when we extend our arms to hug a friend or push a shopping cart—our limbs work together seamlessly to follow our mental commands. For researchers designing brain-controlled prosthetic limbs for people, however, this coordinated arm movement is a daunting technical challenge. A new study showing that monkeys can move two virtual limbs with only their brain activity is a major step toward achieving that goal, scientists say.


The brain controls movement by sending electrical signals to our muscles through nerve cells. When limb-connecting nerve cells are damaged or a limb is amputated, the brain is still able to produce those motion-inducing signals, but the limb can't receive them or simply doesn’t exist. In recent years, scientists have worked to create devices called brain-machine interfaces (BMIs) that can pick up these interrupted electrical signals and control the movements of a computer cursor or a real or virtual prosthetic.

More information:

29 October 2013

Effective Motion Tracking Technology

Researchers at Carnegie Mellon University and Disney Research Pittsburgh have devised a motion tracking technology that could eliminate much of the annoying lag that occurs in existing video game systems that use motion tracking, while also being extremely precise and highly affordable. Called Lumitrack, the technology has two components -- projectors and sensors. A structured pattern, which looks something like a very large barcode, is projected over the area to be tracked. 


Sensor units, either near the projector or on the person or object being tracked, can then quickly and precisely locate movements anywhere in that area. Lumitrack also is extremely precise, with sub-millimeter accuracy. Moreover, this performance is achieved at low cost. The sensors require little power and would be inexpensive to assemble in volume. The components could even be integrated into mobile devices, such as smartphones.

More information:

21 October 2013

Automatic Speaker Tracking

A central topic in spoken-language-systems research is what’s called speaker diarization, or computationally determining how many speakers feature in a recording and which of them speaks when. Speaker diarization would be an essential function of any program that automatically annotated audio or video recordings. To date, the best diarization systems have used what’s called supervised machine learning: They’re trained on sample recordings that a human has indexed, indicating which speaker enters when. However, MIT researchers describe a new speaker-diarization system that achieves comparable results without supervision: No prior indexing is necessary.


Moreover, one of the MIT researchers’ innovations was a new, compact way to represent the differences between individual speakers’ voices, which could be of use in other spoken-language computational tasks. To create a sonic portrait of a single speaker, the researchers explain, a computer system will generally have to analyze more than 2,000 different speech sounds; many of those may correspond to familiar consonants and vowels, but many may not. To characterize each of those sounds, the system might need about 60 variables, which describe properties such as the strength of the acoustic signal in different frequency bands.
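
A generic front end for this kind of task might look like the sketch below: cut the recording into short frames, describe each frame by its energy in a handful of frequency bands, and cluster the frames so that those from the same speaker group together. This is a simplified illustration, not the MIT researchers' representation.

# Simplified diarization sketch (not MIT's system): frame-level band energies
# followed by a tiny k-means, where each cluster is a hypothesised speaker.
import numpy as np

def band_energies(frame, n_bands=8):
    """Energy of the frame in n_bands equal-width frequency bands."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.array([b.sum() for b in bands])

def naive_diarize(frames, n_speakers=2, iters=20):
    """Cluster frames by their spectral shape; cluster index = speaker label."""
    feats = np.array([band_energies(f) for f in frames])
    feats = feats / (feats.sum(axis=1, keepdims=True) + 1e-9)   # normalize
    centers = feats[np.random.choice(len(feats), n_speakers, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((feats[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([feats[labels == k].mean(axis=0) if np.any(labels == k)
                            else centers[k] for k in range(n_speakers)])
    return labels   # one speaker label per frame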

More information:

20 October 2013

Kinect of the Future

Massachusetts Institute of Technology researchers have developed a device that can see through walls and pinpoint a person with incredible accuracy. They call it the ‘Kinect of the future’, after Microsoft's Xbox 360 motion sensing camera. The project from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) used three radio antennas spaced about a meter apart and pointed at a wall. A desk cluttered with wires and circuits generated and interpreted the radio waves. On the other side of the wall a single person walked around the room and the system represented that person as a red dot on a computer screen. The system tracked the movements with an accuracy of plus or minus 10 centimeters, which is about the width of an adult hand.


In the room where users walked around there was white tape on the floor in a circular design. The tape on the floor was also in the virtual representation of the room on the computer screen. It wasn't being used as an aid to the technology; rather, it showed onlookers just how accurate the system was. As testers walked on the floor design their actions were mirrored on the computer screen. One of the drawbacks of the system is that it can only track one moving person at a time and the area around the project needs to be completely free of movement. That meant that when the group wanted to test the system they would need to leave the room with the transmitters as well as the surrounding area; only the person being tracked could be nearby.
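
The geometry behind this kind of radio localization can be sketched with simple trilateration: each antenna's time-of-flight measurement gives a distance to the person, and intersecting the resulting circles pins down the position. This is a simplification for illustration (with a non-collinear antenna layout chosen so the equations are solvable), not the MIT system's actual processing.

# Simplified trilateration sketch (not the MIT system): distances measured at
# several antennas are turned into a position by least squares.
import numpy as np

def locate(antennas, distances):
    """Solve for (x, y) from antenna positions and measured ranges."""
    (x0, y0), d0 = antennas[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(antennas[1:], distances[1:]):
        # subtracting the first circle's equation linearizes the problem
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

# Non-collinear layout chosen for the illustration; the person is 'behind the wall'.
antennas = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.8)]
true_position = np.array([1.2, 3.0])
distances = [float(np.linalg.norm(true_position - np.array(a))) for a in antennas]
print(locate(antennas, distances))        # ~[1.2, 3.0]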

More information:

13 October 2013

The Human Brain Project

Six months after its selection by the EU as one of its FET Flagships, this project of unprecedented complexity, co-funded by the EU with an estimated budget of €1.2 billion, has now been set in motion. With more than 130 research institutions from Europe and around the world on board and hundreds of scientists in a myriad of fields participating, the Human Brain Project is the most ambitious neuroscience project ever launched. Its goal: develop methods that will enable a deep understanding of how the human brain operates. The knowledge gained will be a key element in developing new medical and information technologies. The Human Brain Project’s initial mission is to launch its six research platforms, each composed of technological tools and methods that ensure that the project’s objectives will be met. These are: neuroinformatics, brain simulation, high-performance computing, medical informatics, neuromorphic computing and neurorobotics. Over the next 30 months, scientists will set up and test the platforms. Then, starting in 2016, the platforms will be ready to use by Human Brain Project scientists as well as researchers from around the world. These resources — simulations, high-performance computing, neuromorphic hardware, databases — will be available on a competitive basis, in a manner similar to that of other major research infrastructures, such as the large telescopes used in astronomy. In the field of neuroscience, the researchers will have to manage an enormous amount of data — in particular, the data that are published in thousands of scientific articles every year. 


The mission of the neuroinformatics platform will be to extract the maximum amount of information possible from these sources and integrate it into a cartography that encompasses all the brain’s organizational levels, from the individual cell all the way up to the entire brain. This information will be used to develop the brain simulation platform. The high-performance computing platform must ultimately be capable of deploying the necessary computational power to bring these ambitious developments about. Medical doctors associated with the project are charged with developing the best possible methods for diagnosing neurological disease. Being able to detect and identify pathologies very rapidly will allow patients to benefit from personalized treatment before potentially irreversible neurological damage occurs. This is the mission of the medical informatics platform, which will initially concentrate on compiling and analyzing anonymized clinical data from hundreds of patients in collaboration with hospitals and pharmaceutical companies. The Human Brain Project includes an important component whose objective is to create neuro-inspired technologies. Microchips are being developed that imitate how networks of neurons function — the idea being to take advantage of the extraordinary learning ability and resiliency of neuronal circuits in a variety of specific applications. This is the mission of the neuromorphic computing platform.

More information:

10 October 2013

UltraHaptics

Multi-touch surfaces offer easy interaction in public spaces, with people being able to walk up and use them. However, people cannot feel what they have touched. A team from the University of Bristol’s Interaction and Graphics (BIG) research group has developed a solution that not only allows people to feel what is on the screen, but also receive invisible information before they touch it. UltraHaptics is a system designed to provide multipoint, mid-air haptic feedback above a touch surface. UltraHaptics uses the principle of acoustic radiation force, where a phased array of ultrasonic transducers is used to exert forces on a target in mid-air. Haptic sensations are projected through a screen and directly onto the user’s hands. The use of ultrasonic vibrations is a new technique for delivering tactile sensations to the user. A series of ultrasonic transducers emits very high frequency sound waves.


When all of the sound waves meet at the same location at the same time, they create sensations on a human’s skin. By carrying out technical evaluations, the team has shown that the system is capable of creating individual points of feedback that are far beyond the perception threshold of the human hand.  The researchers have also established the necessary properties of a display surface that is transparent to 40 kHz ultrasound. The results from two user studies have demonstrated that feedback points with different tactile properties can be distinguished at smaller separations.  The researchers also found that users are able to identify different tactile properties with training. Finally, the research team explored three new areas of interaction possibilities that UltraHaptics can provide: mid-air gestures, tactile information layers and visually restricted displays, and created an application for each.
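
The phased-array principle can be illustrated with a short sketch: delay each transducer so that its 40 kHz wave arrives at the chosen focal point in phase with every other transducer's wave, concentrating acoustic radiation force at that spot. The array geometry below is an assumption for illustration, not the UltraHaptics hardware.

# Sketch of phased-array focusing (illustrative geometry, not UltraHaptics):
# delay each transducer so all 40 kHz waves arrive at the focal point in phase.
import numpy as np

SPEED_OF_SOUND = 343.0    # m/s in air
FREQ = 40_000             # Hz, ultrasonic carrier

def focusing_delays(transducer_positions, focal_point):
    """Per-transducer time delays (seconds) that focus the array on one point."""
    d = np.linalg.norm(np.asarray(transducer_positions) - np.asarray(focal_point),
                       axis=1)
    travel = d / SPEED_OF_SOUND            # time for each wave to reach the focus
    return travel.max() - travel           # fire the farthest transducer first

# 4 x 4 grid of transducers spaced 1 cm apart, focal point 20 cm above the centre
grid = [(x * 0.01, y * 0.01, 0.0) for x in range(4) for y in range(4)]
delays = focusing_delays(grid, focal_point=(0.015, 0.015, 0.20))
phases = (delays * FREQ * 2 * np.pi) % (2 * np.pi)   # equivalent phase offsets
print(np.round(phases, 2))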

More information:

07 October 2013

Putting Face on Robots

A new study from the Georgia Institute of Technology finds that older and younger people have varying preferences about what they would want a personal robot to look like. And they change their minds based on what the robot is supposed to do. Participants were shown a series of photos portraying either robotic, human or mixed human-robot faces and were asked to select the one that they would prefer for their robot’s appearance. 


Most college-aged adults in the study preferred a robotic appearance, although they were also generally open to the others. However, nearly 60 percent of older adults said they would want a robot with a human face, and only 6 percent of them chose one with a mixed human-robot appearance. But the preferences in both age groups wavered a bit when participants were told the robot was assisting with personal care, chores, social interaction or for helping to make decisions.

More information:

06 October 2013

Smart Cities

An old port city on Spain's Bay of Biscay has emerged as a prototype for high-tech smart cities worldwide. Blanketed with sensors, it's changing the way its residents live. Apart from the occasional ferry from Britain, this picturesque town doesn't attract many foreign visitors. It turned quite a few heads, then, when delegations from Google, Microsoft and the Japanese government all landed there recently to walk the city streets.


What they've been coming to see, though, is mostly invisible: 12,000 sensors buried under the asphalt, affixed to street lamps and atop city buses. Silently they survey parking availability, and whether the surf's up at local beaches. They can even tell garbage collectors which bins are full, and automatically dim street lights when no one's around. Santander is one of four cities - the three others are in Britain, Germany and Serbia - where sensors are being tested.

More information:

04 October 2013

Self-Assembling Robots

Known as M-Blocks, the robots are cubes with no external moving parts. Nonetheless, they’re able to climb over and around one another, leap through the air, roll across the ground, and even move while suspended upside down from metallic surfaces. Inside each M-Block is a flywheel that can reach speeds of 20,000 revolutions per minute; when the flywheel is braked, it imparts its angular momentum to the cube. On each edge of an M-Block, and on every face, are cleverly arranged permanent magnets that allow any two cubes to attach to each other. Researchers studying reconfigurable robots have long used an abstraction called the sliding-cube model. In this model, if two cubes are face to face, one of them can slide up the side of the other and without changing orientation, slide across its top. The sliding-cube model simplifies the development of self-assembly algorithms, but the robots that implement them tend to be much more complex devices. To compensate for its static instability, the researchers’ robot relies on some ingenious engineering. On each edge of a cube are two cylindrical magnets, mounted like rolling pins. 


When two cubes approach each other, the magnets naturally rotate, so that north poles align with south, and vice versa. Any face of any cube can thus attach to any face of any other. The cubes’ edges are also beveled, so when two cubes are face to face, there’s a slight gap between their magnets. When one cube begins to flip on top of another, the bevels, and thus the magnets, touch. The connection between the cubes becomes much stronger, anchoring the pivot. On each face of a cube are four more pairs of smaller magnets, arranged symmetrically, which help snap a moving cube into place when it lands on top of another. But the researchers believe that a more refined version of their system could prove useful even at something like its current scale. Armies of mobile cubes could temporarily repair bridges or buildings during emergencies, or raise and reconfigure scaffolding for building projects. They could assemble into different types of furniture or heavy equipment as needed. And they could swarm into environments hostile or inaccessible to humans, diagnose problems, and reorganize themselves to provide solutions.
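
A back-of-the-envelope sketch of the flywheel principle: the 20,000 rpm figure comes from the article, but the flywheel mass and radius below are guesses for illustration, not MIT's specifications.

# Back-of-the-envelope flywheel estimate. The 20,000 rpm figure is from the
# article; the mass and radius are assumed values for illustration only.
import math

rpm = 20_000
omega = rpm * 2 * math.pi / 60          # angular speed in rad/s (~2094 rad/s)

mass = 0.05                             # kg, assumed flywheel mass
radius = 0.02                           # m, assumed flywheel radius
inertia = 0.5 * mass * radius**2        # solid disc: I = 1/2 m r^2

momentum = inertia * omega              # angular momentum handed to the cube on braking
print(f"L ~= {momentum:.4f} kg m^2/s")  # the resulting impulsive torque flips the cube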

More information:

29 September 2013

Human Robot Getting Closer

A robot that feels, sees and, in particular, thinks and learns like us. University of Twente (UT) researchers want to implement the cognitive process of the human brain in robots. The research should lead to the arrival of the latest version of the iCub robot in Twente. This human robot (humanoid) blurs the boundaries between robot and human.


Decades of scientific research into cognitive psychology and the brain have given us knowledge about language, memory, motor skills and perception. We can now use that knowledge in robots, but this research goes even further. The application of cognition in technical systems should also mean that the robot learns from its experiences and the actions it performs.

More information:

28 September 2013

Virtualizer

Head-mounted devices that display three-dimensional images according to one’s viewing direction, allowing users to lose themselves in computer-generated worlds, are already commercially available. However, it has not yet been possible to walk through these virtual realities without at some point running into the very real walls of the room. A team of researchers at the Vienna University of Technology has now built a ‘Virtualizer’, which allows for an almost natural walk through virtual spaces. The user is held in place by a belt in a support frame, and the feet glide across a low-friction surface. Sensors pick up these movements and feed the data into the computer. The team hopes that the Virtualizer will enter the market in 2014. Various ideas have been put forward on the digitalization of human motion. Markers can be attached to the body, which are then tracked with cameras – this is how motion capture for animated movies is achieved. For this, however, expensive equipment is needed, and the user is confined to a relatively small space.


Prototypes using conveyor belts have not yet yielded satisfactory results. In the Virtualizer’s metal frame, the user is kept in place with a belt. The smooth floor plate contains sensors, picking up every step. Rotations of the body are registered by the belt. The Virtualizer can be used with standard 3D headgear, which picks up the user’s viewing direction and displays 3D pictures accordingly. This is independent of the leg motion, so running in one direction and looking in another becomes possible. Moving through virtual realities using a keyboard or a joystick can lead to a discrepancy between visual perception and other body sensations. The prototype developed at TU Vienna already works very well – only some minor adjustments are still to be made. The Virtualizer has already caused quite a stir and is scheduled to enter the market as soon as 2014. The price cannot be determined yet. The product should lead virtual reality out of the research labs and into the gamers’ living rooms.

More information:

26 September 2013

Interior 3D Map of Pisa

Developed by the CSIRO, Australia's national science agency, the Zebedee technology is a handheld 3D mapping system incorporating a laser scanner that sways on a spring to capture millions of detailed measurements of a site as fast as an operator can walk through it. Specialised software then converts the system's laser data into a detailed 3D map.


While the tower's cramped stairs and complex architecture have prevented previous mapping technologies from capturing its interior, Zebedee has enabled the researchers to finally create the first comprehensive 3D map of the entire building. Within 20 minutes researchers were able to use Zebedee to complete an entire scan of the building’s interior.

More information:

24 September 2013

Exoskeletons Are Here

Although literally conceived as a motorized suit of armor reminiscent of medieval knights, the exoskeleton has come to represent a true technological-biological fusion, the most complicated neuroprosthetic ever imagined. The breadth and scope of sci-fi exoskeletal armor is nicely captured in the sweeping and grand scene near the end of the 2013 Marvel Studios production ‘Iron Man 3’. When we want to produce a movement, complex commands related to motor planning and organization send signals to the motor output areas of the brain. These commands then travel down the spinal cord to the appropriate level – higher up for arm movements and lower down for legs. At the spinal cord level are the cells controlling the muscles that need to be activated. From the spinal cord the commands go to the muscles needed to produce the movement. All of this relaying takes time and introduces control delays that would make armored superhero fights difficult.


Because of these delays, the ultimate objective should be to create neuroprosthetics controlled by brain commands. This reduces all the transmission delays found in using commands downstream in the spinal cord or at the muscle level. But it also currently requires inserting electrodes into the nervous system. Instead, a good starting point for now is to use the commands from the brain that are relayed and detected as electrical activity (electromyography, EMG) in muscle. These EMG signals can be detected quite readily with electrodes placed on the skin over the muscles of interest. The EMG activity is a pretty faithful proxy for what your nervous system is trying to get your muscles to do. It’s kind of like a biological form of ‘wire tapping’ to ‘listen’ in to the commands sent to muscle. Many different neuroprosthetics have been developed to use EMG control signals in order to guide the activity of the motors in the prosthetic itself.
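
The standard EMG-to-command pipeline the article alludes to (rectify the raw signal, smooth it into an envelope, and scale the envelope into a motor command) can be sketched as follows; the filter constants and scaling are illustrative assumptions.

# Sketch of a standard surface-EMG pipeline: rectify, smooth into an envelope,
# then map the envelope to a 0..1 motor command. Constants are illustrative.
import numpy as np

def emg_envelope(raw_emg, fs=1000, smoothing_ms=100):
    """Rectify and low-pass (moving average) the EMG to get its envelope."""
    rectified = np.abs(raw_emg - np.mean(raw_emg))   # remove offset, rectify
    window = max(1, int(fs * smoothing_ms / 1000))
    kernel = np.ones(window) / window
    return np.convolve(rectified, kernel, mode="same")

def motor_command(envelope, rest_level, max_level):
    """Map envelope amplitude to a 0..1 command for the exoskeleton motor."""
    cmd = (envelope - rest_level) / (max_level - rest_level)
    return np.clip(cmd, 0.0, 1.0)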

More information:

23 September 2013

VS-Games 2013 Short Paper

On Friday 13th September, I presented a paper co-authored with my students Athanasios Vourvopoulos and Alina Ene, as well as a colleague from the SGI, Dr. Panagiotis Petridis, titled ‘Assessing Brain-Computer Interfaces for Controlling Serious Games’. The paper was presented at the 5th International Conference on Games and Virtual Worlds for Serious Applications (VS-Games 2013), at Bournemouth, UK, 11-13 September, 2013.


The paper examined how to fully interact with serious games in noisy environments using only non-invasive EEG-based information. Two different EEG-based BCI devices were used and results indicated that although BCI devices are still in their infancy, they offer the potential of being used as alternative game interfaces prior to some familiarisation with the device and in several cases a certain degree of calibration.

A draft version of the paper can be downloaded from here.

20 September 2013

VS-Games 2013 Full Paper

On Thursday 12th September, I presented a paper co-authored with my PhD student Stuart O'Connor and Dr. Christopher Peters, titled ‘A Study into Gauging the Perceived Realism of Agent Crowd Behaviour within a Virtual Urban City’. The paper was presented at the 5th International Conference on Games and Virtual Worlds for Serious Applications (VS-Games 2013), at Bournemouth, UK, 11-13 September, 2013.


The paper examined the development of a crowd simulation in a virtual city, and a perceptual experiment to identify features of behaviour which can be linked to perceived realism. The perceptual experimentation methodologies presented can be adapted and potentially utilised to test other types of crowd simulation, for application within computer games or more specific simulations such as for urban planning or health and safety purposes.

A draft version of the paper can be downloaded from here.

15 September 2013

Cloth Imaging

Creating a computer graphic model of a uniform material like woven cloth or finished wood can be done by modeling a small volume, like one yarn crossing, and repeating it over and over, perhaps with minor modifications for color or brightness. But the final rendering step, where the computer creates an image of the model, can require far too much calculating for practical use. Cornell graphics researchers have extended the idea of repetition to make the calculation much simpler and faster. Rendering an image of a patterned silk tablecloth the old way took 404 hours of calculation. The new method cut the time to about one-seventh of that, and with thicker fabrics, computing was speeded up 10 or 12 times. A computer graphic image begins with a 3D model of the object’s surface. To render an image, the computer must calculate the path of light rays as they are reflected from the surface. Cloth is particularly complicated because light penetrates into the surface and scatters a bit before emerging and traveling to the eye. It’s the pattern of this scattering that creates different highlights on silk, wool or felt. They previously used high-resolution CT scans of real fabric to guide them in building micron-resolution models. 


Brute-force rendering computes the path of light through every block individually, adjusting at each step for the fact that blocks of different color and brightness will have different scattering patterns. The new method pre-computes the patterns of a set of example blocks – anywhere from two dozen to more than 100 – representing the various possibilities. These become a database the computer can consult as it processes each block of the full image. For each type of block, the pre-computation shows how light will travel inside the block and pass through the sides to adjacent blocks. In tests, the researchers first rendered images of plain-colored fabrics, showing that the results compared favorably in appearance with the old brute-force method. Then they produced images of patterned tablecloths and pillows. Patterned fabrics require larger databases of example blocks, but the researchers noted that once the database is computed, it can be re-used for numerous different patterns. The method could be employed on other materials besides cloth, the researchers noted, as long as the surface can be represented by a small number of example blocks. They demonstrated with images of finished wood and a coral-like structure.
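
The precompute-then-look-up pattern the article describes can be sketched in outline (this shows the general pattern, not Cornell's renderer): a handful of example blocks carry precomputed scattering results, and rendering each block of the full model becomes a table lookup instead of a fresh light-transport simulation.

# Outline of the precompute-then-look-up pattern (not Cornell's renderer):
# simulate light transport once per distinct example block, then reuse the
# stored results for every occurrence of that block in the full cloth model.

def precompute_block_transfer(example_blocks, simulate_scattering):
    """Run the expensive simulation once per distinct example block."""
    return {block_id: simulate_scattering(block)
            for block_id, block in example_blocks.items()}

def render(model_blocks, transfer_table, shade):
    """Render the full model by reusing each block's precomputed transfer."""
    image = []
    for block_id, tint in model_blocks:            # e.g. pattern colour per block
        image.append(shade(transfer_table[block_id], tint))
    return image

# Hypothetical usage: two example blocks reused thousands of times over a
# patterned tablecloth, so the costly `simulate_scattering` runs only twice.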

More information:

09 September 2013

Touch Goes Digital

Researchers at the University of California, San Diego report a breakthrough in technology that could pave the way for digital systems to record, store, edit and replay information in a dimension that goes beyond what we can see or hear: touch. Touch was largely bypassed by the digital revolution, because it seemed too difficult to replicate what analog haptic devices can produce.


In addition to uses in health and medicine, the communication of touch signals could have far-reaching implications for education, social networking, e-commerce, robotics, gaming, and military applications, among others. The sensors and sensor arrays reported in the research are also fully transparent, which makes them particularly interesting for touch-screen applications in mobile devices.

More information:

28 August 2013

MasIE 2013 Paper

Last month, Dr. Stella Sylaiou presented a co-authored paper titled ‘Exploring the effect of diverse technologies incorporated in virtual museums on visitors’ perceived sense of presence’. The paper was presented at the International Workshop on Museums as Intelligent Environments (MasIE), in conjunction with the 9th International Conference on Intelligent Environments - IE'13 in Athens, Greece.


The paper presented the preliminary results of a research project exploring how the diverse technologies used in Virtual Museums affect visitors’ perceived sense of presence. The results of a double-phased statistical analysis were discussed, which investigated the technological conditions under which a Virtual Museum enhances the visitors’ experience of presence.

A draft version of the paper can be downloaded from here.

24 August 2013

Locating the Brain's GPS

Using direct human brain recordings, a research team from Drexel University, the University of Pennsylvania, UCLA and Thomas Jefferson University has identified a new type of cell in the brain that helps people to keep track of their relative location while navigating an unfamiliar environment. The grid cell, which derives its name from the triangular grid pattern in which the cell activates during navigation, is distinct among brain cells because its activation represents multiple spatial locations. This behavior is how grid cells allow the brain to keep track of navigational cues such as how far you are from a starting point or your last turn. This type of navigation is called path integration. It is critical that this grid pattern is so consistent because it shows how people can keep track of their location even in new environments with inconsistent layouts, researchers in Drexel's School of Biomedical Engineering, Science and Health Systems noted. Researchers were able to discern these cells because they had the rare opportunity to study brain recordings of epilepsy patients with electrodes implanted deep inside their brains as part of their treatment. Their work is being published in the latest edition of Nature Neuroscience.


During brain recording, the 14 study participants played a video game that challenged them to navigate from one point to another to retrieve objects and then recall how to get back to the places where each object was located. The participants used a joystick to ride a virtual bicycle across a wide-open terrain displayed on a laptop by their hospital beds. After participants made trial runs where each of the objects was visible in the distance, they were put back at the center of the map and the objects were made invisible until the bicycle was right in front of them. The researchers then asked the participants to travel to particular objects in different sequences. The team studied the relation between how the participants navigated in the video game and the activity of individual neurons. Each grid cell responds at multiple spatial locations that are arranged in the shape of a grid. This triangular grid pattern thus appears to be a brain pattern that plays a fundamental role in navigation. Without grid cells, it is likely that humans would frequently get lost or have to navigate based only on landmarks. Grid cells are thus critical for maintaining a sense of location in an environment.

More information:

22 August 2013

Groovy Hologram

Applied physicists at the Harvard School of Engineering and Applied Sciences (SEAS) have demonstrated that they can change the intensity, phase, and polarization of light rays using a hologram-like design decorated with nanoscale structures. As a proof of principle, the researchers have used it to create an unusual state of light called a radially polarized beam, which -- because it can be focused very tightly -- is important for applications like high-resolution lithography and for trapping and manipulating tiny particles like viruses.


This is the first time a single, simple device has been designed to control these three major properties of light at once. Using these novel nanostructured holograms, they have converted conventional, circularly polarized laser light into radially polarized beams at wavelengths spanning the technologically important visible and near-infrared light spectrum. Holograms find many applications in security, like the holographic panels on credit cards and passports, and new digital hologram-based data-storage methods are currently being designed to potentially replace current systems.

More information: