30 December 2013

BCI Pong Game

Few video games are more basic than Pong, but Cornell University researchers built a custom electroencephalography (EEG) device so they could control the game's on-screen paddle with their minds. The alpha waves that EEG machines read are faint electrical signals. 

 
They ran the EEG readings through an amplification circuit to filter and boost the signals. Spiking alpha waves produced during relaxation move a player's paddle up, and smaller waves, indicating concentration, move it down. The size of the waves determines how much the paddle moves.
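
As a rough illustration of that mapping – not the Cornell team's circuit or code – the sketch below estimates alpha-band power from a short window of EEG samples and turns it into a paddle velocity; the sampling rate, band edges and threshold are all assumed values.

```python
# Minimal sketch (hypothetical, not the Cornell design): estimate alpha-band
# power from one window of amplified EEG samples and map it to a paddle
# velocity. Sampling rate, band edges and threshold are assumed values.
import numpy as np

FS = 256          # assumed sampling rate in Hz
ALPHA = (8, 12)   # alpha band in Hz

def alpha_power(window: np.ndarray) -> float:
    """Mean spectral power in the alpha band for one window of samples."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    band = (freqs >= ALPHA[0]) & (freqs <= ALPHA[1])
    return float(spectrum[band].mean())

def paddle_velocity(window: np.ndarray, threshold: float = 1.0) -> float:
    """Strong alpha (relaxation) moves the paddle up, weak alpha (concentration)
    moves it down; the magnitude scales with distance from the threshold."""
    return alpha_power(window) - threshold   # positive -> up, negative -> down
```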

More information:

27 December 2013

Never Forget A Face

Do you have a forgettable face? Many of us go to great lengths to make our faces more memorable, using makeup and hairstyles to give ourselves a more distinctive look. Now your face could be instantly transformed into a more memorable one without the need for an expensive makeover, thanks to an algorithm developed by researchers in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). The algorithm, which makes subtle changes to various points on the face to make it more memorable without changing a person’s overall appearance, was unveiled earlier this month at the International Conference on Computer Vision in Sydney. The system could ultimately be used in a smartphone app to allow people to modify a digital image of their face before uploading it to their social networking pages. It could also be used for job applications, to create a digital version of an applicant’s face that will more readily stick in the minds of potential employers. Conversely, it could also be used to make faces appear less memorable, so that actors in the background of a television program or film do not distract viewers’ attention from the main actors, for example. To develop the memorability algorithm, the team first fed the software a database of more than 2,000 images.
 

Each of these images had been awarded a ‘memorability score’, based on the ability of human volunteers to remember the pictures. In this way the software was able to analyze the information to detect subtle trends in the features of these faces that made them more or less memorable to people. The researchers then programmed the algorithm to make the face as memorable as possible, but without changing the identity of the person or altering their facial attributes, such as their age, gender, or overall attractiveness. Changing the width of a nose may make a face look much more distinctive, for example, but it could also completely alter how attractive the person is, and so would fail to meet the algorithm’s objectives. When the system has a new face to modify, it first takes the image and generates thousands of copies. Each of these copies contains tiny modifications to different parts of the face. The algorithm then analyzes how well each of these samples meets its objectives. Once the algorithm finds a copy that succeeds in making the face look more memorable without significantly altering the person’s appearance, it makes yet more copies of this new image, with each containing further alterations. It then keeps repeating this process until it finds a version that best meets its objectives.
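
The description above amounts to an iterative, hill-climbing style search. The toy sketch below captures that loop, but it is not CSAIL's code: the face is reduced to a flat list of landmark coordinates, and the memorability and identity checks are placeholder functions standing in for the paper's learned models.

```python
# Toy hill-climbing loop in the spirit of the description above, not CSAIL's
# code. A "face" is a flat list of landmark coordinates; the scoring and
# identity checks below are placeholders for the paper's learned models.
import random

def memorability(face):
    # placeholder: the real system uses a regressor trained on scored photos
    return -sum((x - 0.5) ** 2 for x in face)

def same_identity(candidate, original, tolerance=0.05):
    # placeholder identity/attribute check: allow only tiny per-landmark shifts
    return all(abs(c - o) <= tolerance for c, o in zip(candidate, original))

def perturb(face, scale=0.01):
    # one "copy" of the face with tiny random modifications
    return [x + random.gauss(0.0, scale) for x in face]

def optimise_face(face, rounds=10, copies=1000):
    best = face
    for _ in range(rounds):
        candidates = [perturb(best) for _ in range(copies)]
        valid = [c for c in candidates if same_identity(c, face)]
        better = [c for c in valid if memorability(c) > memorability(best)]
        if not better:
            break                      # no acceptable improvement this round
        best = max(better, key=memorability)   # iterate from the improved copy
    return best

more_memorable = optimise_face([0.2, 0.4, 0.6, 0.8])   # toy four-landmark face
```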

More information:

22 December 2013

Meta Augmented Reality Glasses

Meta Augmented Reality Glasses are now available for pre-order on Kickstarter. A wearable computing device that combines a dual-screen 3D augmented reality display with super-low-latency gestural input, the technology allows for full mapping of the user’s environment and control of the augmented reality display. Meta is an amalgamation of a Glass-style user interface with Xbox Kinect-style spatial tracking. This combination, while not unobtrusive, allows the wearer to use her hands to interact with virtual objects layered over reality in real time. While the first generation of Meta glasses is presented as a usable developer kit for programmers and early-adopter technophiles, the concept Meta 2 shrinks the cameras to negligible size, resulting in wearable augmented reality glasses roughly the same size as present-day Google Glass.


At this time, only the Windows platform is compatible with the Meta augmented reality glasses. However, the company assures us that support for other platforms, such as OS X and Linux, is currently in development. The Meta glasses include two individual displays, one for each eye, each running at a respectable resolution of 960×540. For comparison, Google Glass’s single-eye resolution is listed at 640×360. And, unlike Google Glass, which simply provides a data-filled pop-up in the corner of one eye, the Meta immerses the user in 46 degrees (23 degrees for each eye) of augmented reality and virtual objects. The current Meta glasses developer’s kit is tethered and requires a wired connection to a Windows computer. However, the Meta 2 consumer version is expected to be wireless, a necessity, we believe, for commercial success.

More information:

18 December 2013

Leaner Fourier Transforms

The fast Fourier transform (FFT), one of the most important algorithms of the 20th century, revolutionized signal processing. The algorithm allowed computers to quickly perform Fourier transforms (fundamental operations that separate signals into their individual frequencies) leading to developments in audio and video engineering and digital data compression. But ever since its development in the 1960s, computer scientists have been searching for an algorithm to better it.
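
For readers unfamiliar with the operation itself, the short example below (using NumPy's standard FFT, not the MIT algorithm) shows a Fourier transform separating a signal into its individual frequencies; the 50 Hz and 120 Hz tones are arbitrary choices.

```python
# Toy example of a Fourier transform separating a signal into its frequencies,
# using NumPy's standard FFT (not the MIT algorithm). The 50 Hz and 120 Hz
# components are arbitrary.
import numpy as np

fs = 1024                                    # samples per second
t = np.arange(fs) / fs                       # one second of samples
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.abs(np.fft.rfft(signal))       # magnitude at each frequency bin
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
print(freqs[spectrum > 0.1 * spectrum.max()])   # -> [ 50. 120.]
```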


Last year, MIT researchers did just that, unveiling an algorithm that in some circumstances can perform Fourier transforms hundreds of times more quickly than the FFT. Recently, researchers within the Computer Science and Artificial Intelligence Laboratory (CSAIL) have gone a step further, significantly reducing the number of samples that must be taken from a given signal in order to perform a Fourier transform operation.

More information:

17 December 2013

New WAVE Display Technology

The University of California, San Diego’s new WAVE display, true to its name, is shaped like an ocean wave, with a curved wall array of thirty-five 55-inch LG commercial LCD monitors that ends in a ‘crest’ above the viewer’s head and a trough at his or her feet. The WAVE (Wide-Angle Virtual Environment), a 5x7 array of HDTVs, is 20 feet long by nearly 12 feet high. Under the leadership of researchers at the UC San Diego division of the California Institute for Telecommunications and Information Technology (Calit2) – known as the Qualcomm Institute (QI) – high-resolution computerized displays have evolved over the past decade from 2D to 3D panels and from one monitor to arrays of many monitors. They’ve transitioned from stationary structures to structures on wheels, and from thick bezels (the rim that holds the glass display) to ultra-narrow bezels. Such technology is now widely used in television newsrooms, airports and even retail stores, but not in 3D like the WAVE.

 
The WAVE was designed as part of the SCOPE project, or Scalable Omnipresent Environment, which serves as both a microscope and a telescope and enables users to explore data from the nano to the micro to the macro to the mega scale. Earlier projector-based technologies, such as the QI StarCAVE, provide the feeling of being surrounded by an image and make it possible to ‘walk through’ a model of a protein or a building, for example, but the StarCAVE requires a huge room and is not movable or replicable. By contrast, the WAVE can be erected against a standing wall and can be moved and replicated. WAVE content can be clearly viewed by 20 or more people at once, something not possible with earlier immersive displays at UCSD. Its curved aluminum structure is also a technical ‘fix’ for the problem of images on 3D passively polarized screens appearing as double images when placed in a large, flat array. With a curved array, the viewer can stand anywhere in front of the WAVE and experience excellent 3D with no visual distortion.

More information:

08 December 2013

Tongue Navigation System

Researchers have proposed a wearable system that allows paralyzed people to navigate their worlds with just flicks of their pierced tongues. The technology, still under development, could help patients disabled from the neck down access their worlds with far greater ease than current assistive systems offer – and with a tongue piercing, to boot. The Tongue Drive System (TDS) works like this: a magnetic tongue stud relays the wearer’s tongue movements to a headset, which then sends the commands to a smartphone or another WiFi-connected device. The user can control almost anything that a smartphone can – and a smartphone can do a lot, including driving a wheelchair, surfing the web, and adjusting the thermostat.
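
The sketch below illustrates the general idea – classify the stud's position from headset magnetometer readings and map it to a command – but it is not the actual TDS implementation; the calibration vectors and command set are assumptions made for the example.

```python
# Hypothetical sketch of the idea behind the TDS, not the actual implementation:
# classify the magnetic stud's position from headset magnetometer readings and
# map it to a command. Calibration vectors and commands are assumed values.
from typing import Optional
import numpy as np

# assumed calibration: a reference magnetometer vector for each tongue position
CALIBRATION = {
    "forward":  np.array([ 0.9,  0.0,  0.1]),
    "backward": np.array([-0.9,  0.0,  0.1]),
    "left":     np.array([ 0.0,  0.9,  0.1]),
    "right":    np.array([ 0.0, -0.9,  0.1]),
    "neutral":  np.array([ 0.0,  0.0,  1.0]),
}

def classify(reading: np.ndarray) -> str:
    """Return the calibrated tongue position closest to the current reading."""
    return min(CALIBRATION, key=lambda pos: np.linalg.norm(reading - CALIBRATION[pos]))

def wheelchair_command(position: str) -> Optional[str]:
    """Map a tongue position to a drive command; 'neutral' issues nothing."""
    return None if position == "neutral" else position

print(wheelchair_command(classify(np.array([0.05, 0.85, 0.15]))))   # -> left
```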


TDS is just one of a new crop of innovative assistive technologies for paralyzed patients, along with equipment that tracks eye movements, responds to voice commands, or follows neck movements. Still, these systems have distinct limitations: the neck can tire from prolonged use, background noise muddles voice commands, and eye-tracking headsets are cumbersome. Electrodes implanted in the brain have produced some good results, but they require brain surgery. In their lab tests, researchers compared TDS to one popular assistive system known as sip-and-puff. Users of that system sip or puff air into a straw connected to their wheelchair. The airflow relays commands that move the chair either forward or backward, or to either side.

More information:

07 December 2013

The Social Robot

An increasingly important part of daily life is dealing with so-called user interfaces. Whether it's a smartphone or an airport check-in system, the user's ability to get what they want out of the machine relies on their own adaptability to unfamiliar interfaces. But what if you could simply talk to a machine the way you talk to a human being? And what if the machine could also ask you questions, or even address two different people at once? These kinds of interactive abilities are being developed at KTH Royal Institute of Technology with the help of an award-winning robotic head that takes its name from the fur hat it wears. With a computer-generated, animated face that is rear-projected on a 3D mask, Furhat is actually a platform for testing various interactive technologies, such as speech synthesis, speech recognition and eye-tracking. 


The robot can conduct conversations with multiple people, turning its head and looking each person straight in the eye, while moving its animated lips in sync with its words. The project represents the third generation of spoken dialogue systems developed at KTH's Department for Speech, Music and Hearing over the last 15 years. The Furhat team aims to develop its technology for commercial use, with the help of funding from Sweden's Vinnova, a government agency that supports innovation projects. Furhat is becoming a popular research platform for scientists around the world who study human interaction with machines. It's very simple, it's potentially very cheap to make, and people want to use it in their own research areas. Furhat has also attracted attention from researchers at Microsoft and Disney.
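
A toy sketch of that multi-party turn-taking behaviour is given below; it is not the Furhat software, and scripted utterances plus print statements stand in for real speech recognition, synthesis and gaze control.

```python
# Toy multi-party turn-taking loop, not the Furhat software. Scripted
# utterances stand in for speech recognition; print statements stand in for
# speech synthesis and head/gaze control.
SCRIPT = [("Anna", "What is Furhat?"), ("Ben", "Can it look at me?")]

def look_at(person):
    print(f"[Furhat turns its head toward {person}]")

def speak(text):
    print(f"Furhat: {text}")

def dialogue_loop(utterances):
    for speaker, utterance in utterances:              # each recognised utterance
        look_at(speaker)                               # address whoever spoke
        speak(f"{speaker}, you asked: '{utterance}'")  # placeholder reply

dialogue_loop(SCRIPT)
```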

More information:

03 December 2013

Personalised Virtual Birth Simulator

Computer scientists from the University of East Anglia are working to create a virtual birthing simulator that will help doctors and midwives prepare for unusual or dangerous births. The new programme will take into account factors such as the shape of the mother’s body and the positioning of the baby to provide patient-specific birth predictions.


The simulation software will use ultrasound data to recreate a geometric 3D model of the baby’s skull and body, as well as the mother’s body and pelvis. The programmers are also taking into account the force of the mother pushing during labour, and are even modelling a ‘virtual’ midwife’s hands that can interact with the baby’s head.
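
As a very loose illustration of the kind of force calculation such a simulator has to perform – not the UEA software, and with every constant chosen arbitrarily – the sketch below treats the baby's head as a single point mass pushed along the birth-canal axis against tissue resistance.

```python
# Very loose illustration, not the UEA software: the baby's head as a point
# mass pushed along the birth-canal axis against resistance and damping.
# Every constant below is an arbitrary assumption.
def simulate_descent(push=120.0, resistance=100.0, damping=500.0,
                     mass=3.5, dt=0.01, steps=100):
    """Euler-integrate the head's position (metres) over steps * dt seconds."""
    position, velocity = 0.0, 0.0
    for _ in range(steps):
        net = push - resistance - damping * velocity   # net force in newtons
        velocity += (net / mass) * dt                  # F = m * a
        position += velocity * dt
    return position

print(f"descent after 1 s of pushing: {simulate_descent() * 100:.1f} cm")
```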

More information: