29 April 2014

WearScript - JavaScript for Google Glass

WearScript is a JavaScript environment that runs on Google Glass and was developed by MIT researchers. The category of wearables is still evolving. Beyond activity trackers and smartwatches, the killer wearable app has yet to be discovered, because wearables lack the lean-back or lean-forward human-machine interface (HMI) of tablets and smartphones. WearScript lets developers experiment with new user interface (UI) concepts and input devices to push beyond the HMI limits of wearables. Overblown reports about Google Glass and privacy distract from the more important discussion: how Glass micro apps can compress the time between user intent and action.


Micro apps are smaller than apps and are ephemeral: they are used in an instant and designed to disappear from the user's perception once their task is complete. Because of Glass's wearable form factor, micro apps deviate from the LCD square and touchscreen/keyboard design of smartphone, tablet, and PC apps; they are intended to be hands-free and responsive in the moment. Well-designed Glass apps use the Glass UI to let users do something they could not do with another device, and Glass's notifications are a good example of this. The best consumer-facing Google Glass experiences highlight how micro apps can take advantage of this programmable wearable form factor.

More information:

28 April 2014

Simulating the Human Brain

A core technological hurdle in neuromorphic computing involves the electrical power requirements of computing hardware. Although a human brain functions on a mere 20 watts of electrical energy, a digital computer that could approximate human cognitive abilities would require tens of thousands of integrated circuits (chips) and a hundred thousand watts of electricity or more.
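To put those figures in perspective, a back-of-the-envelope comparison of the two power budgets quoted above (treated here purely as order-of-magnitude estimates) illustrates the efficiency gap:

    # Rough efficiency comparison using the figures quoted above; both numbers
    # are order-of-magnitude estimates, not precise measurements.
    BRAIN_POWER_W = 20             # human brain, per the article
    DIGITAL_SIM_POWER_W = 100_000  # digital hardware approximating the brain

    gap = DIGITAL_SIM_POWER_W / BRAIN_POWER_W
    print(f"Digital simulation draws roughly {gap:,.0f}x the brain's power")
    # -> roughly 5,000x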


The Georgia Tech roadmap proposes a solution based on analogue computing techniques, which require far less electrical power than traditional digital computing. The more efficient analogue approach would help solve the daunting cooling and cost problems that presently make digital neuromorphic hardware systems impractical.

More information:

27 April 2014

Controlling Brain Waves to Improve Vision

By using a novel technique to test brain waves, researchers are discovering how the brain processes external stimuli that do and don't reach our awareness. The researchers used both electroencephalography (EEG) and the event-related optical signal (EROS), developed in the Cognitive Neuroimaging Laboratory. While EEG records electrical activity along the scalp, EROS uses infrared light passed through optical fibers to measure changes in the optical properties of active areas of the cerebral cortex. Because the skull lies between the EEG sensors and the brain, it can be difficult to determine exactly where signals are produced. EROS, which examines how light is scattered, can noninvasively pinpoint activity within the brain. It is based on near-infrared light and exploits the fact that when neurons are active, they swell slightly, becoming a little more transparent to light.


This allowed the researchers not only to measure activity in the brain but also to map where the alpha oscillations originate. Their discovery: the alpha waves are produced in the cuneus, located in the part of the brain that processes visual information. Alpha activity can inhibit what is processed visually, making it hard to see something unexpected. By focusing your attention and concentrating more fully on what you are experiencing, however, the brain's executive function can come into play and provide 'top-down' control, putting a brake on the alpha waves and allowing you to see things you might have missed in a more relaxed state. The researchers found that the same brain regions known to control our attention are involved in suppressing the alpha waves and improving our ability to detect hard-to-see targets.
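As a rough illustration of what quantifying alpha activity involves (this is a sketch, not the researchers' actual pipeline; the ~8-12 Hz band is the conventional alpha range and the sampling rate is a generic default), a single EEG channel can be band-pass filtered and its alpha power estimated:

    # Illustrative sketch only: estimating alpha-band (~8-12 Hz) power from one
    # EEG channel. Not the study's analysis; parameters are generic defaults.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def alpha_band_power(eeg, fs, low=8.0, high=12.0):
        """Mean power of the alpha band in an EEG trace sampled at fs Hz."""
        b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
        return np.mean(filtfilt(b, a, eeg) ** 2)

    # Synthetic check: a 10 Hz rhythm buried in noise shows clear alpha power.
    fs = 250
    t = np.arange(0, 10, 1 / fs)
    eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + np.random.randn(t.size)
    print(f"alpha-band power: {alpha_band_power(eeg, fs):.2f}")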

More information:

26 April 2014

Auto-Pilot Software

The Defense Advanced Research Projects Agency (DARPA) will in May detail a new program called Aircrew Labor In-Cockpit Automation System (ALIAS). The program would build on what the agency called the considerable advances made in aircraft automation systems over the past 50 years, as well as advances in remotely piloted aircraft automation, to help reduce pilot workload, augment mission performance and improve aircraft safety. Airliners and military aircraft in particular have evolved over decades to include ever more automated capabilities, improving mission success and safety. Easy-to-use touch and voice interfaces could enable supervisor-ALIAS interaction.


These aircraft still present challenging and complex interfaces to their operators, who can experience extreme workload during emergencies and other unexpected situations. Avionics and software upgrades can help, but they can cost tens of millions of dollars per aircraft, which limits the rate at which new automation capabilities can be developed, tested and fielded. As an automation system, ALIAS would execute a planned mission from takeoff to landing, even in the face of contingency events such as aircraft system failures. The ALIAS system would include persistent state monitoring and rapid procedure recall, and would provide a potential means to further enhance flight safety.
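To make the 'persistent state monitoring and rapid procedure recall' idea concrete, here is a deliberately simplified sketch; the system names, failure labels and checklist steps are invented for illustration and are not part of any published ALIAS design:

    # Toy illustration of persistent state monitoring with procedure recall.
    # All system names, failure labels and checklist steps are made up.
    CHECKLISTS = {
        "ENGINE_FIRE": ["throttle to idle", "fuel cutoff", "fire bottle", "land as soon as possible"],
        "HYDRAULIC_FAILURE": ["reduce airspeed", "select backup pump", "plan a long final"],
    }

    def monitor(sensor_readings):
        """Scan (system, status) readings and recall the matching checklist on a failure."""
        for system, status in sensor_readings:
            if status == "OK":
                continue
            steps = CHECKLISTS.get(status, ["maintain aircraft control", "alert crew"])
            print(f"[{system}] contingency '{status}' detected; running checklist:")
            for step in steps:
                print(f"  - {step}")

    monitor([("engine_1", "OK"), ("hydraulics", "HYDRAULIC_FAILURE")])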

More information:

19 April 2014

Improving Human-Robot Connection

Researchers are programming robots to communicate with people using human-like body language and cues, an important step toward bringing robots into homes. Researchers at the University of British Columbia enlisted the help of a human-friendly robot named Charlie to study the simple task of handing an object to a person. Past research has shown that people have difficulty figuring out when to reach out and take an object from a robot because robots fail to provide appropriate nonverbal cues.


Researchers tested three variations of this interaction with Charlie and 102 study participants. Programming the robot to use eye gaze as a nonverbal cue made the handover more fluid. Researchers found that people reached out to take the water bottle sooner in scenarios where the robot moved its head to look at the area where it would hand over the bottle, or looked at the handover location and then up at the person to make eye contact.
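A minimal sketch of the gaze-then-handover sequencing described above; the robot interface used here (look_at, extend_arm and so on) is hypothetical and stands in for whatever API actually drives Charlie:

    # Sketch of a gaze-cued handover. The robot methods below are hypothetical.
    def handover(robot, handover_location, person_face, use_gaze_cue=True):
        if use_gaze_cue:
            robot.look_at(handover_location)   # signal where the object will appear
            robot.look_at(person_face)         # then make eye contact
        robot.extend_arm(handover_location)    # present the object
        robot.wait_for_grasp(timeout_s=5)      # participants reached sooner with the cue
        robot.release()

    class FakeRobot:
        """Console stand-in so the sketch runs without hardware."""
        def look_at(self, target): print(f"gaze -> {target}")
        def extend_arm(self, target): print(f"extend arm -> {target}")
        def wait_for_grasp(self, timeout_s): print(f"waiting up to {timeout_s}s for grasp")
        def release(self): print("release object")

    handover(FakeRobot(), "handover point", "participant's face")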

More information:

18 April 2014

Bringing History and the Future to Life

Have you ever wished you had a virtual time machine that could show you how your street looked last century? Or have you wanted to see how your new furniture might look, before you’ve even bought it? Thanks to an EU research project, you can now do just that. The project was designed to appeal to people familiar with the neighbourhood as well as those who are interested in Grenoble’s rich cultural heritage and human history.


Participants can use a tablet or smartphone to look at the city through a virtual lens. The modern-day scene they see through their device's camera is overlaid with historical photographs and 3D reconstructions of ancient buildings, allowing users to look at their surroundings while moving backwards through time. Local schoolchildren have collected photographs and memories from the past in order to preserve them for future generations.
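The core of the 'virtual lens' effect is blending a historical image over the live camera frame. Below is a minimal sketch using OpenCV; the file names and blend weight are placeholders, and this is not the project's actual software:

    # Blend a historical photograph over the current camera view (illustrative only).
    import cv2

    def overlay_history(camera_frame, historical_photo, alpha=0.5):
        """Alpha-blend a historical image, resized to match, over the camera frame."""
        h, w = camera_frame.shape[:2]
        historical = cv2.resize(historical_photo, (w, h))
        return cv2.addWeighted(camera_frame, 1.0 - alpha, historical, alpha, 0)

    frame = cv2.imread("street_today.jpg")      # placeholder file names
    old_photo = cv2.imread("street_1900.jpg")
    if frame is not None and old_photo is not None:
        cv2.imwrite("street_blended.jpg", overlay_history(frame, old_photo))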

More information:

17 April 2014

Orienteering for Robots

Suppose you're trying to navigate an unfamiliar section of a big city, and you're using a particular cluster of skyscrapers as a reference point. Traffic and one-way streets force you to take some odd turns, and for a while you lose sight of your landmarks. When they reappear, in order to use them for navigation, you have to be able to identify them as the same buildings you were tracking before, as well as work out your orientation relative to them. That type of re-identification is second nature for humans, but it's difficult for computers. MIT researchers have developed a new algorithm that could make it much easier by identifying the major orientations in 3D scenes. The same algorithm could also simplify the problem of scene understanding, one of the central challenges in computer vision research.


The algorithm is primarily intended to aid robots navigating unfamiliar buildings, not motorists navigating unfamiliar cities, but the principle is the same. It works by identifying the dominant orientations in a given scene, which it represents as sets of axes, called ‘Manhattan frames’, embedded in a sphere. As a robot moved, it would, in effect, observe the sphere rotating in the opposite direction, and could gauge its orientation relative to the axes. Whenever it wanted to reorient itself, it would know which of its landmarks’ faces should be toward it, making them much easier to identify. As it turns out, the same algorithm also drastically simplifies the problem of plane segmentation, or deciding which elements of a visual scene lie in which planes, at what depth.
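The article does not spell out MIT's algorithm, but the underlying idea, finding a small set of mutually orthogonal dominant directions ('Manhattan frame' axes) and labelling scene surfaces by the axis they align with, can be conveyed with a greatly simplified stand-in that works on surface normals from a depth sensor:

    # Greatly simplified stand-in for Manhattan-frame estimation: recover three
    # dominant, mutually orthogonal directions from noisy surface normals and
    # label each surface by the axis it aligns with. Not MIT's actual algorithm.
    import numpy as np

    def dominant_axes(normals):
        """Greedy orthogonal frame aligned with the most common normal directions."""
        n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
        cov = n.T @ n                          # sign-invariant orientation statistics
        x = np.linalg.eigh(cov)[1][:, -1]      # strongest direction
        residual = n - np.outer(n @ x, x)      # remove that component
        y = np.linalg.eigh(residual.T @ residual)[1][:, -1]
        y -= (y @ x) * x                       # enforce orthogonality numerically
        y /= np.linalg.norm(y)
        return np.stack([x, y, np.cross(x, y)])

    def segment_by_axis(normals, axes):
        """Assign every normal to the frame axis it is most parallel to."""
        return np.argmax(np.abs(normals @ axes.T), axis=1)

    # Toy scene: a floor (200 normals), one wall (100) and another wall (50),
    # each with a little noise.
    rng = np.random.default_rng(0)
    floor = [0, 0, 1] + 0.05 * rng.standard_normal((200, 3))
    wall1 = [1, 0, 0] + 0.05 * rng.standard_normal((100, 3))
    wall2 = [0, 1, 0] + 0.05 * rng.standard_normal((50, 3))
    scene = np.vstack([floor, wall1, wall2])
    labels = segment_by_axis(scene, dominant_axes(scene))
    print(np.bincount(labels))   # roughly [200, 100, 50]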

More information:

11 April 2014

Google Glass For Parkinson's Patients

At Newcastle University in England, researchers are examining whether Google Glass can help Parkinson's patients monitor their symptoms and be more mobile. In one small study, researchers held workshops with patients with Parkinson's disease and then let them use Google Glass at home. Parkinson's disease is a progressive neurological condition that results in a loss of motor control, including rigidity, tremors and 'bradykinesia', or slowness of movement. The disease affects up to 10 million people, usually those over 50. Medication can help control symptoms, but users have to be careful about timing their doses so they don't risk side effects that can lead to exacerbated tremors.

Google Glass is like working a mobile phone with boxing gloves on. You can take a photograph, record a video and search the Internet. You can make a call and send a text. However, the arguments people made about how Google Glass could invade privacy were actually positive arguments for its use as an assistive device. One main focus would be to use the device to monitor symptoms. Small sensors in the computer could measure eye and head movement and alert users when they start to exhibit more symptoms, so they can either take more medication or get to a safe place before their symptoms return and render them immobile.
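As an illustration of what such monitoring might look like (a sketch, not the Newcastle team's method: the 4-6 Hz band is the commonly cited Parkinsonian resting-tremor range, and the sampling rate and alert threshold are arbitrary placeholders), head-motion data could be checked for rising tremor-band power:

    # Illustrative tremor monitor for a head-worn motion sensor. The band,
    # threshold and sampling rate are placeholders, not values from the study.
    import numpy as np
    from scipy.signal import welch

    def tremor_band_power(gyro_signal, fs, low=4.0, high=6.0):
        """Approximate power in the tremor band of an angular-velocity trace."""
        freqs, psd = welch(gyro_signal, fs=fs, nperseg=min(len(gyro_signal), 256))
        band = (freqs >= low) & (freqs <= high)
        return psd[band].sum() * (freqs[1] - freqs[0])

    def should_alert(gyro_signal, fs, threshold=0.1):
        """Hypothetical rule: alert the wearer if tremor-band power crosses a threshold."""
        return tremor_band_power(gyro_signal, fs) > threshold

    # Synthetic check: a 5 Hz oscillation on top of noise should trigger the alert.
    fs = 100
    t = np.arange(0, 20, 1 / fs)
    shaky = 1.5 * np.sin(2 * np.pi * 5 * t) + 0.2 * np.random.randn(t.size)
    print(should_alert(shaky, fs))   # True for this synthetic trace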

More information: