27 March 2014

Smartphones Create 3D Maps

A project at the University of Minnesota would allow a user to hold up a smartphone and create indoor maps in three dimensions. A $1.35 million grant from Google is helping to pay for it. The work is part of the company’s recently announced Project Tango, a cellphone optimized for 3D mapping. Homeowners could use the software to create a virtual tour of their houses before putting them up for sale.

The software could also help the blind walk through a building or aid a drone aircraft in navigating. While the software is being designed to work on a prototype of the new Google smartphone, it will also work on existing smartphones. Notably, it uses relatively little processing power, about as much as the game Angry Birds.

More information:

24 March 2014

Pocket Diagnosis

A recently developed mobile phone application could make monitoring conditions such as diabetes, kidney disease, and urinary tract infections much clearer and easier for both patients and doctors, and could eventually be used to slow or limit the spread of pandemics in the developing world. The app, developed by researchers at the University of Cambridge, accurately measures colour-based, or colorimetric, tests for use in home, clinical or remote settings, and enables the transmission of medical data from patients directly to health professionals.

Decentralisation of healthcare through low-cost and highly portable point-of-care diagnostics has the potential to overcome current limitations in patient screening. However, diagnosis can be hindered by inadequate infrastructure and shortages of skilled healthcare workers, particularly in the developing world. Overcoming such challenges with accessible diagnostics could reduce the burden of disease on healthcare workers.

Due to their portability, compact size and ease of use, colorimetric tests are widely used for medical monitoring, drug testing and environmental analysis in a range of different settings throughout the world. The tests, typically in the form of small strips, work by producing a colour change in a solution: the intensity of the colour produced indicates the concentration of the substance being measured. Especially when used in a home or remote setting, however, these tests can be difficult to read accurately. False readings are very common, which can result in erroneous diagnosis or treatment.

Specialised laboratory equipment such as spectrophotometers or test-specific readers can be used to automate the readouts with high sensitivity; however, these are costly and bulky. The new app, Colorimetrix, makes accurate reading of colorimetric tests much easier, using nothing more than a mobile phone.

The app uses the phone’s camera and an algorithm to convert data from colorimetric tests into a numerical concentration value on the phone’s screen within a few seconds. After testing urine, saliva or other bodily fluid with a colorimetric test, the user simply takes a picture of the test with the phone’s camera. The app analyses the colours of the test, compares them with a pre-recorded calibration, and displays a numerical result on the phone’s screen. The result can then be stored, sent to a healthcare professional, or directly analysed by the phone for diagnosis.

The app can be used in home, clinical, or resource-limited settings, and is available for both Android and iOS operating systems. It has been shown to accurately report glucose, protein and pH concentrations from commercially available urine test strips without requiring any external hardware, the first time that a mobile phone app has been used in this way in a laboratory setting. Details were recently published in the journal Sensors and Actuators B: Chemical. Beyond laboratory applications, the app could also be used by patients to monitor chronic conditions such as diabetes, or as a public health tool, by enabling the transmission of medical data to health professionals in real time.
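The general idea of a camera-based colorimetric readout can be sketched in a few lines. The Colorimetrix algorithm itself is not published in this article, so the calibration values, the choice of the green channel as the colour metric, and the linear interpolation below are all illustrative assumptions, not the app's actual method:

```python
# Hypothetical sketch of a colorimetric readout. The calibration table,
# the use of the green channel, and the linear interpolation are assumed
# for illustration; they are not the published Colorimetrix algorithm.

# Pre-recorded calibration: mean green-channel intensity (0-255) of the
# test pad at known glucose concentrations (mmol/L). In this invented
# example the dye darkens (green drops) as concentration rises.
CALIBRATION = [
    (220, 0.0),   # pale pad  -> no glucose
    (180, 2.8),
    (140, 5.5),
    (100, 11.0),
    (60, 28.0),   # dark pad  -> high glucose
]

def mean_green(pixels):
    """Average green channel over the sampled (r, g, b) pixels."""
    return sum(g for _, g, _ in pixels) / len(pixels)

def concentration_from_green(green):
    """Linearly interpolate the calibration curve (clamped at the ends)."""
    points = sorted(CALIBRATION)  # ascending green intensity
    if green <= points[0][0]:
        return points[0][1]
    if green >= points[-1][0]:
        return points[-1][1]
    for (g0, c0), (g1, c1) in zip(points, points[1:]):
        if g0 <= green <= g1:
            t = (green - g0) / (g1 - g0)
            return c0 + t * (c1 - c0)

def read_strip(pixels):
    """Camera pixels of the test pad -> numerical concentration."""
    return round(concentration_from_green(mean_green(pixels)), 2)
```

In practice the app would also have to correct for ambient lighting and white balance before comparing against the calibration, which is the hard part of doing this reliably on a phone.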

More information:

23 March 2014

A Robot For Mars

Valkyrie stands more than six feet tall, weighs 286 pounds, and has an 80-inch wingspan. Designed at the NASA Johnson Space Center (JSC), Valkyrie competed in the DARPA Robotics Challenge (DRC) trial round in December 2013, with hopes of one day setting foot on Mars. Drawing on a heritage of building humanoid robots, the JSC team had to overcome many challenges throughout the process, primarily dealing with gravity.

To begin the design of this complex machine, the team started with paper sketches, exploring a number of different configurations. However, because of funding constraints and time restrictions (and a government furlough mid-project), the team didn’t build a prototype. Instead, Valkyrie was built up joint by joint. The software was developed separately from the beginning, in parallel with the hardware, and was not integrated with the physical robot until the end.

More information:

20 March 2014

Project Morpheus HMD

A few days ago, Sony Computer Entertainment announced Project Morpheus, a virtual reality headset that works with the company’s PlayStation 4 video game console. The headset will fool its wearer into believing they have entered a simulated 3D world and, potentially, bring the science-fiction dreams of the 1990s to the consumer market. Project Morpheus has been in development for the past three years and is still in prototype form. Sony made no mention of a launch date or price point and, as such, the product is unlikely to be the first to market. 

Oculus VR has raised $75m in venture funding in the past 12 months and has generated a groundswell of interest in the technology. VR’s influence and application will soon extend far beyond the video game industry. Allowing people to experience what it’s like to be somewhere else will impact many aspects of life. Sony is working with NASA to allow users to experience what it’s like to stand on Mars by using real image data gathered from the Mars Rover. VR is going to be pervasive; it could even be used to pick out a hotel room for your next trip by visiting a virtual version of that room.

More information:

18 March 2014

Soft Robotic Fish Moves Realistically

MIT researchers report the first self-contained autonomous soft robot capable of rapid body motion: a fish that can execute an escape maneuver, convulsing its body to change direction in just a fraction of a second, or almost as quickly as a real fish can. With soft robots, collision poses little danger to either the robot or the environment. In some cases, it is actually advantageous for these robots to bump into the environment, because they can use these points of contact as a means of getting to their destination faster.

Each side of the fish’s tail is bored through with a long, tightly undulating channel. Carbon dioxide released from a canister in the fish’s abdomen causes the channel to inflate, bending the tail in the opposite direction. Each half of the fish tail has just two control parameters: the diameter of the nozzle that releases gas into the channel and the amount of time it is left open. The fish can perform 20 or 30 escape maneuvers, depending on their velocity and angle, before it exhausts its carbon dioxide canister.
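The two-parameter control scheme and the finite gas budget can be illustrated with a toy model. The canister capacity and the flow constant below are invented for the sketch (the article gives no numbers), but the structure matches what is described: each actuation consumes gas in proportion to nozzle area and open time, and the canister bounds how many maneuvers are possible:

```python
import math

# Toy model of the fish's tail actuation. CANISTER_ML and flow_coeff are
# assumed values for illustration; they are not from the MIT work.

CANISTER_ML = 800.0  # assumed usable CO2 volume at working pressure

def gas_used(nozzle_diameter_mm, open_time_s, flow_coeff=120.0):
    """Gas released by one actuation, modeled as proportional to nozzle
    cross-section area times open time. flow_coeff (mL per mm^2 per s)
    is an assumed lumped constant."""
    area = math.pi * (nozzle_diameter_mm / 2) ** 2
    return flow_coeff * area * open_time_s

def maneuvers_available(nozzle_diameter_mm, open_time_s):
    """How many identical escape maneuvers one canister supports."""
    per_escape = gas_used(nozzle_diameter_mm, open_time_s)
    return int(CANISTER_ML // per_escape)
```

With these made-up constants, a 2 mm nozzle opened for 0.1 s yields on the order of 20 maneuvers per canister, and a wider nozzle (a sharper, faster escape) drains the canister in fewer maneuvers, mirroring the trade-off the researchers describe.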

More information:

16 March 2014

Seeing Brain Neurons

A study conducted by local high school students and faculty from the Department of Computer and Information Science in the School of Science at Indiana University-Purdue University Indianapolis reveals new information about the motor circuits of the brain that may one day help those developing therapies to treat conditions such as stroke, schizophrenia, spinal cord injury or Alzheimer's disease. MRI and CAT scans of the human brain can tell us many things about the structure of this most complicated of organs, formed of tens of billions of neurons and the trillions of synapses via which they communicate. But we are a long way away from having imaging techniques that can show single neurons in a complex brain like the human brain.

Using computer vision and image processing, researchers were able to visualize and process actual neurons of model organisms. Their work in the brain of a model organism will help researchers move forward to more complex organisms, with the ultimate goal of reconstructing the human central nervous system to gain insight into what goes wrong at the cellular level when devastating disorders of the brain and spinal cord occur. This understanding may ultimately inform the treatment of these conditions. In this study, which processed images and reconstructed neuronal motor circuitry in the brain, researchers collected and analyzed data on minute structures over various developmental stages, an effort linking neuroscience and computer science.

More information:

04 March 2014

Creating Animated Characters Outdoors

So far, film studios have had to put in huge amounts of effort to set monsters, superheroes, fairies or other virtual characters into real feature film scenes. In the so-called motion capture process, real actors wear skintight suits with markers on them. These suits reflect infrared light that is emitted and captured by special cameras. The movements of the actors are then converted by software into animated characters.

Researchers from the Max Planck Institute for Informatics in Saarbruecken developed a method that works without markers. It transfers actors' movements to the virtual characters in near real time. They dealt with the task of transferring the movements of two actors into two animated characters at the same time. Moreover, the technique makes it possible to imitate entire tracking shots. The movements of one character can thus be more easily captured from every angle.

More information:

03 March 2014

Solving Serious Games

Here’s an imaginary scenario: you’re a law enforcement officer confronted with a 21-year-old male suspect who is accused of breaking into a private house on Sunday evening and stealing a laptop, jewelry, and some cash. Your job is to find out whether the suspect has an alibi and, if so, whether it is coherent and believable. That’s exactly the kind of scenario that police officers the world over face on a regular basis. But how do you train for such a situation? How do you learn the skills necessary to gather the right kind of information?

An increasingly common way of doing this is with serious games, those designed primarily for purposes other than entertainment. In the last 10 years or so, medical, military, and commercial organizations all over the world have begun to experiment with game-based scenarios that are designed to teach people how to perform their jobs and tasks in realistic situations. But there is a problem with serious games that require realistic interaction with another person. It’s relatively straightforward to design one or two scenarios that are coherent, lifelike, and believable, but it’s much harder to generate them continually.

Imagine in the example above that the suspect is a computer-generated character. What kind of activities could he describe that would serve as a believable, coherent alibi for Sunday evening? And how could he do it a thousand times, each time describing a different realistic alibi? Therein lies the problem. Researchers at Bar-Ilan University in Israel claim they’ve solved it. They have come up with a novel way of generating ordinary, realistic scenarios that can be cut and pasted into a serious game to serve exactly this purpose. The secret sauce in their new approach is to crowd-source the new scenarios from real people using Amazon’s Mechanical Turk service. The approach is straightforward. The researchers ask workers what they did during each one-hour period throughout various days, offering bonuses to those who provide the most varied detail. They then analyze the answers, categorizing activities by factors such as the times they are performed, the age and sex of the person doing them, the number of people involved, and so on. This then allows a computer game to cut and paste activities into the action at appropriate times.
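The cut-and-paste step described above amounts to filtering a categorized activity pool by the character's attributes and the hour in question, then sampling from what fits. The activity records and category fields below are invented examples, not the Bar-Ilan dataset:

```python
import random

# Hypothetical sketch of assembling an alibi from crowd-sourced activity
# reports. The activities and their category fields are invented here.

# One-hour activity reports, already categorized by plausible hours of
# the day, age range of the person, and number of people involved.
ACTIVITIES = [
    {"text": "watched a football match at a friend's flat",
     "hours": range(18, 23), "age": (18, 35), "people": 3},
    {"text": "did the weekly grocery shopping",
     "hours": range(9, 20), "age": (18, 70), "people": 1},
    {"text": "attended an evening pottery class",
     "hours": range(19, 21), "age": (25, 70), "people": 8},
]

def plausible(activity, hour, age):
    """Does this crowd-sourced activity fit the character and the hour?"""
    lo, hi = activity["age"]
    return hour in activity["hours"] and lo <= age <= hi

def pick_alibi(hour, age, rng=random):
    """Cut-and-paste step: sample a fitting activity for the character's
    age and the hour being asked about; None if nothing fits."""
    options = [a["text"] for a in ACTIVITIES if plausible(a, hour, age)]
    return rng.choice(options) if options else None
```

For the 21-year-old suspect asked about 7 pm, the sampler would choose between the football match and the grocery run; at 3 am no crowd-sourced activity fits and the pool would need to be extended. Randomizing over a large crowd-sourced pool is what lets the game produce a different realistic alibi on every playthrough.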

More information: