31 August 2014

Bypass Commands from Brain to Legs

Gait disturbance in individuals with spinal cord injury is attributed to the interruption of the neural pathways from the brain to the spinal locomotor center, even though the neural circuits located above and below the lesion retain most of their function. An artificial connection that bridges the lost pathway and links the brain to the spinal circuits therefore has the potential to ameliorate the functional loss. A Japanese research group at the National Institutes of Natural Sciences (NINS) has successfully created such an artificial connection from the brain to the locomotion center in the spinal cord, using a computer interface as a bypass. This allowed subjects to stimulate the spinal locomotion center with volitionally controlled muscle activity and thereby control walking. Neural networks in the spinal locomotion center can produce rhythmic movements, such as swimming and walking, even when isolated from the brain. The brain governs this center by sending commands to start, stop and change walking speed.
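The article gives no implementation details, but the role of the descending command in such a rhythm generator can be sketched with a toy half-center model. Everything here (function names, the sinusoidal rhythm, parameter values) is illustrative, not taken from the study:

```python
import math

def half_center_outputs(t, drive):
    """Toy central pattern generator: a descending 'drive' command starts,
    stops, and speeds up an alternating flexor/extensor rhythm."""
    if drive <= 0:                       # no descending command: rhythm stops
        return 0.0, 0.0
    phase = (drive * t) % (2 * math.pi)  # drive scales the cycle frequency
    flexor = max(0.0, math.sin(phase))              # active on one half-cycle
    extensor = max(0.0, math.sin(phase + math.pi))  # active on the other half
    return flexor, extensor
```

Increasing `drive` shortens the cycle, mirroring how descending commands change walking speed; setting it to zero stops the rhythm entirely.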

In most cases of spinal cord injury, the loss of this link from the brain to the locomotion center causes problems with walking. To compensate for the lost pathways, the research group proposed bridging the still-functioning brain and locomotion center with a computer, as a way to enable individuals with spinal cord injury to regain walking ability. Since arm movements are coupled with leg movements during walking, they used arm muscle activity as a surrogate for brain activity. The computer interface let subjects drive a magnetic stimulator targeting the spinal locomotion center non-invasively with volitionally controlled muscle activity, and thereby control walking. In experiments with neurologically intact subjects who were asked to keep their legs relaxed while the interface was driven by arm muscle activity, walking behavior was induced in the legs, and the subjects could also control the step cycle volitionally. Without the computer-interface bypass, however, the legs did not move even when the arm muscles were volitionally activated.
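As a rough illustration only — the group's actual interface is not described at this level of detail — a muscle-activity-triggered stimulation loop might look like the following, with the windowing, threshold, and function names all invented for the sketch:

```python
def emg_envelope(samples, window=5):
    """Rectify a raw muscle-activity trace and smooth it with a
    trailing moving average (a common, generic EMG preprocessing step)."""
    rectified = [abs(s) for s in samples]
    env = []
    for i in range(len(rectified)):
        w = rectified[max(0, i - window + 1): i + 1]
        env.append(sum(w) / len(w))
    return env

def stimulation_commands(samples, threshold=0.5):
    """Emit True (trigger the stimulator) whenever the arm-muscle
    activity envelope exceeds a threshold."""
    return [e > threshold for e in emg_envelope(samples)]
```

In this sketch, sustained volitional arm activity raises the envelope above threshold and produces stimulation commands, while brief noise spikes are smoothed away by the moving average.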

More information:

27 August 2014

Intelligent Shopping Navigation System

An indoor navigation system is being developed to help improve people’s experiences of a range of businesses, including supermarkets, hospitals and leisure parks. Mobile app developer RNF Digital Innovation will build the system around smartphones, tablets and iBeacons, following a £500,000 grant from the Technology Strategy Board, the UK’s innovation agency. A further £202,000 investment will come through RNF Digital Innovation and its collaborative project partners: the Bestway Group, plus the University of Lincoln and Aston University, which will both provide technical and research support for the project.

The aim of the competitive fund is to support projects that capitalise on the increasing accuracy, coverage and speed of global navigation satellite systems (GNSS) such as GPS, as well as non-satellite technologies including Wi-Fi and iBeacon, which enables a smartphone or other device to perform actions when in close proximity to a beacon. The technology will have applications across a range of sectors. For example, in the retail sector, indoor navigation systems would enable users to work out their quickest and most economical route through the supermarket, alerting them to offers and product updates along the way.
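Beacon proximity of this kind is typically derived from received signal strength. Below is a minimal sketch of the standard log-distance path-loss model often used for iBeacon ranging; the calibrated 1-metre power, path-loss exponent, and zone cut-offs are assumed values for illustration, not figures from the article:

```python
def estimate_distance(rssi, tx_power=-59, n=2.0):
    """Log-distance path-loss model.
    tx_power: calibrated RSSI (dBm) measured at 1 m from the beacon.
    n: path-loss exponent (~2 in free space, higher indoors)."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def proximity_zone(rssi, tx_power=-59):
    """Map an RSSI reading to a coarse proximity zone, in the style of
    immediate/near/far beacon ranging."""
    d = estimate_distance(rssi, tx_power)
    if d < 0.5:
        return "immediate"
    if d < 4.0:
        return "near"
    return "far"
```

A reading equal to the calibrated 1-metre power maps to roughly one metre ("near"); a reading 20 dB weaker maps to roughly ten metres ("far"). Real deployments smooth RSSI over many readings, since individual samples are noisy.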

More information:

19 August 2014

Realistic Computer Graphics

Creating a realistic computer simulation of how light suffuses a room is crucial, and not just for animated movies. Special computing methods are meant to ensure this, but they require great effort. Computer scientists from Saarbrücken have now developed a novel approach that turned out to be so promising that it was adopted by companies in record time, among others by Pixar, well known in the movie industry for its computer animation and now a subsidiary of the Walt Disney Company. The realistic depiction of light transport in a room is important in the production of computer-generated movies; if it fails, the 3D impression is rapidly lost. Hence, the movie industry's digital light experts use special computing methods that require enormous computational power and therefore raise production costs. Not only the film industry but also the automobile industry invests in making the lighting conditions of computer-generated images as realistic as possible. Already during the development process, entire computing centers are used to compute and display realistic pictures of complex car models in real time. Only in this way can designers and engineers evaluate the design and product features at an early stage and optimize them during the planning phase.

With previous computing methods, it was not possible to compute all illumination effects efficiently. So-called Monte Carlo path tracing depicts very well the direct incidence of light on surfaces and the indirect illumination produced by light reflecting off surfaces in a room. But it does not work well for illumination around transparent objects, such as the semi-transparent shadows cast by glass objects, or for illumination by specular surfaces. That, in turn, is the strength of photon mapping, which however yields disappointing results for the direct lighting of surfaces. Since the two approaches were mathematically incompatible, they could not be merged and had to be computed separately for each image, raising the computation costs of computer-animated movies. In 2012 the researchers developed a mathematical approach that cleverly combines the two methods: they reformulated photon mapping as a Monte Carlo process, so it could be integrated directly into Monte Carlo path tracing. For every pixel of the image, the new algorithm decides automatically, via so-called multiple importance sampling, which of the two strategies is best suited to compute the illumination at that spot.
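The balance-heuristic weighting at the heart of multiple importance sampling can be demonstrated on a toy one-dimensional integral. This is a generic sketch of the technique — two sampling strategies combined so that each sample is down-weighted where the other strategy is denser — not the researchers' renderer code:

```python
import random

def mis_integral(f, n=2000, seed=1):
    """Estimate the integral of f over [0, 1] by combining two sampling
    strategies with the balance heuristic w_i = p_i / (p_a + p_b)."""
    rng = random.Random(seed)
    p_uniform = lambda x: 1.0        # strategy A: uniform samples on [0, 1]
    p_linear = lambda x: 2.0 * x     # strategy B: pdf 2x (sample via sqrt)
    total = 0.0
    for _ in range(n):
        # one sample from each strategy per iteration
        xa = rng.random()
        wa = p_uniform(xa) / (p_uniform(xa) + p_linear(xa))
        total += wa * f(xa) / p_uniform(xa)
        xb = rng.random() ** 0.5
        wb = p_linear(xb) / (p_uniform(xb) + p_linear(xb))
        total += wb * f(xb) / p_linear(xb)
    return total / n
```

Because the two weights sum to one at every point, the combined estimator stays unbiased while the variance is dominated by whichever strategy fits the integrand better — the same principle the renderer applies per pixel when choosing between path-traced and photon-mapped contributions.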

More information:

16 August 2014

Turn Sketches into 3D

A novel graphics system that can infer complex 3D shapes from single professional sketches was unveiled by UBC computer scientists. The solution has the potential to dramatically simplify how designers and artists develop new product ideas. Converting an idea into a 3D model using current commercial tools is a complicated and painstaking process. So UBC researchers developed True2Form, a software algorithm inspired by the work of professional designers, who effectively communicate ideas through simple drawings.

In line-drawings, designers and artists use descriptive curves and informative viewpoints to help viewers infer the complete shape of an object. The system mimics the results of human 3D shape inference to turn a sketch curve network into 3D, while preserving fidelity to the original sketch. True2Form uses mathematics to interpret the strokes that artists use in these drawings, automatically lifting drawings off the page. It produces convincing, complex 3D shapes computed from individual sketches, automatically corrected to account for inherent drawing inaccuracy.
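True2Form's actual optimization is far richer than this, but the general idea of assigning depth to a sketched stroke by minimizing a smoothness energy, with a few known constraints pinning the solution, can be caricatured in a toy one-dimensional form. The function name and setup are hypothetical, not the published algorithm:

```python
def lift_depths(z0, z1, n, iters=500):
    """Assign depths to n interior samples of a stroke by relaxing a
    smoothness energy, with the depths at the two endpoints fixed —
    a toy stand-in for the regularity priors used to lift sketches to 3D."""
    z = [z0] + [0.0] * n + [z1]
    for _ in range(iters):
        for i in range(1, n + 1):
            # Gauss-Seidel step: each depth moves toward its neighbours' mean
            z[i] = 0.5 * (z[i - 1] + z[i + 1])
    return z
```

With only endpoint constraints the relaxation converges to a straight interpolation; the interesting behaviour in the real system comes from richer priors (curve regularity, viewpoint cues, inter-curve constraints) that this sketch omits.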

More information:

12 August 2014

Sound for Indoor Localization

The global positioning system, or GPS, has a well-known limitation: it does not work indoors. Potential solutions for indoor positioning continue to fire up the imaginations of scientists, and the latest involves a form of echolocation. MIT Technology Review reported on an approach to indoor localization based on sound. Researchers at the University of California, Berkeley developed a simple, cheap mechanism that can identify rooms from a relatively small dataset. Their method is based on extracting the acoustic features of rooms: the team acquires RIRs (room impulse responses) using the built-in speakers and microphones of laptops.

They also developed a noise-adaptive reverberation extraction algorithm to pull features out of the noisy RIRs. The researchers tested their system in ten rooms on the Berkeley campus, taking data with the built-in microphone and speakers of an ordinary laptop: the laptop produces a set of sound waves and then listens for the echo. They took 50 samples at each location, which included background noise such as footsteps, talking, and heating and ventilation sounds, and processed this data to find the echo fingerprint for each room. The team reported 97.8 percent accuracy in identifying individual rooms.
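A minimal sketch of the fingerprint-matching idea follows. The feature choice (band energies of a crude DFT) and the nearest-centroid classifier are assumptions for illustration; the paper's noise-adaptive reverberation features are more sophisticated:

```python
import math

def spectrum_energy(signal, bands=4):
    """Crude echo feature vector: energy in equal-width DFT bands
    (a simple stand-in for proper reverberation features)."""
    n = len(signal)
    mags = []
    for k in range(n // 2):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(re * re + im * im)
    per = len(mags) // bands
    return [sum(mags[b * per:(b + 1) * per]) for b in range(bands)]

def classify(sample, fingerprints):
    """Match a feature vector to the nearest stored room fingerprint
    by squared Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(fingerprints, key=lambda room: dist(sample, fingerprints[room]))
```

In use, each room's fingerprint would be the average feature vector of its training samples, and a fresh recording is assigned to whichever stored fingerprint it lands closest to.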

More information: