29 January 2009

Safely Fixed Hip Prostheses

Artificial hip joints are firmly anchored to the patient’s damaged bone by screws. But which parts of the bone will safely hold the screws in place? A simulation model is being developed to calculate the strength of the bone from computed tomography images. Hip prostheses do not hold forever. If an implant comes loose, the doctors have to replace it. Most patients need this second operation after about 15 years. By then, the first prosthesis has often worn down the pelvic bone in several places. Moreover, bone density, and thus bone strength, changes with increasing age. Medics therefore have to work out where best to place the screws that connect the artificial joint to the bone, and what shape the hip prosthesis needs to be in order to fit the surrounding bone as well as possible. At present, doctors examine patients using computed tomography (CT) and determine the approximate density of the bones from the images. On the basis of various assumptions, they then calculate how strong the bones are in different places. The problem is that, although there are various theories on which the simulations can be based, the results often deviate significantly from reality: the consistency of the damaged bone is usually different from what the simulation suggests.

This is set to change thanks to researchers at the Fraunhofer Institute for Machine Tools and Forming Technology IWU in Dresden and their colleagues at the biomechanics laboratory of the University of Leipzig. They are developing a model with which doctors can reliably and realistically calculate the density and elasticity of bone from CT scanner images. To this end, the researchers are transferring methods usually used for component testing, which involve inducing oscillations in the part under test, to human hip bones. This type of examination cannot be carried out on the patient: the bone has to be clamped into an apparatus. The researchers compare these oscillation results with scanned images of the same bone and describe the correlations with a mathematical model. This should make it possible in future to determine the strength of a bone directly from the CT scanner images. The scientists have already performed the first examinations on prepared, and thus preserved, bones, and plan to induce oscillations in unprepared bones left in their natural state over the coming months. The researchers hope that in about two years’ time, doctors will be able to obtain a realistic simulation model of unprecedented quality from computed tomography data. The prostheses could then be anchored precisely and held safely in place for longer.
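
For readers who want a feel for the kind of calculation involved, here is a minimal sketch, in Python, of mapping CT grey values (Hounsfield units) to an apparent bone density and then to an elastic modulus. The calibration constants and the power-law form are generic assumptions borrowed from the biomechanics literature, not the IWU/Leipzig model described above.

# Hypothetical sketch: estimating bone stiffness from CT grey values.
# The calibration constants below are illustrative placeholders, not the
# IWU/Leipzig model described in the article.

def hounsfield_to_density(hu, a=0.0008, b=0.13):
    """Map a CT value in Hounsfield units to apparent density in g/cm^3
    via an assumed linear calibration rho = a*HU + b."""
    return a * hu + b

def density_to_modulus(rho, c=6.95, p=1.49):
    """Assumed power-law relation E = c * rho**p (GPa), a common form in
    the biomechanics literature; coefficient and exponent vary by study."""
    return c * rho ** p

if __name__ == "__main__":
    for hu in (200, 600, 1000):      # a typical trabecular-to-cortical range
        rho = hounsfield_to_density(hu)
        print(f"HU={hu:4d}  density={rho:.2f} g/cm^3  E~{density_to_modulus(rho):.2f} GPa")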

More information:

http://www.sciencedaily.com/releases/2009/01/090126082436.htm

24 January 2009

New Wireless Standard

Rapid transfer of a high-definition movie from a PC to a cell phone – plus a host of other media and data possibilities – is approaching reality. The Georgia Electronic Design Center (GEDC) at the Georgia Institute of Technology has produced a CMOS chip capable of transmitting 60 GHz digital RF signals. This chip design could speed up commercialization of high-speed, short-range wireless applications, thanks to the low cost and power consumption of complementary metal oxide semiconductor (CMOS) technology. Among the many potential 60 GHz applications are virtually wireless desktop-computer setups and data centers, wireless home DVD systems, in-store kiosks that transfer movies to handheld devices in seconds, and the ability to move gigabytes of photos or video from a camera to a PC almost instantly.

The GEDC-developed chip is the first 60 GHz embedded chip for multimedia multi-gigabit wireless use. The chip unites 60 GHz CMOS digital radio capability and multi-gigabit signal processing in an ultra-compact package. This new technology represents the highest level of integration for 60 GHz wireless single-chip solutions. It offers the lowest energy per bit transmitted wirelessly at multi-gigabit data rates reported to date. Industry group Ecma International recently announced a worldwide standard for the radio frequency (RF) technology that makes 60 GHz “multi-gigabit” data transfer possible. The specifications for this technology, which involves chips capable of sending RF signals in the 60 GHz range, are expected to be published as an ISO standard in 2009.
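
As a rough illustration of what “multi-gigabit” transfer means in practice, the sketch below works out transfer times and energy per bit from an assumed data rate and radio power. The file size, link rate and power figures are hypothetical, not GEDC chip specifications.

# Back-of-the-envelope link arithmetic for a multi-gigabit 60 GHz radio.
# All figures are assumed for illustration, not GEDC chip specifications.

def transfer_time_seconds(payload_bytes, rate_bits_per_s):
    """Time to move a payload at a given raw data rate (ignoring protocol overhead)."""
    return payload_bytes * 8 / rate_bits_per_s

def energy_per_bit_joules(tx_power_watts, rate_bits_per_s):
    """Energy per transmitted bit = power / data rate."""
    return tx_power_watts / rate_bits_per_s

if __name__ == "__main__":
    movie = 4 * 10**9     # assume a 4 GB high-definition movie
    rate = 5 * 10**9      # assume a 5 Gbit/s link
    power = 0.1           # assume 100 mW of radio power
    print(f"transfer time : {transfer_time_seconds(movie, rate):.1f} s")
    print(f"energy per bit: {energy_per_bit_joules(power, rate) * 1e12:.0f} pJ/bit")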

More information:

http://www.sciencedaily.com/releases/2009/01/090122161953.htm

21 January 2009

Data Analysis On Small Computers

A powerful yet compact algorithm has been developed that can be used on laptop computers to extract features and patterns from huge and complex data sets. The tool – a set of problem-solving calculations known as an algorithm – was developed by scientists at the University of California and Lawrence Livermore National Laboratory, and is compact enough to run on computers with as little as two gigabytes of memory. The team that developed the algorithm has already used it to probe a slew of phenomena represented by billions of data points, including analyzing and creating images of flame surfaces; searching for clusters and voids in a virtual universe experiment; and identifying and tracking pockets of fluid in a simulated mixing of two fluids. Computers are widely used to perform simulations of real-world phenomena and to capture the results of physical experiments and observations, storing this information as collections of numbers.

But as the size of these data sets has burgeoned, hand-in-hand with computer capacity, analysis has grown increasingly difficult. A mathematical tool to extract and visualize useful features from data sets has existed for nearly 40 years – in theory. Called the Morse-Smale complex, it partitions sets by the similarity of their features and encodes them into mathematical terms. But working with the Morse-Smale complex is not easy, and the new algorithm is what makes it practical: it divides data sets into parcels of cells, then analyzes each parcel separately using the Morse-Smale complex. The results of those computations are then merged together. As new parcels are created from merged parcels, they are analyzed and merged yet again. At each step, data that do not need to be stored in memory are discarded, drastically reducing the computing power required to run the calculations. One test of the algorithm was to analyze and track the formation and movement of pockets of fluid in the simulated mixing of two fluids: one dense, one light. This data set is so vast – it consists of more than one billion data points on a three-dimensional grid – that it challenges even supercomputers.
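
The divide-analyse-merge strategy described above can be sketched in a few lines. The code below shows only the control flow; the analyse and merge placeholders stand in for the actual Morse-Smale computations, which are far more involved.

# Generic divide-analyse-merge skeleton mirroring the strategy described
# above: split the data into parcels, analyse each one, then merge results
# pairwise, keeping only the summaries in memory.
# The analyse/merge bodies are placeholders, not the published algorithm.

def analyse(parcel):
    """Placeholder for the per-parcel feature computation (e.g. a local
    Morse-Smale segmentation). Here it just records min/max as a stand-in."""
    return {"min": min(parcel), "max": max(parcel), "count": len(parcel)}

def merge(a, b):
    """Combine two partial results; only the summary is kept in memory."""
    return {"min": min(a["min"], b["min"]),
            "max": max(a["max"], b["max"]),
            "count": a["count"] + b["count"]}

def split(data, parcel_size):
    for i in range(0, len(data), parcel_size):
        yield data[i:i + parcel_size]

def streaming_analysis(data, parcel_size=1_000_000):
    results = [analyse(p) for p in split(data, parcel_size)]
    while len(results) > 1:   # merge pairwise until one result remains
        results = [merge(results[i], results[i + 1]) if i + 1 < len(results) else results[i]
                   for i in range(0, len(results), 2)]
    return results[0]

if __name__ == "__main__":
    import random
    data = [random.random() for _ in range(10_000)]
    print(streaming_analysis(data, parcel_size=1024))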

More information:

http://www.sciencedaily.com/releases/2009/01/090108082531.htm

18 January 2009

Morphing Gel Display

A tactile display made from a watery gel that changes shape to show objects on its surface has been developed by German electrical engineers. It uses a hydrogel, the type of material used to make soft contact lenses, which consists mainly of water bound up within a polymer. Some hydrogels can swell or shrink in response to changing conditions like temperature or acidity. Researchers from the Technical University of Dresden turned to those abilities when trying to develop a new tactile display for blind people. The scientists created a square array of 4225 blobs of temperature-sensitive hydrogel, each approximately 300 microns across and separated from its neighbours by a similar amount. Just one square centimetre of the array contains 297 of the gel pixels. They sit on a black polyester backing that heats up when hit by a beam of light that is narrow enough to warm individual blobs.

Below 29 °C the pixels are 0.5 millimetres tall, but if heated to 35 °C they expel some of their water and become half as tall. They also become opaque and much harder to the touch. Rapidly scanning the light beam across the black backing makes it possible to display high-resolution tactile images that change twice a second. Once the light beam moves away from a pixel, its temperature quickly drops and the gel swells back to its previous size, sucking up its lost water. To bring the shape changes into sharper relief and also to prevent water from escaping, the gel is sealed beneath a plastic membrane. The system could be used to make tactile displays that communicate information at a person's touch. Such displays could be made for blind people, or built into the interfaces of robotic surgery equipment to let human surgeons feel what is at a robot's fingertips. Some improvements are needed before this can become reality, such as reducing the temperatures at which the gel responds, but the team's prototype can already do most of what such a display would require.
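
The pixel behaviour described above can be captured in a toy model: roughly 0.5 mm tall below 29 °C, about half that at 35 °C, with an assumed linear interpolation in between. The interpolation and the crude text rendering below are illustrative guesses, not measured data.

# Toy model of a thermo-responsive hydrogel pixel, based on the figures in
# the article: ~0.5 mm tall below 29 C, roughly half that height at 35 C.
# The linear interpolation in between is an assumption for illustration.

def pixel_height_mm(temp_c, t_low=29.0, t_high=35.0, h_tall=0.5, h_short=0.25):
    if temp_c <= t_low:
        return h_tall
    if temp_c >= t_high:
        return h_short
    frac = (temp_c - t_low) / (t_high - t_low)
    return h_tall + frac * (h_short - h_tall)

def render_row(temps):
    """Turn a row of pixel temperatures into a crude height profile string:
    '#' for a tall (cool) pixel, '.' for a short (heated) one."""
    return "".join("#" if pixel_height_mm(t) > 0.4 else "." for t in temps)

if __name__ == "__main__":
    # Heat every third pixel in a 20-pixel row, as a scanned light beam might.
    row = [35.0 if i % 3 == 0 else 25.0 for i in range(20)]
    print(render_row(row))   # heated pixels appear as '.', the rest as '#'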

More information:

http://www.newscientist.com/article/dn16417-morphing-gel-display-puts-images-at-your-fingertips.html

15 January 2009

3D Virus Visualisation

Researchers at an IBM laboratory have captured a 3D image of a biological virus using, for the first time, a technique that has some similarity to magnetic resonance imaging (MRI), a tool routinely used by physicians to peer inside the human body. Although it is akin to MRI, the new technique, magnetic resonance force microscopy (MRFM), delivers results 100 million times better in terms of resolution. The team reports in the Proceedings of the National Academy of Sciences that it has captured a 3D image of a tobacco mosaic virus with a spatial resolution down to four nanometers. Techniques such as atomic force and scanning tunneling microscopy have provided images of individual atoms (an atom is about one-tenth of a nanometer in diameter). But these techniques are more destructive of biological samples because they send a stream of electrons at the target in order to get an image. And these microscopes cannot peer beneath the surface of such Lilliputian structures.

Magnetic resonance force microscopy employs an ultrasmall cantilever arm as a platform for specimens, which are then moved in and out of proximity to a tiny magnet. At extremely low temperatures the researchers are able to measure the effect of a magnetic field on the protons in the hydrogen atoms found in the virus. By repeatedly flipping the magnetic field, the researchers cause a minute vibration in the cantilever arm, which can then be measured with a laser beam. By moving the virus through the magnetic field, it is possible to build up a 3D image from many 2D samples. The researchers believe the tool will be of interest to structural biologists trying to unravel the structure and interactions of proteins. It would be particularly useful for biological samples that cannot be crystallized for X-ray analysis. Although the structure of DNA molecules has already been characterized by other means, it will be possible to use the system both to look at the components that make up the basic DNA structure and to make images of interactions among biomolecules.
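
The final step mentioned above, building a 3D image from many 2D samples, can be illustrated generically: the sketch below simply stacks 2D scan arrays into a volume with NumPy. It is not IBM's reconstruction procedure, which involves extracting the proton signal from the measured cantilever motion.

# Minimal illustration of assembling a 3D volume from a stack of 2D scans,
# as in the last step described above. Purely generic NumPy; the real MRFM
# reconstruction works from the measured cantilever signal.
import numpy as np

def assemble_volume(slices):
    """Stack equally sized 2D scan arrays (z-slices) into a 3D volume."""
    return np.stack(slices, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Pretend each scan is a 64x64 grid of proton-signal strengths.
    scans = [rng.random((64, 64)) for _ in range(32)]
    volume = assemble_volume(scans)
    print(volume.shape)               # (32, 64, 64)
    print(volume.mean(axis=0).shape)  # a 2D projection through the volume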

More information:

http://www.nytimes.com/2009/01/13/science/13mri.html?_r=1

12 January 2009

Education Workshop

On Wednesday 14 January 2009, the Serious Games Institute (SGI) is organising another workshop in the area of education and serious games. Games and virtual technologies are increasingly being used in education, with success.

These techniques of exploratory learning have the potential to change fundamentally how we learn, what we learn and where we learn. The workshop will explore these paradigm shifts with reference to leading-edge research and development projects in the field.

More information:

http://www.seriousgames.org.uk/events.aspx?item=555

03 January 2009

Making Accurate Digital Maps

European researchers have designed an innovative new system to help keep motorists on the right track by constantly updating their digital maps and fixing anomalies and errors. Now the partners are mapping the best route to market. The ‘oddly enough’ sections of newspapers regularly feature amusing stories of GPS mayhem. For instance, one lorry driver in Poland had such confidence in his positioning device that he ignored several signs warning that a road had been closed to make way for an artificial reservoir and drove straight into the lake! In addition to providing a cautionary tale about investing too much faith in technology, the anecdote highlights a more mundane, everyday challenge: how to reflect the constantly shifting topography of Europe’s road network. Many of the digital maps used by onboard GPS navigation systems are stored on DVDs or hard disks, with periodic updates only available on replacement disks. In addition, advanced driver assistance systems (ADAS) – such as adaptive cruise control (ACC) and lane-keeping systems (LKS) – are beginning to make more extensive use of digital maps. Given the safety dimension of ADAS applications, it is crucial that digital maps are highly accurate.

Some interactive solutions have made it to market. One example is the EU-backed ActMAP project, which developed mechanisms for online, incremental updates of digital map databases using wireless technology. The system helps to shorten the time span between updates significantly. Nevertheless, there is still room for improvement in automatically detecting map errors and real-world changes, and in monitoring highly dynamic events such as local warnings. Addressing these ever-changing realities requires a radical rethink of the applied methodology. The assumption behind ActMAP and similar systems is that the supplier is responsible for all updates. However, this approach overlooks a valuable source of information: the motorists who use the navigation systems themselves. If anomalies found by road users could be automatically sent to the supplier, they would provide a valuable supplementary source of information for ironing out irregularities in the maps and beaming corrections back to the users. This bottom-up approach is the basic premise of FeedMAP, which has been designed to work in a loop with ActMAP. When the reality on the ground does not correspond with the digital map in the system, these so-called map deviations are automatically compiled into a map deviation report, which is picked up by roadside sensors and relayed back to the supplier.
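
A sketch of what such a bottom-up loop might exchange is shown below: an onboard unit flags a deviation when the GPS fix strays too far from the matched map position and packages it as a report for the supplier. The field names, the distance threshold and the JSON format are assumptions for illustration, not the FeedMAP specification.

# Hypothetical sketch of a 'map deviation report' as described above:
# when the observed position disagrees with the onboard map, the
# navigation unit packages the discrepancy for the map supplier.
# Field names and the distance threshold are assumptions, not the
# FeedMAP specification.
from dataclasses import dataclass, asdict
import json, math

@dataclass
class DeviationReport:
    vehicle_id: str
    lat: float
    lon: float
    expected_road_id: str   # what the stored map says should be here
    observed: str           # e.g. "road closed", "new roundabout"
    timestamp: str

def distance_m(lat1, lon1, lat2, lon2):
    """Rough equirectangular distance in metres, adequate for short gaps."""
    r = 6_371_000
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return math.hypot(x, y) * r

def detect_deviation(gps_fix, map_match, threshold_m=25.0):
    """Flag a deviation when the GPS fix is farther from the matched map
    position than the threshold allows."""
    return distance_m(*gps_fix, *map_match) > threshold_m

if __name__ == "__main__":
    if detect_deviation((52.5201, 13.4050), (52.5204, 13.4061)):
        report = DeviationReport("veh-042", 52.5201, 13.4050,
                                 "A100-segment-17", "vehicle off mapped road",
                                 "2009-01-03T10:15:00Z")
        print(json.dumps(asdict(report), indent=2))  # relayed back to the supplier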

More information:

http://cordis.europa.eu/ictresults/index.cfm/section/news/tpl/article/BrowsingType/Features/ID/90334

01 January 2009

VR Research Experimentation

Scientists and other researchers have found an appealing environment in virtual worlds such as "World of Warcraft" and Second Life. The draw of these virtual worlds for scientists is that they can conduct experiments whose results are similar to those in the real world, without the expense of travel and physical construction. Universities and government agencies are conducting all kinds of research, both scientific and sociological, in virtual worlds. While the game world is alluring in itself, the research conducted in virtual worlds is the real end-goal of the play. Scenarios presented in virtual worlds mimic real-life situations, giving researchers the opportunity to gain insight into real-world responses and human behaviour. Experiments, including modern adaptations of the ethically controversial 1960s Milgram obedience experiment, have confirmed previous results. The test asks subjects to administer shocks of increasing voltage to an individual who incorrectly recalls a series of word pairs. With each incorrect recollection, the voltage is increased. In the 1960s tests, the person being ‘shocked’ was in no pain and only acted out the suffering; the person administering the shocks, however, did not know that. Results in the virtual world were just as startling as in the real world.

The study's conclusion: test subjects made no distinction between real and virtual torture victims. There is nothing virtual about real dollar costs in either world, but it turns out that going virtual means going cheap. As an example of the savings: virtual worlds cut out all need for real-world travel and related expenses, and no one gets sued if the test subject dies. Virtual worlds also cut the cost of constructing custom environments to a few hours of a coder's time, and if an existing game world will suffice, there may not even be an admission fee. Certainly there are no costly regulations to meet. The creators of Second Life have enabled residents to do virtually anything, which has ultimately led to a low-cost way to experiment in a virtual setting. As the company puts it, while it does work with educators and researchers to help them get the most out of their Second Life presence, they do not need its permission to conduct research as long as they do so in a way that respects the community standards. The experiments are easy to set up in Second Life, speaking in terms of code, of course. Second Life provides an open platform for creativity and experimentation, which makes it very popular with academics, who use it to research everything from urban planning to computer science to psychology. It all raises the question: when a game comes to life, is life still a game? Who knows these days, but the stakes are certainly real.

More information:

http://www.linuxinsider.com/story/Virtual-World-Research-Part-1-A-Place-to-Experiment-65656.html