28 September 2010

Simulations of Real Earthquakes

A Princeton University-led research team has developed the capability to produce realistic movies of earthquakes based on complex computer simulations that can be made available worldwide within hours of a disastrous upheaval. The videos show waves of ground motion spreading out from an epicenter. In making them widely available, the team of computational seismologists and computer scientists aims to aid researchers working to improve understanding of earthquakes and develop better maps of the Earth's interior. When an earthquake takes place, data from seismograms measuring ground motion are collected by a worldwide network of more than 1,800 seismographic stations operated by members of the international Federation of Digital Seismograph Networks. The earthquake's location, depth and intensity also are determined. The ShakeMovie system at Princeton will now collect these recordings automatically using the Internet. The scientists will input the recorded data into a computer model that creates a virtual earthquake. The videos will incorporate both real data and computer simulations known as synthetic seismograms. These simulations fill the gaps between the actual ground motion recorded at specific locations in the region, providing a more complete view of the earthquake. The animations rely on software that produces numerical simulations of seismic wave propagation in sedimentary basins.
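
As a concrete, hedged illustration of this kind of automated data access, the sketch below uses the open-source ObsPy library to request a window of recorded ground motion from the IRIS data center. The network, station and time values are arbitrary examples, and ObsPy is simply one convenient tool; the article does not say what software the Princeton collection system uses.

```python
# Illustrative sketch only (not the ShakeMovie pipeline): fetching recorded ground
# motion from the IRIS data center with ObsPy. Station, channel and time window are
# arbitrary example values.
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("IRIS")                          # FDSN web-service client
t0 = UTCDateTime("2010-09-03T16:35:00")          # example origin time (Darfield, NZ earthquake)
stream = client.get_waveforms(network="IU", station="ANMO", location="00",
                              channel="BHZ", starttime=t0, endtime=t0 + 3600)
stream.detrend("demean")                         # basic pre-processing
print(stream)                                    # one vertical-component broadband trace (raw counts)
```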

The software computes the motion of the Earth in 3D based on the actual earthquake recordings, as well as what is known about the subsurface structure of the region. The shape of underground geological structures in the area not recorded on seismograms is key, said Jeroen Tromp, the Princeton professor who leads the project, because those structures can greatly affect wave motion by bending, speeding, slowing or simply reflecting energy. The simulations are created on a parallel-processing computer cluster built and maintained by the Princeton Institute for Computational Science and Engineering (PICSciE) and on a cluster located at the San Diego Supercomputer Center. After the three-dimensional simulations are computed, the software plugs in data capturing surface motion, including displacement, velocity and acceleration, and maps them onto the topography of the region around the earthquake. The movies are then automatically published via the ShakeMovie portal, and an e-mail is sent to subscribers, including researchers, news media and the public. The simulations will also be made available to scientists through the data management center of the Incorporated Research Institutions for Seismology (IRIS) in Seattle, which distributes global scientific data to the seismological community via the Internet. Scientists can visit the IRIS website and download the information; thanks to the research team's work, they will now be able to compare recorded seismograms directly with their synthetic counterparts.
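
The Princeton simulations themselves run on large clusters with detailed 3D Earth models, but the basic idea of numerically propagating waves outward from a source can be shown with a toy example. The sketch below is a minimal 2D acoustic finite-difference solver with invented grid, velocity and source parameters; it illustrates the general technique, not the team's software.

```python
# Minimal 2-D acoustic wave-propagation sketch: a point source produces expanding
# wavefronts, the kind of motion rendered in the ShakeMovie animations.
# All grid, velocity and source values are arbitrary illustrative choices.
import numpy as np

nx, nz = 200, 200                    # grid points
dx, dt = 100.0, 0.01                 # grid spacing (m) and time step (s)
c = np.full((nx, nz), 3000.0)        # assumed uniform wave speed (m/s); real models are 3-D and heterogeneous
u_prev = np.zeros((nx, nz))          # wavefield at t - dt
u_curr = np.zeros((nx, nz))          # wavefield at t
src_x, src_z = nx // 2, nz // 2      # "epicentre" in the middle of the grid

frames = []
for it in range(500):
    # second-order finite-difference Laplacian (np.roll gives periodic boundaries, fine for a sketch)
    lap = (np.roll(u_curr, 1, 0) + np.roll(u_curr, -1, 0) +
           np.roll(u_curr, 1, 1) + np.roll(u_curr, -1, 1) - 4.0 * u_curr) / dx**2
    u_next = 2.0 * u_curr - u_prev + (c * dt)**2 * lap
    # inject a simple Ricker-like source wavelet (5 Hz, delayed by 0.2 s) at the epicentre
    t = it * dt
    arg = (np.pi * 5.0 * (t - 0.2))**2
    u_next[src_x, src_z] += (1.0 - 2.0 * arg) * np.exp(-arg)
    u_prev, u_curr = u_curr, u_next
    if it % 25 == 0:
        frames.append(u_curr.copy())  # snapshots that could be rendered into a movie
```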

More information:

http://www.sciencedaily.com/releases/2010/09/100922171608.htm

24 September 2010

Virtual Mediterranean Islands

Three-dimensional versions of Mediterranean islands will be updated almost automatically with current information from a range of public and private databases. The European research project behind them may spark a revolution in the tourism sector. MedIsolae-3D is a project that combined software designed for aircraft landing simulations with orthophotography and satellite images of the islands, as well as public data such as digital terrain models, maps and tourist services, to create a portal to the 3D island experience. It capitalised on the LANDING project, which was also funded by the Aviation Sector of the EC/RTD programme. The plan is to link the virtual-visiting tool to web geoplatforms such as Google Earth, MS Virtual Earth or ESRI ArcGlobe to make it available to people across the globe. The EU-funded MedIsolae-3D project planned to deliver the service to more than 100 European Mediterranean islands belonging to Greece, Cyprus, France, Italy, Malta and Spain, offering platforms for island visualisation.

One of the biggest challenges for the MedIsolae-3D team was to take data from local governments and other providers in a range of formats and data standards, and to build from these sources a system capable of interoperating to deliver a single virtual-visiting service. The project builds on the recent development of Inspire, a standardised Spatial Data Infrastructure (SDI) for Europe. Inspire, backed by an EU Directive, creates a standard that allows the integration of spatial information services across the Union; once data are standardised, users can access local- and global-level services in an interoperable way. The combined datasets must appear seamless to users as they move from satellite-generated images above the islands down onto the islands' roads and streets. Once the MedIsolae-3D framework is in place, it can work in combination with a range of spatial data services to aid tourism, transportation and other money-earners for the island economies, but it can also provide services for health and disaster planning, the environment, and policy-making.
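
As a small, hedged illustration of what linking to web geoplatforms can mean in practice, the Python sketch below writes a KML GroundOverlay that drapes a georeferenced island image over Google Earth's terrain. The image URL and bounding-box coordinates are invented placeholders, and this is only one simple integration route, not a description of the MedIsolae-3D implementation.

```python
# Illustrative sketch only: exposing an island orthophoto to Google Earth by
# writing a minimal KML GroundOverlay. File name and coordinates are invented.
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def ground_overlay_kml(name, image_href, north, south, east, west):
    """Build a minimal KML document that drapes a georeferenced image over terrain."""
    ET.register_namespace("", KML_NS)
    kml = ET.Element(f"{{{KML_NS}}}kml")
    overlay = ET.SubElement(kml, f"{{{KML_NS}}}GroundOverlay")
    ET.SubElement(overlay, f"{{{KML_NS}}}name").text = name
    icon = ET.SubElement(overlay, f"{{{KML_NS}}}Icon")
    ET.SubElement(icon, f"{{{KML_NS}}}href").text = image_href
    box = ET.SubElement(overlay, f"{{{KML_NS}}}LatLonBox")
    for tag, value in (("north", north), ("south", south), ("east", east), ("west", west)):
        ET.SubElement(box, f"{{{KML_NS}}}" + tag).text = str(value)
    return ET.tostring(kml, encoding="unicode")

# Hypothetical tile: an orthophoto of a small Aegean island.
print(ground_overlay_kml("Island orthophoto (example)",
                         "http://example.org/tiles/island_ortho.png",
                         37.46, 37.40, 25.35, 25.28))
```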

More information:

http://cordis.europa.eu/ictresults/index.cfm?section=news&tpl=article&BrowsingType=Features&ID=91441

23 September 2010

VS-GAMES '11 Conference

The 3rd International Conference in Games and Virtual Worlds for Serious Applications 2011 (VS-GAMES 2011) will be held from 4 to 6 May 2011 at the National Technical University of Athens (NTUA) in Athens, Greece. The emergence of serious, or non-leisure, uses of games technologies and virtual-world applications has been swift and dramatic over the last few years. VS-GAMES '11 aims to meet the significant challenges of the cross-disciplinary community that works around these serious application areas by bringing the community together to share case studies of practice, to present virtual-world infrastructure developments as well as new frameworks, methodologies and theories, and to begin the process of developing shared cross-disciplinary outputs.

The organisers are seeking contributions that advance the state of the art in the technologies available to support the sustainability of serious games. Topics in the areas of environment, military, cultural heritage, health, smart buildings, v-commerce and education are particularly encouraged. Invited speakers include Prof. Carol O'Sullivan, head of the Graphics, Vision and Visualisation Group (GV2) at Trinity College Dublin, and Prof. Peter Comninos, director of the National Centre for Computer Animation (NCCA) at Bournemouth University and MD of CGAL Software Limited. The best technical full papers will be published in a special issue of the International Journal of Interactive Worlds (IJIW), and the best educational papers will be submitted to the IEEE Transactions on Learning Technologies. The paper submission deadline is 1 November 2010.

More information:

http://www.vs-games.org/

22 September 2010

Virtual Human Unconsciousness

Virtual characters can now behave according to actions carried out unconsciously by humans. Researchers at the University of Barcelona have created a system that measures human physiological parameters, such as respiration or heart rate, and feeds them into computer-designed characters in real time. The system uses sensors and wireless devices to measure three physiological parameters in real time: heart rate, respiration and the galvanic (electric) skin response. The data is immediately processed by a software program that controls the behaviour of a virtual character sitting in a waiting room.

The heart rate is reflected in the movement of the character's feet; respiration in the rising of the chest (a movement exaggerated so that it can be noticed); and the galvanic skin response in how reddish the face appears. The researchers conducted an experiment to see whether people whose physiological parameters were recorded, without knowing in advance which character was driven by their own signals, showed any preference for that virtual actor. The result was negative, probably because other factors, such as the character's appearance or position in the scene, also influence the choice. The team is now studying how to solve this problem.
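
A minimal sketch of this kind of signal-to-animation mapping is given below, assuming a placeholder sensor-reading function and invented value ranges; it illustrates the idea described above rather than the Barcelona team's actual software.

```python
# Hedged sketch: mapping live physiological readings to a virtual character's
# animation parameters. The sensor interface and the normalisation ranges are
# assumed placeholders, not the real system's values.
import time

def read_sensors():
    """Placeholder for the wireless sensor interface; returns
    (heart_rate_bpm, breath_phase 0..1, skin_conductance_microsiemens)."""
    return 72.0, 0.4, 5.2            # fixed dummy values for the sketch

def update_character(avatar, hr, breath, gsr):
    # Heart rate drives the foot-tapping speed (taps per second).
    avatar["foot_tap_hz"] = hr / 60.0
    # Respiration drives chest rise; exaggerate so the motion is visible.
    avatar["chest_rise"] = min(1.0, breath * 1.5)
    # Galvanic skin response drives facial redness, normalised to an assumed 2-20 µS range.
    avatar["face_redness"] = max(0.0, min(1.0, (gsr - 2.0) / 18.0))

avatar = {}
for _ in range(3):                   # the real system would run this loop continuously
    hr, breath, gsr = read_sensors()
    update_character(avatar, hr, breath, gsr)
    print(avatar)
    time.sleep(0.1)
```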

More information:

http://www.sciencedaily.com/releases/2010/09/100902073637.htm

18 September 2010

The Brain Speaks

In an early step toward letting severely paralyzed people speak with their thoughts, University of Utah researchers translated brain signals into words using two grids of 16 microelectrodes implanted beneath the skull but atop the brain. Because the method needs much more improvement and involves placing electrodes on the brain, it will be a few years before clinical trials can begin on paralyzed people who cannot speak due to so-called ‘locked-in syndrome’. The University of Utah research team placed grids of tiny microelectrodes over speech centers in the brain of a volunteer with severe epileptic seizures. The man already had a craniotomy (temporary partial skull removal) so doctors could place larger, conventional electrodes to locate the source of his seizures and surgically stop them. Using the experimental microelectrodes, the scientists recorded brain signals as the patient repeatedly read each of 10 words that might be useful to a paralyzed person: yes, no, hot, cold, hungry, thirsty, hello, goodbye, more and less. Later, they tried to figure out which brain signals represented each of the 10 words. When they compared any two brain signals, such as those generated when the man said the words ‘yes’ and ‘no’, they were able to distinguish the signals for each word 76 percent to 90 percent of the time.

When they examined all 10 brain-signal patterns at once, they were able to pick out the correct word for any one signal only 28 percent to 48 percent of the time - better than chance (which would have been 10 percent) but not good enough for a device to translate a paralyzed person's thoughts into words spoken by a computer. People who could eventually benefit from a wireless device that converts thoughts into computer-spoken words include those paralyzed by stroke, Lou Gehrig's disease and trauma. The study used a new kind of nonpenetrating microelectrode that sits on the brain without poking into it. These electrodes are known as microECoGs because they are a small version of the much larger electrodes used for electrocorticography, or ECoG, developed a half-century ago. For patients with severe epileptic seizures uncontrolled by medication, surgeons remove part of the skull and place a silicone mat containing ECoG electrodes over the brain for days to weeks, while the cranium is held in place but not reattached. The button-sized ECoG electrodes don't penetrate the brain but detect abnormal electrical activity, allowing surgeons to locate and remove the small portion of the brain causing the seizures.
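
To make the chance levels concrete, the sketch below frames the task with generic machine-learning tools on synthetic feature vectors: ten classes give a 10 percent baseline for the full problem and 50 percent for any pairwise comparison. The data, features and classifier are invented for illustration and are not the Utah team's analysis.

```python
# Illustrative sketch only: 10-way vs pairwise word classification on synthetic
# "brain-signal" features. Everything here is made up for the example.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
words = ["yes", "no", "hot", "cold", "hungry", "thirsty", "hello", "goodbye", "more", "less"]
n_trials, n_features = 30, 64                    # assumed repetitions per word and feature count

# Fake features: each word gets its own mean pattern plus noise.
X = np.vstack([rng.normal(loc=rng.normal(0, 1, n_features), scale=2.0,
                          size=(n_trials, n_features)) for _ in words])
y = np.repeat(np.arange(len(words)), n_trials)

clf = LinearDiscriminantAnalysis()
# 10-way classification (chance level = 10 %).
print("10-way accuracy:", cross_val_score(clf, X, y, cv=5).mean())
# Pairwise classification, e.g. "yes" vs "no" (chance level = 50 %).
mask = np.isin(y, [0, 1])
print("yes-vs-no accuracy:", cross_val_score(clf, X[mask], y[mask], cv=5).mean())
```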

More information:

http://www.unews.utah.edu/p/?r=062110-3

14 September 2010

Electric Skin Rivals the Real Thing

The tactile sensitivity of human skin is hard to re-create, especially over large, flexible surfaces. But two California research groups have made pressure-sensing devices that significantly advance the state of the art. One, made by researchers at Stanford University, is based on organic electronics and is 1,000 times more sensitive than human skin. The second, made by researchers at the University of California, Berkeley, uses integrated arrays of nanowire transistors and requires very little power. Both devices are flexible and can be printed over large areas.

Highly sensitive surfaces could help robots pick up delicate objects without breaking them, give prosthetics a sense of touch, and give surgeons finer control over tools used for minimally invasive surgery. The researchers' goal is to mimic human skin, which responds quickly to pressure and can detect objects as small as a grain of sand or as light as an insect. The organic-electronics approach can be used to make flexible materials with inexpensive printing techniques, but the resulting device requires high voltages to operate.

More information:

http://www.technologyreview.com/computing/26256/?a=f

13 September 2010

3D Movies via Internet & Satellite

Multiview Video Coding (MVC) is the new standard for 3D movie compression. MVC reduces the amount of data significantly while providing full high-resolution quality. Blockbusters like Avatar, Up or Toy Story 3 will bring 3D into home living rooms, onto televisions and computers. 3D displays are already available, and the new Blu-ray players can already play 3D movies encoded with MVC. The first soccer games were recorded stereoscopically at the football World Cup in South Africa. What is missing is an efficient form of transmission. The problem is the data rate required by the movies, in spite of fast Internet and satellite links. 3D movies have higher data-rate requirements than 2D movies since at least two images are needed for the spatial representation: a 3D screen has to show two images, one for the left eye and one for the right.

Researchers at the Fraunhofer Institute for Telecommunications, Heinrich-Hertz-Institut (HHI), in Berlin, Germany, have already come up with a compression technique that squeezes movies, particularly those in HD quality, while maintaining the quality: the H.264/AVC video format. What H.264/AVC is for HD movies, Multiview Video Coding (MVC) is for 3D movies. The benefit is a reduced data rate on the transmission channel at the same high-definition quality. Videos on the Internet have to load quickly so that viewers can watch movies without interruptions. MVC packs the two images needed for the stereoscopic 3D effect together, exploiting their similarity so that the bit rate of the movies is significantly reduced: the encoded 3D movies are up to 40 percent smaller. Users will be able to experience 3D movies in their living rooms in the near future.
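
The saving comes from the fact that the left-eye and right-eye views are nearly identical, so the second view can largely be predicted from the first. The toy sketch below illustrates that idea on synthetic frames, with zlib standing in for a real video codec; it is a conceptual illustration of inter-view prediction, not the MVC algorithm itself.

```python
# Toy illustration of why inter-view prediction saves bits: code the left view,
# then code the right view only as a residual against a disparity-shifted copy
# of the left view. The synthetic "frames" and zlib are stand-ins.
import numpy as np
import zlib

# Smooth synthetic "left eye" frame (a gradient plus some texture).
x = np.linspace(0, 4 * np.pi, 640)
z = np.linspace(0, 4 * np.pi, 480)
left = (127 + 60 * np.sin(x)[None, :] + 40 * np.cos(z)[:, None]).astype(np.uint8)

# "Right eye" frame: the same scene from a slightly shifted viewpoint,
# approximated as a 4-pixel horizontal shift plus mild noise.
rng = np.random.default_rng(1)
right = np.roll(left, 4, axis=1).astype(int) + rng.integers(-2, 3, left.shape)
right = np.clip(right, 0, 255).astype(np.uint8)

independent = len(zlib.compress(left.tobytes())) + len(zlib.compress(right.tobytes()))
prediction = np.roll(left, 4, axis=1).astype(int)             # disparity-compensated prediction
residual = (right.astype(int) - prediction).astype(np.int8)   # small values, compress well
joint = len(zlib.compress(left.tobytes())) + len(zlib.compress(residual.tobytes()))
print(f"independent coding: {independent} bytes, with inter-view prediction: {joint} bytes")
```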

More information:

http://www.fraunhofer.de/en/press/research-news/2010/08/3d-movies-via-internet-und-satellit.jsp

06 September 2010

EmotionML

For all those who believe the computing industry is populated by people who are out of touch with the world of emotion, it's time to think again. The World Wide Web Consortium (W3C), which standardizes many Web technologies, is working on formalizing emotional states in a way that computers can handle. The name of the specification, which in July reached second-draft status, is Emotion Markup Language. EmotionML combines the rigor of computer programming with the squishiness of human emotion. But the Multimodal Interaction Working Group that's overseeing creation of the technology really does want to marry the two worlds. Some of the work is designed to provide a more sophisticated alternative to smiley faces and other emoticons for people communicating with other people.

It's also geared to improve communications between people and computers. The idea is called affective computing in academic circles, and if it catches on, computer interactions could be very different. Avatar faces could show their human master's expression during computer chats. Games could adjust play intensity according to the player's reactions. Customer service representatives could be alerted when customers are really angry. Computers could respond to your expressions as people do. Computer help technology like Microsoft's Clippy or a robot waiter could discern when to make themselves scarce. EmotionML embodies two very different forms of expression: the squishy nature of emotion and the rigorously precise language of a standard.
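
As a rough illustration, the sketch below builds a tiny EmotionML annotation in Python, tagging a customer-service exchange as fairly angry. The element names follow the W3C drafts (root emotionml, emotion, category), but the vocabulary URI and attribute values are assumed examples and may not match the July 2010 second draft exactly.

```python
# Hedged sketch: constructing a minimal EmotionML-style annotation.
# The category-set URI and the numeric value are illustrative assumptions.
import xml.etree.ElementTree as ET

EMO_NS = "http://www.w3.org/2009/10/emotionml"
ET.register_namespace("", EMO_NS)

root = ET.Element(f"{{{EMO_NS}}}emotionml",
                  {"category-set": "http://www.w3.org/TR/emotion-voc/xml#everyday-categories"})
emotion = ET.SubElement(root, f"{{{EMO_NS}}}emotion")
# Annotate a customer-service call turn as fairly angry (value in 0..1).
ET.SubElement(emotion, f"{{{EMO_NS}}}category", {"name": "angry", "value": "0.8"})

print(ET.tostring(root, encoding="unicode"))
```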

More information:

http://news.cnet.com/8301-30685_3-20014967-264.html