30 January 2010

Future of Virtual Worlds

Since the release of his massive hit ‘Avatar’, director James Cameron has gotten plenty of deserved attention for his filmmaking innovations, having invented a camera system that captured live footage of his actors and integrated it immediately into fleshed-out scenes from his fictional world of Pandora. But movies may not be the only medium Cameron's innovation is pushing toward the future. In fact, the technology he and his visual-effects partners built for the record-breaking film may also provide our first real glimpse of the future of 3D virtual worlds. Today's virtual worlds have attracted millions of users, significant venture capital and sometimes impressive revenues. But some experts think it's a no-brainer that augmented-reality tools like those Cameron used to turn ‘Avatar’ into history's highest-grossing film could soon be the core of what millions of people experience in 3D virtual worlds that, until now, we've only been able to dream about. Today, the term ‘virtual world’ means many things to many people. To many, it means 2D online social games like Gaia Online or Club Penguin. To some, it means large-scale massively multiplayer online games like World of Warcraft. And to others, it's open-ended 3D experiences like Second Life.

After Second Life took the world by storm in 2005 and 2006, introducing many to a 3D environment in which they could create nearly anything they wanted, there hasn't been a major next step forward. One could argue that virtual worlds have even taken a technological step backward, as most of the energy in the space these days is being put into building 2D Flash worlds for kids, or Facebook games played by the masses. It's big business, but hardly cutting edge. What Cameron created for ‘Avatar’ is the first peek at a tremendous leap of innovation, one in which huge audiences could be using virtual worlds that feature vast arrays of content and data coming in from sources like Disney and Google, or from individual users themselves, and from devices like Cameron's ‘Avatar’ camera, iPhones or car windshields with augmented-reality overlays. Consider an iPhone app that you point at the sky and that shows you all kinds of information about the stars and constellations you're looking at: that is the beginning of ubiquitous augmented reality and virtual worlds. For others, however, the demands might be more immersive: a virtual palace, a climbing gym or even a virtual representation of someone's farm, complete with avatars to welcome visitors.
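
To make the star-gazing example concrete, here is a minimal, hypothetical Python sketch of what such an app has to do behind the overlay: convert each catalog star's celestial coordinates into the altitude and azimuth the phone's orientation sensors report, and label whatever falls inside the field of view. The tiny catalog, the function names and the 20-degree field of view are invented for illustration, and the local sidereal time is simply taken as an input.

```python
import math

def star_alt_az(ra_deg, dec_deg, lat_deg, lst_deg):
    """Convert a star's equatorial coordinates to horizontal ones.

    Standard spherical-astronomy relation:
      sin(alt) = sin(dec)sin(lat) + cos(dec)cos(lat)cos(HA),
    where HA (hour angle) = local sidereal time - right ascension.
    """
    ha = math.radians(lst_deg - ra_deg)
    dec, lat = math.radians(dec_deg), math.radians(lat_deg)
    sin_alt = math.sin(dec) * math.sin(lat) + math.cos(dec) * math.cos(lat) * math.cos(ha)
    alt = math.asin(sin_alt)
    cos_az = (math.sin(dec) - math.sin(alt) * math.sin(lat)) / (math.cos(alt) * math.cos(lat))
    az = math.acos(max(-1.0, min(1.0, cos_az)))
    if math.sin(ha) > 0:                 # mirror to cover the full 0-360 degree range
        az = 2 * math.pi - az
    return math.degrees(alt), math.degrees(az)

# Toy two-star "catalog" (invented for this sketch): name, RA, Dec in degrees.
CATALOG = [("Sirius", 101.3, -16.7), ("Vega", 279.2, 38.8)]

def visible_labels(phone_az, phone_alt, lat_deg, lst_deg, fov=20.0):
    """Return names of catalog stars within `fov` degrees of the view axis."""
    labels = []
    for name, ra, dec in CATALOG:
        alt, az = star_alt_az(ra, dec, lat_deg, lst_deg)
        if abs(alt - phone_alt) < fov / 2 and abs((az - phone_az + 180) % 360 - 180) < fov / 2:
            labels.append(name)
    return labels
```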

More information:

http://news.cnet.com/8301-13772_3-10443265-52.html

26 January 2010

Interactive Worlds Journal

I have founded a new open-access journal, called the International Journal of Interactive Worlds (IJIW). The aim of IJIW is to disseminate research conducted in the area of Interactive Worlds and its related domains. Ranging from mobile devices that augment the real world to the virtual environments of simulations and computer games, Interactive Worlds have established themselves as elements of everyday life. Due to the multidisciplinary nature of this area, the journal welcomes submissions on a wide range of topics. The emphasis will be on the publication of high-quality articles, from descriptions of specific algorithms to full system implementations, made rapidly and freely available to researchers, practitioners and decision makers worldwide.

IJIW will also organise regular special issues, as well as accept state-of-the-art reports and communications. It will also accept best papers from peer-reviewed conferences. The mission of the journal is to promote the synthesis of knowledge generated in domains related to Interactive Worlds and the publication of peer-reviewed technical content that covers current research and development in this new field of study. Manuscripts will be evaluated for originality, significance, clarity, and contribution. Submitted manuscripts must not have been previously published or be currently under consideration elsewhere, and will undergo a blind peer-review process.

More information:

http://www.ibimapublishing.com/journals/IJIW/ijiw.html

22 January 2010

Playing Games for Real Recovery

Remotely monitored in-home virtual-reality videogames improved hand function and forearm bone health in teens with hemiplegic cerebral palsy, helping them perform activities of daily living such as eating, dressing, cooking, and other tasks for which two hands are needed. While these initial encouraging results were in teens with limited hand and arm function due to perinatal brain injury, the researchers, from the Indiana University School of Medicine, suspect that using these games could similarly benefit individuals with other illnesses that affect movement, such as multiple sclerosis, stroke, arthritis, and even orthopedic injuries affecting the arm or hand. The first author of the pilot study, a pediatric neurologist at Riley Hospital for Children, reported on the rehabilitative benefits of these custom videogames. The project was done in collaboration with the Rutgers University Tele-Rehabilitation Institute. The researchers also reported that improved hand function appears to be reflected in brain-activity changes as seen on functional magnetic resonance imaging (fMRI) scans.

The three study participants were asked to exercise the affected hand about 30 minutes a day, five days a week, using a specially fitted sensor glove linked to a remotely monitored videogame console installed in their home. The games, such as one that makes images appear on screen, were custom-developed at Rutgers, calibrated to the individual teen's hand functionality, included an on-screen avatar of the hand, and focused on improving whole-hand function. In the future, physical therapists could remotely monitor patients' progress and adjust the intensity of game play to allow progressive work on affected muscles. In addition to meeting an unfulfilled need, this could potentially save healthcare dollars and time. Long-term physical rehabilitation is costly. And even when cost is not an issue, taking an adolescent out of school and transporting him or her to the hospital or rehab center puts stress on both the patient and the parents. These specially developed games motivated rehabilitation exercises in the home at times convenient for the teens, broadening access to rehabilitation.
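
As an illustration of the calibration idea described above, here is a brief, hypothetical Python sketch: record the patient's own range of motion from the glove, then map raw readings into that personal range so the game stays challenging but achievable at any level of function. The sensor-reading function and raw value range are assumptions; the actual Rutgers glove interface is not described in the article.

```python
def calibrate(read_flexion, samples=200):
    """Record this patient's own range of motion to set per-user bounds.

    read_flexion: a stand-in callable returning one raw glove reading.
    """
    readings = [read_flexion() for _ in range(samples)]
    return min(readings), max(readings)

def to_game_input(raw, lo, hi):
    """Map a raw flexion reading into 0.0-1.0 of *this* patient's range,
    so the same game adapts to very different levels of hand function."""
    if hi == lo:                      # degenerate calibration: no movement seen
        return 0.0
    return max(0.0, min(1.0, (raw - lo) / (hi - lo)))
```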

More information:

http://www.sciencedaily.com/releases/2010/01/100112135042.htm

20 January 2010

Human Behaviour Computer Vision

A consortium of European researchers, coordinated by the Computer Vision Centre (CVC) of the Universitat Autònoma de Barcelona (UAB), has developed HERMES, a cognitive computational system consisting of video cameras and software able to recognise and predict human behaviour, as well as describe it in natural language. The applications of the HERMES project are numerous, in fields such as intelligent surveillance, accident prevention, marketing and psychology. HERMES (Human Expressive Representations of Motion and their Evaluation in Sequences) analyses human behaviour based on video sequences captured at three different focus levels: the individual as a relatively distant object; the individual's body at medium range, so as to be able to analyse body postures; and the individual's face, which allows a detailed study of facial expressions. The information obtained is processed by computer vision and artificial intelligence algorithms, allowing the system to learn and recognise movement patterns.

HERMES offers two important innovations in the field of computer vision. The first is the description, in natural language, of the movement captured by the cameras, through simple and precise phrases which appear on the computer screen in real time, together with the number of the frame in which the action is taking place. The system uses an avatar to speak this information aloud in different languages. The second innovation is the ability to analyse and detect potentially unusual behaviour - based on the movements it recognises - and issue warning signals. For example, HERMES can send a signal to the control centre of an underground station after capturing the image of someone trying to cross the tracks, or alert a medical centre if an elderly person living alone falls. The practical advantages of HERMES are clear, mainly in the fields of intelligent surveillance and the prevention of accidents or crimes.
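
The two outputs described above -- a simple natural-language phrase tied to a frame number, and an alert when someone enters a dangerous area -- can be sketched in a few lines of Python. This is a hypothetical toy, not the HERMES implementation: the pixel zone, the track format and the message wording are invented.

```python
# Restricted region in image coordinates (e.g. the track bed of a station camera).
FORBIDDEN_ZONE = (400, 300, 640, 480)   # x_min, y_min, x_max, y_max in pixels

def in_zone(x, y, zone):
    x0, y0, x1, y1 = zone
    return x0 <= x <= x1 and y0 <= y <= y1

def describe(track_id, frame, x, y):
    """Emit a simple, precise phrase tagged with the frame number."""
    return f"[frame {frame}] person {track_id} is at position ({x}, {y})."

def monitor(tracks):
    """tracks: iterable of (track_id, frame, x, y) detections from a tracker."""
    for track_id, frame, x, y in tracks:
        print(describe(track_id, frame, x, y))
        if in_zone(x, y, FORBIDDEN_ZONE):
            print(f"[frame {frame}] WARNING: person {track_id} entered the track area!")

monitor([(7, 1021, 120, 200), (7, 1045, 430, 350)])
```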

More information:

http://www.uab.es/servlet/Satellite/latest-news/news-detail/new-computer-vision-system-for-the-analysis-of-human-behaviour-1096476786473.html?noticiaid=1263281686625

16 January 2010

Reading Your Mind to Tag Images

The most valuable machine you own may be between your ears. Work done at Microsoft Research is using electroencephalograph (EEG) measurements to ‘read’ minds in order to help tag images. When someone looks at an image, different areas of their brain show different levels of activity. This activity can be measured, and scientists can reasonably determine what the person is looking at. It takes only about half a second to read the brain activity associated with each image, making the EEG process much faster than traditional manual tagging. The ‘mind-reading’ technique may be the first step towards a hybrid system of computer and human analysis for images and many other forms of data. Whenever an image is entered into a database, it is typically labelled manually by humans. This work is tedious and repetitive, so companies have had to come up with inventive ways to get it done on the cheap. Amazon’s Mechanical Turk offers very small payments to those who tag images online. Google Image Labeler has turned the process into a game by pairing taggers with partners who must agree on labels. Because EEG image tagging requires no conscious effort, workers may be able to perform other tasks during the process. Eventually EEG readings, or the fMRI techniques that some hope to adapt for security checks, could be used to harness the brain as a valuable analytical tool. Human and computer visual processing have separate strengths. While computers can recognize shapes and movements very well, they have a harder time categorizing objects in human terms.
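
The core classification step can be sketched as follows, under stated assumptions: EEG sampled at 256 Hz is cut into half-second epochs (one per image, matching the timing reported above), flattened into feature vectors and fed to a linear classifier. The sampling rate, channel count, synthetic data and the choice of linear discriminant analysis are all illustrative stand-ins, not Microsoft's actual pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 256                    # sampling rate in Hz (assumed)
EPOCH = FS // 2             # half a second of samples per image, as in the article
N_CHANNELS = 32             # electrode count (assumed)

def epoch_features(eeg, onsets):
    """Cut a (channels x samples) recording into flattened half-second
    epochs, one per image onset."""
    return np.array([eeg[:, t:t + EPOCH].ravel() for t in onsets])

# Synthetic stand-in data: 100 image presentations over a fake one-minute recording.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((N_CHANNELS, FS * 60))
onsets = np.arange(100) * EPOCH
labels = rng.integers(0, 3, size=100)    # 0=face, 1=animal, 2=inanimate

X = epoch_features(eeg, onsets)
clf = LinearDiscriminantAnalysis().fit(X, labels)
print(clf.predict(X[:5]))                # tags predicted for the first five images
```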

Brains and computers working in conjunction could one day provide rapid identification and decision making, even without conscious human effort. This could have a big impact on security surveillance and robotic warfare. EEG readings are taken at the surface of the head and provide only a rough guide to which areas of the brain are active at what times. Yet this limited information is enough to distinguish between several useful scenarios. Researchers could determine whether someone was looking at a face or an inanimate object. They also saw good results when contrasting animals with faces, animals with inanimate objects, and some 3-way classifications (i.e. faces vs. animals vs. inanimate objects). Better results were seen with multiple users, and when each image was viewed multiple times. Surprisingly, no improvement was seen when the viewer was given more than half a second to look at each image, which means that images can be displayed at that speed without any loss of tagging accuracy. During the experiments, test subjects were given distracting tasks and were not told to categorize the images they saw, showing that the conscious mind does not have to be engaged (and, in fact, should not be) to provide the tagging information at that speed. In order to replace current tagging systems, researchers will have to find ways to determine when humans are making more precise comparisons. They will also have to find the numbers of viewers and viewing instances that most efficiently yield accurate tags. The future could see human brains and computers cooperating in new ways. We may not even have to be paying attention to work.
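
A minimal sketch of that aggregation idea, with an assumed data layout: pool the noisy single-presentation predictions from all viewers and all repetitions of an image, then take a majority vote per image.

```python
from collections import Counter

def aggregate_tags(predictions):
    """predictions: dict of image_id -> list of predicted labels, pooled
    across all viewers and all presentations of that image."""
    return {img: Counter(votes).most_common(1)[0][0]
            for img, votes in predictions.items()}

# Invented example: three viewers, some images shown more than once.
votes = {"img_001": ["face", "face", "animal", "face"],
         "img_002": ["inanimate", "inanimate", "face"]}
print(aggregate_tags(votes))   # {'img_001': 'face', 'img_002': 'inanimate'}
```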

More information:

http://singularityhub.com/2010/01/10/reading-your-mind-to-tag-images-and-work-with-computers/

14 January 2010

A Virtual Liver

Medical imaging of organs and tissues has contributed greatly to diagnosis and therapy planning, especially in the treatment of cancers, which are among the leading causes of death worldwide. However, the 2D scan images available until now have been difficult to interpret, and it has not been possible to consult colleagues who are not present in person. The EUREKA project Odysseus has developed software for 3D imaging of the blood vessels of a patient's liver, which has materially advanced medical understanding of how the liver is segmented. The 3D modelling has shown that up to 50% of patients have a liver structure significantly different from the classical Couinaud description.

Virtual Patient Modelling (VR-Anat, formerly known as 3D-VPM) uses patient-specific data to enable preoperative assessment. Diagnosis and Virtual Planning (VR Planning, formerly 3D DVP) is software that enables navigation and tool positioning within 3D images, which can be reconstructed on any multimedia-equipped computer. The Unlimited Laparoscopic Simulator (ULIS) and the robotic surgery simulator (SEP Robot) add realistic physical properties of texture and tissue resistance to the 3D model of the patient, allowing surgical intervention to be simulated before real surgery.
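
The generic step behind this kind of 3D modelling -- turning a stack of 2D scan slices into a surface mesh -- can be sketched with the marching-cubes algorithm. The sketch below uses scikit-image's general-purpose implementation on a synthetic volume; it is not the Odysseus project's software, and the intensity threshold is illustrative.

```python
import numpy as np
from skimage import measure

def vessels_to_mesh(volume, level=0.5):
    """volume: 3D array of scan intensities (e.g. contrast-enhanced CT,
    where vessels are brighter than surrounding tissue). Returns a
    triangle mesh of the iso-surface at the given intensity level."""
    verts, faces, normals, values = measure.marching_cubes(volume, level=level)
    return verts, faces

# Synthetic stand-in volume: a bright "vessel" running through a dark block.
vol = np.zeros((64, 64, 64))
vol[28:36, 28:36, :] = 1.0
verts, faces = vessels_to_mesh(vol)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```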

More information:

http://www.eurekanetwork.org/c/document_library/get_file?uuid=019bd807-5ef3-4bfd-a8bc-7b153f59d36c&groupId=10137

10 January 2010

A Virtual Physician's Conference

Telemedicine facilitates communication between family physicians, hospitals and nursing services -- yet current solutions lack flexibility and are consequently very expensive. A new software program is now available that can be tailored to a range of applications. Wounds suffered by patients with diabetes tend to heal poorly. For treatment to work, the patient's physician must discuss the situation with specialists and nursing staff to decide on the best approach. However, e-mailing the files containing the diagnosis and discussing them on the telephone is a time-consuming process. Telemedicine could facilitate communication and provide a better means of overcoming physical distance, but the solutions offered to date have failed to establish a market presence. ‘Currently available software mostly comprises one-off solutions that are difficult to adapt to alternative application scenarios’, explained researchers from the Fraunhofer Institute for Software and Systems Engineering ISST. The software therefore has to be re-programmed for each application, which is a costly, time-consuming business. In collaboration with the Protestant Hospital in the town of Witten, researchers at the ISST have now developed a software program that makes coordination both simple and cost-effective.

The software is used for a weekly ‘Wound Conference’ in Witten, in which doctors present problematic wounds that are not healing properly and discuss possible courses of treatment. Doctors can click on a link to register and download the program, which includes an easy-to-use installation wizard. Once a doctor has obtained a patient's consent, they can enter the patient's data in an on-screen form, including a description of the wound and any laboratory findings. The doctor can then upload photos of the wound, each photographed together with a barcode; the barcode automatically assigns the images to the patient's file, and the doctor can add updated photos whenever required. To check how the healing process is going, conference participants simply click to display the photos in a series. In addition, the software automatically pulls in new information on how treatment is progressing. All the data is stored centrally on one of the hospital's servers. More than 300 cases have already been documented in the virtual network, and the researchers now intend to expand the pool of basic services and assess requirements for new services.
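
The barcode workflow lends itself to a short, hypothetical sketch: decode the barcode photographed next to the wound and file the image under the matching patient record. The pyzbar/Pillow libraries, the directory layout and the assumption that the barcode encodes a patient ID are all stand-ins; the article does not describe the Fraunhofer software's actual stack.

```python
from pathlib import Path
from PIL import Image
from pyzbar.pyzbar import decode

def file_wound_photo(photo_path, archive_root="wound_archive"):
    """Decode the barcode in a wound photo and move the photo into that
    patient's folder. Assumes the barcode payload is the patient ID."""
    barcodes = decode(Image.open(photo_path))
    if not barcodes:
        raise ValueError("no barcode found - photo cannot be assigned")
    patient_id = barcodes[0].data.decode("ascii")
    dest = Path(archive_root) / patient_id
    dest.mkdir(parents=True, exist_ok=True)
    Path(photo_path).rename(dest / Path(photo_path).name)
    return patient_id
```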

More information:

http://www.sciencedaily.com/releases/2009/12/091207123751.htm

08 January 2010

The Future of BCI

In the shimmering fantasy realm of the hit movie ‘Avatar’, a paraplegic Marine leaves his wheelchair behind and finds his feet in a new virtual world thanks to ‘the link’, a sophisticated chamber that connects his brain to a surrogate alien via computer. This type of interface is a classic tool in gee-whiz science fiction. But the hard science behind it is even more wow-inducing. Researchers are already using brain-computer interfaces to aid the disabled, treat diseases like Parkinson's and Alzheimer's, and provide therapy for depression and post-traumatic stress disorder. Work is under way on devices that may eventually let you communicate with friends telepathically, give you superhuman hearing and vision, or even let you download data directly into your brain, à la ‘The Matrix’. Researchers are practically giddy over the prospects. At the root of all this technology is the 3-pound generator we all carry in our heads. It produces electrical signals at the microvolt level -- yet those signals are strong enough to move robots, wheelchairs and prosthetic limbs, with the help of an external processor. Brain-computer interfaces (BCI) come in two varieties. Noninvasive techniques use electrodes placed on the scalp to measure electrical activity. Invasive procedures implant electrodes directly into the brain. In both cases, the devices interact with a computer to produce a wide variety of applications, ranging from medical breakthroughs and military-tech advances to futuristic video games and toys.

Much of the research focuses on neuroprosthetics, which offer a way for the brain to compensate for injuries and illness. Cochlear implants are the most common neuroprosthetic. They help the brain interpret sounds and are sometimes called ‘bionic ears’ for the deaf. Other researchers are looking for similar ways to help blind people see. None of this comes cheap. Most research is funded by deep pockets such as the National Institutes of Health, the Department of Defense and NASA. But every breakthrough brings the most advanced BCI technologies closer to the mass market. Software entrepreneurs and executives are streaming into neuroscientist Ed Boyden's neuro-ventures class at MIT, looking for ways to capitalize on the array of potential uses for brain-computer interfaces. Some ventures are already up and running. NeuroVigil in California is working on iBrain, designed, in part, to give instant feedback to drivers who start falling asleep at the wheel. Eos Neuroscience is developing light-sensitive, protein-based sensors that can treat blindness. Numerous companies are developing video games based on direct brain-computer interfacing. NeuroSky sells a wireless headset that connects to any computer for a series of brain-training games. NeuroBoy lets you set targets on fire just by concentrating on them; relax, and your character levitates. Another application lets you see a colorful visualization of your brain-wave activity.
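
The concentration/relaxation mechanic in games like NeuroBoy can be illustrated with a rough Python sketch: estimate alpha (relaxation-linked) and beta (attention-linked) band power from a short EEG window and turn their balance into a game command. The band edges, sampling rate, single-channel setup and threshold rule are assumptions; NeuroSky's actual eSense algorithm is proprietary.

```python
import numpy as np

FS = 512                                    # headset sampling rate (assumed)

def band_power(signal, lo, hi):
    """Sum the FFT power of a 1-D signal between lo and hi Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return power[(freqs >= lo) & (freqs < hi)].sum()

def game_command(window):
    """window: one second of single-channel EEG samples."""
    alpha = band_power(window, 8, 13)       # rhythm linked to relaxation
    beta = band_power(window, 13, 30)       # rhythm linked to attention
    return "set_target_on_fire" if beta > alpha else "levitate"

# Demo on synthetic data: random noise stands in for a real recording.
print(game_command(np.random.default_rng(1).standard_normal(FS)))
```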

More information:

http://www.cnn.com/2009/TECH/12/30/brain.controlled.computers/index.html