30 August 2009

Gaming Takes On Augmented Reality

Augmented reality - the ability to overlay digital information on the real world - is increasingly finding its way into different aspects of our lives. Mobile phone applications are already in use to find the nearest restaurants, shops and underground stations, and the technology is now starting to enter the world of gaming. Developers are exploring its potential for a new genre of entertainment, and for applications that seamlessly integrate our real and virtual identities.

The gaming world has been toying with the idea of using augmented reality to enhance the user's experience. Virus Killer 360, by UK firm Acrossair, uses a mobile phone's GPS and compass to turn live images of the real world into the game board. This immersive 360-degree game surrounds the user with viruses as the handset is moved around - the aim is to kill the spores before they multiply. The possibilities of augmented reality are explored afresh in The Eye of Judgment, a PlayStation 3 game that uses a camera and a real set of cards to bring monsters to "life" to do battle. Gamers can also move a handheld device over a 2D map to recreate it as a 3D gaming environment. As with Nintendo's Wii, the player's physical actions affect the game, except that this approach offers 360-degree freedom of movement. Handsets and handheld consoles are not yet powerful enough to do this, but graphics specialists and researchers believe the technology is only one to two years away.
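The core trick behind compass-driven AR games like Virus Killer 360 can be sketched in a few lines: the phone's compass heading determines which slice of the 360-degree game board is visible, and each virtual object's bearing is mapped to a horizontal screen position. A minimal Python sketch; the function and its parameters are illustrative, not Acrossair's actual code:

```python
def bearing_to_screen_x(device_heading, target_bearing, fov=60.0, screen_width=320):
    """Map a target's compass bearing to a horizontal screen position.

    Angles are in degrees, measured clockwise from north. Returns None
    when the target lies outside the camera's field of view.
    """
    # Signed angular offset of the target from the direction the phone
    # faces, normalised into the range (-180, 180].
    offset = (target_bearing - device_heading + 180.0) % 360.0 - 180.0
    if abs(offset) > fov / 2.0:
        return None  # target is off-screen; turn the handset to find it
    # Linear mapping: -fov/2 -> left edge (0), +fov/2 -> right edge.
    return (offset + fov / 2.0) / fov * screen_width
```

With the phone facing north, a virus at bearing 15° lands three-quarters of the way across a 320-pixel screen, while one directly behind the player is off-screen until the handset is turned.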

We could soon be using augmented reality to tell us more about the people around us. Users in a Swedish trial set up profiles on their mobile phones, deciding which elements of their online selves they want to share, including photos, interests and Facebook updates. For instance, someone giving a presentation at a meeting could choose to share their name and slides; others in the room could then point their phones at the presenter to download the information directly to their handsets. But for this mash-up of social networking and augmented reality to work, face-recognition software will have to improve. Development could be speeded up if the application were limited to the contacts in a user's mobile phone, since matching a face against a few hundred known people is far easier than matching it against everyone. The ultimate goal of augmented reality is for information simply to appear as people go about their daily tasks. For instance, a camera worn around the neck could read a book title, fetch reviews from Amazon and project the results back onto the book. Researchers at the Massachusetts Institute of Technology are exploring the possibilities of object recognition, and believe it may one day be possible to take a photo simply by making a rectangle shape with one's fingers.
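To see why restricting recognition to a user's contacts helps, think of face matching as nearest-neighbour search over face descriptors: with only a handful of candidates, even a simple similarity threshold can work. This is purely illustrative Python; the trial's actual software is not described in the article:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def identify(face_vec, contacts, threshold=0.8):
    """Return the best-matching contact name, or None if nothing is close.

    `contacts` maps each name to a stored face descriptor; limiting this
    gallery to the phone's contact list keeps the search small and the
    false-match rate low.
    """
    best_name, best_score = None, threshold
    for name, vec in contacts.items():
        score = cosine_similarity(face_vec, vec)
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

Real systems replace the toy vectors with descriptors produced by a face-recognition model, but the gallery-size argument is the same.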

More information:

http://news.bbc.co.uk/1/hi/programmes/click_online/8226777.stm

26 August 2009

HCI 2009 Article

Last month, a co-authored paper titled ‘Assessing the Usability of a Brain-Computer Interface (BCI) that Detects Attention Levels in an Assessment Exercise’ was presented by a colleague of mine at the 13th International Conference on Human-Computer Interaction in San Diego, California, USA. The paper presented the results of a usability evaluation of NeuroSky’s MindBuilder-EM (MB). Until recently, most brain-computer interfaces (BCIs) have been designed for clinical and research purposes, partly due to their size and complexity. However, a new generation of consumer-oriented BCIs has appeared, aimed at the video game industry. The MB, a headset with a single electrode, is based on electroencephalogram (EEG) readings, capturing the faint electrical signals generated by neural activity.

The electrical signals at the electrode are measured to determine levels of attention and then translated into binary data. The paper presented the results of an evaluation assessing the usability of the MB, using a model of attention to fuse attention signals with user-generated data in a Second Life assessment exercise. The results suggest that the MB provides accurate attention readings, since there is a positive correlation between measured and self-reported attention levels. They also reveal some usability and technical problems with its operation. Future research includes the definition of a standardized reading methodology and an algorithm to level out the natural fluctuation of users’ attention levels when these are used as inputs.
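The two analyses mentioned above, correlating measured against self-reported attention and levelling out natural fluctuation, can be sketched simply. A moving average is one plausible smoothing choice; the paper's actual algorithm is left as future work, so this is only an illustration:

```python
def smooth_attention(readings, window=5):
    """Trailing moving average to level out fluctuation in attention readings."""
    smoothed = []
    for i in range(len(readings)):
        lo = max(0, i - window + 1)
        chunk = readings[lo:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

def pearson(xs, ys):
    """Pearson correlation, e.g. between measured and self-reported attention."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

A correlation near +1 between the two series is what the paper's "accurate readings" claim amounts to statistically.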

A draft version of the paper can be downloaded from here.

24 August 2009

Modified 3D HDTV LCD Screens

For the first time, a team of researchers at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego, has designed a 9-panel, 3D visualization display from HDTV LCD flat-screens developed by JVC. The technology, dubbed "NexCAVE," was inspired by Calit2's StarCAVE virtual reality environment and was designed and developed by Calit2 researchers. Although the StarCAVE's unique pentagon shape and 360-degree views make it possible for groups of scientists to venture into worlds as small as nanoparticles and as big as the cosmos, its expensive projection system requires constant maintenance — an obstacle Calit2 researchers DeFanti and Dawe were determined to overcome. They developed the NexCAVE at the behest of Saudi Arabia's King Abdullah University of Science and Technology (KAUST), which established a special partnership with UC San Diego last year to collaborate on world-class visualization and virtual-reality research and training activities. The KAUST campus includes a Geometric Modeling and Scientific Visualization Research Center featuring a 21-panel NexCAVE and several other new visualization displays developed at Calit2; classes at the brand-new, state-of-the-art, 36-million-square-meter campus start Sept. 5. When paired with polarized stereoscopic glasses, the NexCAVE's modular, micropolarized panels and related software will make it possible for a broad range of UCSD and KAUST scientists — from geologists and oceanographers to archaeologists and astronomers — to visualize massive datasets in three dimensions, at unprecedented speeds and at a level of detail impossible to obtain on a conventional desktop display.

The NexCAVE delivers a faithful, deep 3D experience with strong color saturation, good contrast and very good stereo separation. The JVC panels' xpol technology circularly polarizes successive lines of the screen clockwise and anticlockwise, and the polarized glasses route the clockwise and anticlockwise images to different eyes, so the data appears in three dimensions. Since these HDTVs are very bright, 3D data in motion can be viewed even with the room lights on. The NexCAVE's data resolution is also superb, close to human visual acuity (20/20 vision). The 9-panel, 3-column prototype developed for Calit2's VirtuLab has a resolution of 6,000x1,500 pixels, while the 21-panel, 7-column version being built for KAUST boasts 15,000x1,500 pixels. The NexCAVE is also considerably cheaper than the StarCAVE: the 9-panel version cost under $100,000 to construct, whereas the StarCAVE is valued at $1 million. One-third of that cost comes from the StarCAVE's projectors, which burn through $15,000 in bulbs per year; every time a projector needs to be relamped, the research team must readjust the color balance and alignment, a long, involved process. Since the NexCAVE requires no projectors, those costs and alignment issues are eliminated. The NexCAVE's tracker (the device used to manipulate data) is also far less expensive — only $5,000 compared to the StarCAVE's $75,000 tracker, although its range is more limited. The NexCAVE's specially designed COVISE software, developed at Germany's University of Stuttgart, combines the latest developments in real-time graphics and PC hardware to let users transcend the capabilities of the machine itself, and the display will be connected via 10-gigabit/second networks, allowing researchers at KAUST to collaborate remotely with UCSD colleagues.
The NexCAVE is driven by gaming PCs with high-end Nvidia graphics hardware.
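The xpol scheme described above can be illustrated with a toy sketch: successive rows of the displayed frame alternate between the left-eye and the right-eye image, and the oppositely polarized lenses of the glasses pass only their own rows to each eye. A minimal illustration, not JVC's implementation:

```python
def interleave_stereo(left, right):
    """Combine left- and right-eye images row by row, as on a
    line-interleaved micropolarized display: even rows carry one eye's
    view, odd rows the other's.

    `left` and `right` are lists of rows (of equal length); each row can
    be any representation of a scanline.
    """
    assert len(left) == len(right), "both eyes must have the same row count"
    return [left[r] if r % 2 == 0 else right[r] for r in range(len(left))]
```

One consequence visible in the sketch is that each eye sees only half the vertical lines, which is why panel resolution matters so much for these displays.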

More information:

http://www.calit2.net/

http://www.kaust.edu.sa/

http://ucsdnews.ucsd.edu/newsrel/general/08-09NexCave.asp

20 August 2009

Serious Virtual Worlds '09 Conference

The Serious Virtual Worlds 2009 (SVW09) conference, titled ‘Real Value for Public and Private Organisations’, will be hosted by the globally renowned Serious Games Institute at Coventry University and jointly run with Ambient Performance. It is also supported by the Digital and Creative Technologies Network, and aims to show delegates a whole new world of business.

Large and small organisations are using virtual worlds to meet, work and simulate working situations. SVW09 focuses on business applications of Virtual Worlds and will highlight a variety of case studies demonstrating the economic and ecological benefits of using these virtual spaces. Case studies will feature organisations already working in Virtual Worlds, including BP, Afiniti, the Highways Agency, StormFjord and Schools for the Future.

More information:

http://www.seriousvirtualworlds.net/

18 August 2009

Unraveling Ancient Documents

Computer science and humanities departments have joined forces at Ben-Gurion University (BGU) in Beersheba to decipher historical Hebrew documents, many of which have been overwritten with Arabic stories. A unique algorithm developed by BGU computer scientists is used to determine the wording: the documents are searched electronically, letter by letter, for similarities in handwriting that help determine the date and author of the texts. The documents being deciphered at BGU are degraded texts from sources such as the Cairo Geniza, the Al-Aksa manuscript library in Jerusalem, and the Al-Azar manuscript library in Cairo. Altogether, the corpus consists of 100,000 medieval Hebrew codices and fragments that represent the book production output of only the last six centuries of the Middle Ages. The purpose of the project is to classify the handwritten documents and determine their authorship. One complication is that many of the original Hebrew texts found in the Cairo Geniza have been scratched off, and the parchment reused to write Arabic texts.

Although the texts are in Hebrew, deciphering what is written is difficult because the historical documents have degraded over time. The foreground lettering and the background are now hard to separate: much of the ink is smudged, which intensifies the background coloring, and ink from the reverse side of the document adds blotches to the lettering. To solve the problem, the algorithm overlays the text in a dark grey color, then classifies lighter pixels as background space and darker pixels as the outline of the original Hebrew lettering. Two separate academic disciplines are driving this project forward. First, linguistic specialists seek a deeper appreciation of the origins of the Hebrew language. Second, scholars of Jewish philosophy are interested in studying ancient forms of prayer thought to be contained in the texts. With the new algorithm, researchers hope to create a catalogue of all the texts and piece together the ancient prayers and other documents, including those citing Jewish law.
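The separation step described above is, at its core, a thresholding operation on pixel intensity. A deliberately simplified sketch with a single global threshold; the BGU algorithm is surely more sophisticated, for instance adapting to local smudging and bleed-through:

```python
def binarize(image, threshold=128):
    """Separate dark ink (foreground) from lighter parchment (background).

    `image` is a list of rows of greyscale values (0 = black, 255 = white).
    Pixels darker than the threshold are kept as lettering (1); lighter
    pixels are treated as background (0).
    """
    return [[1 if px < threshold else 0 for px in row] for row in image]
```

On a degraded manuscript, a fixed threshold fails wherever smudges darken the background, which is exactly why adaptive methods are needed in practice.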

More information:

http://www.jpost.com/servlet/Satellite?cid=1249418581591&pagename=JPost%2FJPArticle%2FShowFull

17 August 2009

Smarter GPS Using Linux

Sick of having your GPS tell you to turn the wrong way up a one-way street or lead you to a dead end? Fear not: Linux-based technology developed at National ICT Australia (NICTA) is on its way to help make personal navigation systems more accurate. AutoMap uses machine vision techniques to detect and classify geometric shapes in video footage: signs, company logos and similar fixtures that change frequently in a neighborhood and make it difficult for digital map makers to keep their products up to date. Currently, to keep on top of this, mapping companies need someone to physically drive up and down each street in a van with five or six cameras fixed in all directions, with a driver and co-driver making annotations. The footage is then taken back to the office, where an army of annotators reviews it frame by frame and records where all the signs are. AutoMap provides an intelligent solution that detects signs in video footage automatically, without that manual effort. The system uses some of the technology developed as part of an earlier smart-cars project.
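Once a candidate shape has been extracted from a video frame, classifying it can be as simple as counting its vertices, since road signs use a small vocabulary of geometric shapes. The following toy classifier illustrates only that final step; NICTA's actual detection pipeline is not public, and the names here are illustrative:

```python
def classify_shape(vertices):
    """Crude sign-shape classifier from a detected polygon's vertex count.

    A real pipeline would first find polygons in each video frame (edge
    detection, contour approximation); here we only map vertex counts to
    common road-sign shapes.
    """
    names = {
        3: "triangle (e.g. give-way sign)",
        4: "quadrilateral (e.g. speed or street sign)",
        8: "octagon (e.g. stop sign)",
    }
    return names.get(len(vertices), "unknown")
```

Combined with color cues (red border, yellow field, and so on), even a classifier this simple narrows the candidates considerably before any text recognition runs.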

Although the product is now ready for commercial deployment and discussions are underway with the major mapping companies, research on the project will continue. The team is looking at placing the technology inside a small camera and fitting it to taxis, fleet vehicles and garbage trucks going about their business. These vehicles traverse the whole road network on a regular basis, and will be able to detect points of interest automatically and send the information back to base, where a complete and constantly updating map emerges over time. The research team will also develop methods to recognise three-dimensional objects such as park benches and speed cameras. The research and technology is almost entirely Linux-based; the team also uses an Intel-based UMPC (an ASUS R50A). NICTA predicts that the digital mapping market will expand significantly as companies like Google, Microsoft and Yahoo continue to develop and release location-based services. Whilst these companies currently purchase some mapping information from digital map producers, it is expected they will quickly shift to developing and maintaining their own databases.

More information:

http://www.computerworld.com.au/index.php?q=article/313968/new_linux-based_technology_make_smarter_gps&fp=&fpid=

11 August 2009

Games Solve Complex Problems

A new computer game prototype combines work and play to help solve a fundamental problem underlying many computer hardware design tasks. The online logic puzzle, called FunSAT, could help integrated circuit designers select and arrange transistors and their connections on silicon microchips, among other applications. Designing chip architecture for the best performance and smallest size is an exceedingly difficult task that is outsourced to computers these days, but computers simply flip through possible arrangements in their search; they lack the human capacities for intuition and visual pattern recognition that could yield a better, or even optimal, design. That's where FunSAT comes in. Developed by University of Michigan computer science researchers, FunSAT is designed to harness humans' abilities to strategize, visualize and understand complex systems. A single-player prototype implemented in Java already exists, and the researchers are working on extending it into a multi-player game, which would allow more complicated problems to be solved. By solving challenging problems on the FunSAT board, players can contribute to the design of complex computer systems, but you don't have to be a computer scientist to play. The game is a sort of puzzle that might appeal to Sudoku fans. The board consists of rows and columns of green, red and gray bubbles of various sizes.

Around the perimeter are buttons that players can turn yellow or blue with the click of a mouse. The buttons' colors determine the colors of the bubbles on the board, and the goal of the game is to use the perimeter buttons to turn all the bubbles green. Right-clicking on a bubble reveals which buttons control its color, giving the player a hint about what to do next; the larger a bubble is, the more buttons control it. The game can be challenging because each button affects many bubbles at the same time and in different ways: a button that turns several bubbles green may also turn others from green to red or gray. Underneath, the game encodes so-called satisfiability problems — classic and highly complicated mathematical questions that involve selecting the best arrangement of options. In such problems, the solver must assign each of a set of variables to true or false so as to satisfy all the constraints of the problem. In the game, the bubbles represent the constraints; they become green when they are satisfied. The perimeter buttons represent the variables, which are assigned to true or false when players click them yellow (true) or blue (false). Once the puzzle is solved and all the bubbles are green, a computer scientist can simply read off the color of each button to recover the solution to that particular problem. Satisfiability problems arise not only in complex chip design but in many other areas, such as packing a backpack with as many items as possible or finding the shortest postal route to deliver mail in a neighborhood.
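The mapping from game to mathematics is direct: buttons are Boolean variables, bubbles are clauses, and a bubble is green when at least one of its literals is satisfied. A brute-force checker for tiny instances makes the encoding concrete (illustrative only; real SAT solvers, and FunSAT itself, are far more sophisticated):

```python
from itertools import product

def solve_sat(num_vars, clauses):
    """Brute-force a small satisfiability instance.

    Each clause is a list of literals: literal k means "variable k is
    true", -k means "variable k is false" (variables numbered from 1).
    Returns a satisfying assignment as {variable: bool}, or None. In
    FunSAT terms, variables are the perimeter buttons (yellow = True,
    blue = False) and clauses are the bubbles to turn green.
    """
    for bits in product([True, False], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        # A clause (bubble) is satisfied (green) when any literal holds.
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None
```

Brute force doubles in cost with every extra button, which is precisely why the researchers hope human intuition can beat blind enumeration on large boards.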

More information:

http://funsat.eecs.umich.edu/

http://www.ns.umich.edu/htdocs/releases/story.php?id=7252

08 August 2009

Virtual Worlds Scientific Collaboration

Normally, virtual worlds are the setting for online games and entertainment applications, but now they’re becoming a place for scientific collaboration and outreach as well. A team of scientists from the California Institute of Technology, Princeton, Drexel University and the Massachusetts Institute of Technology has formed the first professional scientific organization based entirely in virtual worlds. Called the Meta Institute for Computational Astrophysics (MICA), the organization conducts professional seminars and popular lectures, among other events, for its growing membership. As MICA’s founders explain in a recently published paper, MICA is currently based in Second Life, where participants use avatars to explore and interact with their surroundings, and will expand to other virtual worlds when appropriate. As of this past March, MICA had about 40 professional members and 100 members of the general public interested in learning about science, specifically astronomy. MICA is also establishing collaborative partnerships with the IT industry, including Microsoft and IBM, and plans to develop further industrial partnerships. In addition to bringing people together in a free and convenient way, virtual worlds offer new possibilities for scientific visualization or ‘visual analytics’. As data sets become larger and more complex, visualization can help researchers better understand different phenomena.

Virtual worlds not only offer visualization but also enable researchers to become immersed in data and simulations, which may help scientists think differently about data and patterns. Multi-dimensional data visualization can provide further advantages for certain types of data: the researchers found that they can encode data in spaces with up to 12 dimensions, although the challenge is getting the human mind to easily grasp the encoded content. MICA members from around the world can participate in informal discussions in virtual worlds. In the future, virtual reality may become more tightly integrated with the Web, serving as an interface and replacing today’s browsers. One building block of this possible next generation of virtual reality is an open-source program called “OpenSimulator” (or “OpenSim”), which enables users to create their own 3D virtual worlds and applications. The authors predict that the synthesis of the Web and virtual reality could see individuals managing their own virtual reality environments in a way that is analogous to hosting and managing websites today. The researchers also plan to conduct a series of international summer schools on topics including numerical stellar dynamics and computational science, in an immersive and interactive virtual-world venue.
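One way to picture encoding data in up to 12 dimensions is to give each data dimension its own visual channel on a rendered point. The channel list below is a hypothetical example in the spirit of the MICA work, not their actual scheme:

```python
def encode_point(record):
    """Map up to 12 data dimensions of one record onto visual channels.

    A hypothetical encoding: three dimensions become 3D position, three
    become RGB colour, and the rest become size, shape, orientation,
    transparency and two animation channels. `record` is a list of up to
    12 numeric values, each assumed normalised to [0, 1]; unused channels
    are simply omitted.
    """
    channels = ["x", "y", "z", "red", "green", "blue", "size", "shape",
                "orientation", "transparency", "blink_rate", "motion_speed"]
    return dict(zip(channels, record))
```

The limiting factor, as the researchers note, is not the renderer but the viewer: beyond position and colour, each extra channel becomes progressively harder for the human mind to read.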

More information:

http://pda.physorg.com/_news168608901.html

http://www.mica-vw.org/