29 February 2008

OpenGL for Embedded Systems

The early mobile 3D engines were all proprietary, and their rendering pipelines were implemented in software running on general-purpose CPUs. In 2002, to provide a baseline for portable 3D hardware acceleration, the Khronos Group began to design a 3D standard for mobile devices. The idea was to take OpenGL, the most widely deployed 3D API, as a starting point to produce a leaner and cleaner version: OpenGL ES (embedded systems). Simplification involved removing rarely used features (e.g., feedback and selection rendering modes, and functionality that is mostly syntactic sugar, such as the GL utility library). It also meant removing redundancy. For example, OpenGL provides a number of different ways of defining the rendering primitives. OpenGL ES provides only one: all vertex data is provided in arrays, which results in a simpler implementation and faster execution. The set of supported geometric primitives is likewise limited to points, lines, and triangles.
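To make the single submission path concrete, here is a small Python sketch (not the actual C API) of how glDrawArrays-style primitive assembly works on a flat vertex array; the function and mode names are illustrative, not part of OpenGL ES itself.

```python
# Conceptual sketch: OpenGL ES drops immediate-mode glBegin/glEnd and
# accepts vertex data only as arrays, roughly like glVertexPointer +
# glDrawArrays. The names below are illustrative stand-ins.

def draw_arrays(mode, vertices, first, count):
    """Assemble primitives from a flat list of (x, y, z) vertices,
    mimicking glDrawArrays(mode, first, count)."""
    verts = vertices[first:first + count]
    if mode == "TRIANGLES":
        # Every consecutive group of three vertices forms one triangle.
        return [tuple(verts[i:i + 3]) for i in range(0, len(verts) - 2, 3)]
    if mode == "LINES":
        # Every consecutive pair of vertices forms one line segment.
        return [tuple(verts[i:i + 2]) for i in range(0, len(verts) - 1, 2)]
    if mode == "POINTS":
        return [(v,) for v in verts]
    raise ValueError("OpenGL ES supports only points, lines, and triangles")

# A quad has to be submitted as two triangles:
quad = [(0, 0, 0), (1, 0, 0), (1, 1, 0),
        (0, 0, 0), (1, 1, 0), (0, 1, 0)]
print(len(draw_arrays("TRIANGLES", quad, 0, 6)))  # 2 triangles
```

Because there is exactly one submission path, an implementation needs only this one primitive-assembly loop, which is part of what makes OpenGL ES drivers smaller than desktop OpenGL ones.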

OpenGL ES retains most of the transformation and lighting pipeline of OpenGL. Only the back buffer is accessible for drawing and reading, however, and only the RGBA color mode (no indexed color mode) is supported. Features easily emulated using texture mapping (bitmaps and stippling of lines and polygons) were dropped; all key 2D texture-mapping features were retained. The OpenGL ES graphics pipeline shown in the adjacent figure has only a couple of well-defined points for providing input, and it has few outputs other than the values rendered into the frame buffer. Therefore, functionality in the back end, which cannot be easily replaced by the application programmer, is much more costly to remove than functionality in the front end of the pipeline. As a consequence, OpenGL ES supports almost all of the back-end functionality of OpenGL 1.3. For example, the blending modes that dictate how new fragments should be mixed with existing pixels in the frame buffer were retained in full, as were the various tests for determining whether a pixel should be drawn.
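The retained blend stage can be sketched in a few lines. The following Python model (not GL code) implements the widely used source-alpha blend setting to show how an incoming fragment is mixed with the pixel already in the frame buffer.

```python
def blend(src, dst):
    """Mix an incoming RGBA fragment with the stored pixel, modelling the
    common glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) setting:
    out = src * src.a + dst * (1 - src.a), applied per channel."""
    a = src[3]
    return tuple(s * a + d * (1 - a) for s, d in zip(src, dst))

# A half-transparent red fragment drawn over an opaque blue pixel:
print(blend((1.0, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 1.0)))
# → (0.5, 0.0, 0.5, 0.75)
```

OpenGL ES keeps the full set of such blend-factor combinations precisely because, unlike front-end geometry processing, the application cannot emulate this read-modify-write on the frame buffer itself.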



25 February 2008

CGIM 2008 Article

Last week, a colleague of mine presented a paper I co-authored at the 10th IASTED International Conference on Computer Graphics and Imaging, held in Austria from 13 to 15 February. The paper, titled ‘Virtual City Maker And Virtual Navigator: A Modelling And Visualisation Solution For The Creation And Display Of Mobile 3D Virtual Cities’, presents a complete procedural 3D modelling and visualisation solution for mobile devices. There is a growing need for computer-based, photorealistic visualisations of 3D urban environments in many areas, including environmental planning, engineering, telecommunications, architecture, gaming, 3D city information systems and even homeland security. The procedural modelling and mobile visualisation of 3D virtual cities is therefore a topic that not only computer graphics research but also other fields, such as GIS and photogrammetry, have focused on for a number of years.

The modelling tool is based on scripting algorithms that allow both the automatic and the semi-automatic creation of photorealistic virtual urban content. The input data combine aerial images, GIS data, 2D ground maps and terrestrial photographs, brought together under a user-friendly customised interface that permits the automatic and interactive generation of large-scale, accurate, georeferenced and fully-textured 3D virtual city content. This content can be specially optimised for use on mobile devices and with navigational tasks in mind. The paper also presents Virtual Navigator, a user-centred mobile virtual reality (VR) visualisation and interaction tool that runs on PDAs and is designed specifically for pedestrian navigation. This engine supports the import and display of various 2D and 3D navigational file formats and includes a user-friendly graphical front end that provides immersive 3D navigation to a wide range of users.

A draft version of the paper is available for download.

23 February 2008

Wizkid Robot

Wizkid is part of MoMA's Design and the Elastic Mind exhibit, running from February 24 to May 12, 2008. This unusual device is the result of a collaboration between an engineer, Frédéric Kaplan, and an industrial designer, Martino d'Esposito. Kaplan, a researcher at EPFL (École Polytechnique Fédérale de Lausanne), worked for ten years at Sony, creating "brains" for entertainment robots. Wizkid looks like a computer with a neck, but there the similarities with the familiar personal computer end. Wizkid isn't static: the screen on its mobile neck moves about like a head, and it is trained to home in on human faces. Once it sees you, Wizkid focuses on you and follows your movement. Unlike a computer, which requires you to stop what you're doing and adapt your behaviour and social interactions in order to use it, Wizkid blends into human space. There's no mouse and no keyboard. You don't touch anything. There's no language getting in the way.

On Wizkid's screen you see yourself surrounded by a "halo" of interactive elements that you can simply select by waving your hands. If you move away or to one side, Wizkid adapts itself to you, not the other way around. If you're with a friend, Wizkid finds and tracks both of you and tries to figure out your relationship, expressing surprise, confusion or enjoyment when it gets your response. Wizkid's inventors see their creation as playing a new and important role in the transitional world we currently inhabit. Unlike a real kid, whose learning curve can be frustratingly hard to influence, Wizkid learns as much as you want it to about you and your world, and interacts with you at a level that you define. Want to use this device simply as a tool? Adjust a slider on its side and Wizkid will follow you without making any suggestions.



20 February 2008

Why Do Men Enjoy Video Games More?

According to a 2007 Stanford University survey, young males are two to three times more likely than females to feel addicted to video games, such as the Halo series so popular in recent years. Despite the popularity of video and computer games, little is known about the neural processes that occur as people play these games. And no research had been done on gender-specific differences in the brain's response to video games. Researchers designed a game involving a vertical line (the "wall") in the middle of a computer screen. When the game begins, 10 balls appear to the right of the wall and travel left toward the wall. Each time a ball is clicked, it disappears from the screen. If the balls are kept a certain distance from the wall, the wall moves to the right and the player gains territory, or space, on the screen. If a ball hits the wall before it's clicked, the line moves to the left and the player loses territory on the screen. During this study, 22 young adults (11 men and 11 women) played numerous 24-second intervals of the game while being hooked up to a functional magnetic resonance imaging, or fMRI, machine. fMRI is designed to produce a dynamic image showing which parts of the brain are working during a given activity. Study participants were instructed to click as many balls as possible; they weren't told that they could gain or lose territory depending on what they did with the balls.
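The game mechanics lend themselves to a tiny simulation. The following Python sketch encodes the rules as described above; the speeds, distances and territory increments are made up for illustration, since the study's actual parameters are not given here.

```python
# One tick of a simplified version of the Stanford "wall" game.
# All numeric parameters (speed, margin, gain, loss) are hypothetical.

def step(balls, wall, clicked, speed=1.0, margin=5.0, gain=1.0, loss=1.0):
    """balls: x-positions of balls to the right of the wall, moving left;
    clicked: indices of balls the player clicked this tick (they vanish).
    Returns (remaining_balls, new_wall_position)."""
    # Clicked balls disappear; the rest travel left toward the wall.
    remaining = [x - speed for i, x in enumerate(balls) if i not in clicked]
    hits = [x for x in remaining if x <= wall]       # a ball reached the wall
    remaining = [x for x in remaining if x > wall]
    if hits:
        wall -= loss * len(hits)                     # player loses territory
    elif remaining and all(x - wall >= margin for x in remaining):
        wall += gain                                 # balls kept at a distance:
    return remaining, wall                           # player gains territory

# Clicking the nearest ball keeps the rest far enough away to gain ground:
print(step([10.0, 3.0], 0.0, {1}))  # ([9.0], 1.0)
```

The sketch makes the strategic point of the study concrete: clicking the balls closest to the wall is what wins territory, which is exactly the behaviour the male participants converged on.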

All participants quickly learned the point of the game, and the male and female participants wound up clicking on the same number of balls. The men, however, wound up gaining a significantly greater amount of space than the women. That's because the men identified which balls - the ones closest to the "wall" - would help them acquire the most space if clicked. After analyzing the imaging data for the entire group, the researchers found that the participants showed activation in the brain's mesocorticolimbic center, the region typically associated with reward and addiction. Male brains, however, showed much greater activation, and the amount of activation was correlated with how much territory they gained. However, this wasn't the case with women. Three structures within the reward circuit - the nucleus accumbens, amygdala and orbitofrontal cortex - were also shown to influence each other much more in men than in women. And the better connected this circuit was, the better males performed in the game. This research also suggests that males have neural circuitry that makes them more liable than women to feel rewarded by a computer game with a territorial component and then more motivated to continue game-playing behavior. Based on this, it makes sense that males are more prone to getting hooked on video games than females.



18 February 2008

M3G version 2.0

M3G (Mobile 3D Graphics API for Java, a.k.a. JSR-184) is an easy-to-use yet powerful 3D graphics API for mobile Java. It is primarily a retained-mode API: a model of the 3D scene is maintained inside the API, and individual objects and rendering assets can be inserted and manipulated via the API functions. The low-level rendering model is based on OpenGL ES, with the higher-level functionality layered on top. Compared with 2D bitmap graphics, 3D allows games to pack a larger number of smoother and richer animations into the same amount of memory. It also improves the sense of depth and perspective and enables cinematic camera controls, as illustrated in the adjacent figure. Most of the processing time in an interactive 3D game is usually spent in the core 3D routines that execute in optimized native code or use dedicated 3D hardware. Implementing 3D rendering in Java alone would be prohibitively slow, which is why M3G was designed.

At the lowest level, M3G deals with concepts similar to those in OpenGL ES: vertex buffers, textures, light sources, materials, and transformation matrices. These are the building blocks for higher-level objects and scene graphs. Vertex and index buffers can be combined into Mesh objects; textures, materials, and other rendering parameters form Appearance objects for shading; Group nodes allow logical grouping and hierarchic transformation of scene elements. The higher-level features, enabled by the scene-graph approach, include functionality commonly required in games and other interactive 3D graphics applications. The aim in designing M3G was to build in common functionality that most applications will need in any case, without being too application-specific. This reduces application size and improves developer productivity. It also improves overall application performance, because raising the abstraction level allows M3G to batch-process entire 3D scenes in native code.
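To illustrate the scene-graph idea, here is a conceptual Python sketch (not the javax.microedition.m3g API) of how Group nodes compose transformations over child meshes. Transforms are reduced to plain translations for brevity; real M3G nodes carry full 4x4 matrices plus rotation and scale components.

```python
# Conceptual sketch of hierarchic transformation in a scene graph,
# loosely mirroring M3G's Group and Mesh nodes. Class and function
# names are illustrative, not the real API.

class Node:
    def __init__(self, translation=(0.0, 0.0, 0.0)):
        self.translation = translation

class Mesh(Node):
    def __init__(self, vertices, translation=(0.0, 0.0, 0.0)):
        super().__init__(translation)
        self.vertices = vertices

class Group(Node):
    def __init__(self, children, translation=(0.0, 0.0, 0.0)):
        super().__init__(translation)
        self.children = children

def render(node, parent=(0.0, 0.0, 0.0)):
    """Walk the graph, composing each node's transform with its parent's,
    and return every mesh's vertices in world coordinates."""
    world = tuple(p + t for p, t in zip(parent, node.translation))
    if isinstance(node, Mesh):
        return [tuple(tuple(v + w for v, w in zip(vert, world))
                      for vert in node.vertices)]
    out = []
    for child in node.children:
        out.extend(render(child, world))
    return out

wheel = Mesh([(0, 0, 0)], translation=(1, 0, 0))
car = Group([wheel], translation=(10, 0, 0))  # moving the group moves the wheel
print(render(car))  # world-space wheel vertex at (11.0, 0.0, 0.0)
```

This is the batching the article refers to: because the whole hierarchy lives inside the API, an M3G implementation can walk it and render every node in native code from a single call, instead of crossing the Java-to-native boundary once per object.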



17 February 2008

Realistic Hair In Computer Animation

Reproducing the look of real hair in computer graphics has always been a challenge. Computers can create three-dimensional structures resembling hair, but the process of rendering, in which the computer figures out how light will be reflected from those structures to create an image, requires complex calculations that take into account the scattering between hairs. Current methods use approximations that work well for dark hair and passably for brown, but computer-generated blondes still don't look like they're having more fun. Now, however, Cornell researchers have developed a new and much quicker method for rendering hair that promises to make blond (and other light-coloured) hair more realistic.

The problem is that light travelling through a mass of blond hair is not only reflected off the surfaces of the hairs but also passes through them and emerges in diffused form, from there to be reflected and transmitted some more. The only method that can render this perfectly is path tracing, in which the computer works backward from each pixel of the image, calculating the path of each ray of light back to the original light source. Since this requires hours of calculation, computer artists resort to approximations. The new method's result, in a test rendering of a swatch of blond hair, appears almost identical to a rendering produced by the laborious path-tracing method. Path tracing for the test required 60 hours of computation, while the new method took only 2.5 hours, the researchers report.
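A toy Monte Carlo sketch makes it clear why light hair is so expensive: each photon path through a stack of strands scatters repeatedly, and the renderer must average many random paths per pixel. This Python model is only an illustration of the multiple-scattering problem, not the Cornell method, and the albedo and layer counts below are invented.

```python
import random

def trace_path(layers, albedo, rng):
    """Follow one photon through a stack of `layers` hair strands. At each
    strand it survives with probability `albedo`, then scatters forward or
    backward at random; otherwise it is absorbed. Returns 1.0 if the
    photon exits the stack on either side, else 0.0."""
    depth = 0
    while 0 <= depth < layers:
        if rng.random() > albedo:                   # absorbed inside a strand
            return 0.0
        depth += 1 if rng.random() < 0.5 else -1    # transmit or reflect
    return 1.0

def estimate_exitance(layers, albedo, samples, seed=1):
    """Average many random paths, as a path tracer does per pixel."""
    rng = random.Random(seed)
    return sum(trace_path(layers, albedo, rng) for _ in range(samples)) / samples

# High-albedo (blond-like) strands let far more light escape than dark ones,
# and that light has bounced many more times before exiting:
print(estimate_exitance(8, 0.95, 20000))
print(estimate_exitance(8, 0.40, 20000))
```

Dark hair absorbs most photons after one or two bounces, which is why crude approximations work for it; blond hair keeps photons alive across many strands, so any shortcut that truncates the scattering shows up visibly.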



06 February 2008

N-Gage Gaming Platform

Mobile giant Nokia has begun a second assault on handheld gaming with the launch of the N-Gage platform. The company has shifted its approach from a dedicated games system to titles that can be played on a range of Nokia devices. At launch, only owners of the N81 can use the service, but the firm plans to roll it out to other N-series phones. N-Gage has the support of developers such as EA, Gameloft and Vivendi, who are making titles for the platform. Nokia first launched its N-Gage handset in 2003, designed to compete with the highly successful Nintendo Game Boy. But the device was criticised for its design, and poor sales led to the phone being quickly overhauled with new versions, including the QD.

Nokia persevered with the device, in different incarnations, and had sold more than two million so-called game decks by August 2007. Last year the firm said it was concentrating on N-Gage as a platform for titles, rather than as a specific handset. N-Gage will be competing for a slice of an industry worth more than $1bn. N-Gage is now a software download, which once installed acts as a gateway to games. Nokia has said it is now easier to find and download games using N-Gage, and customers can also try before they buy. Nokia is looking at ways to create games that combine connectivity with GPS and Web 2.0 applications.



05 February 2008

PDAs Prod Elderly to Exercise

In a study that appears in the February issue of the American Journal of Preventive Medicine, it was demonstrated that specially programmed PDAs, or personal digital assistants, can prod middle-aged and older Americans - the most sedentary segment of the U.S. population - into increasing their physical activity levels. The researchers invited the public to participate in this new study through local mass-media outlets, like the Palo Alto Daily News and the San Jose Mercury News. Out of 69 callers who were screened for eligibility, 37 were invited to be study participants and randomly assigned to an eight-week program in which they either received a Dell Axim X5 PDA, or traditional handouts related to physical activity. The Dell Axim X5, chosen for its large-sized, easy-to-read screen and good contrast, was fitted with a program that asked participants approximately three minutes' worth of questions. Among the questions: Where are you now? Who are you with? What barriers did you face in doing your physical activity routine? The device automatically beeped once in the afternoon and once in the evening; if participants ignored it the first time, it beeped three additional times at 30-minute intervals. During the second (evening) session, the device also asked participants about their goals for the next day.
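The prompting schedule described above is simple enough to sketch directly. The following Python function models it; the parameter names are mine, and times are expressed as minutes of the day for convenience.

```python
def beep_times(first_beep_min, answered_after=None, reminders=3, interval=30):
    """Minutes at which the PDA beeps for one session: the initial prompt
    plus up to `reminders` follow-ups at `interval`-minute gaps, stopping
    once the participant responds. `answered_after` is the zero-based
    index of the beep that was answered (None = never answered)."""
    times = [first_beep_min + i * interval for i in range(1 + reminders)]
    if answered_after is None:
        return times
    return times[:answered_after + 1]

# Afternoon session at 14:00 (minute 840), ignored until the fourth beep:
print(beep_times(840, answered_after=3))  # [840, 870, 900, 930]
```

Almost half the participants only responded at that fourth beep, so the persistence encoded in the extra reminders turned out to matter.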

With this program, participants could set goals, track their physical activity progress twice a day and get feedback on how well they were meeting their goals. After eight weeks, the researchers found that while participants assigned to the PDA group devoted approximately five hours each week to exercise, those in the control group spent only about two hours on physical activities; in other words, the PDA users were more than twice as active. One surprise was the participants' positive response to the program's persistence. The PDA users liked the three additional "reminder" beeps that went off if they failed to respond to the first one. In fact, almost half of them wound up responding to the PDA only after being beeped for the fourth time. The study targeted people interested in health changes, but with little if any knowledge of portable computer devices. During the eligibility screening, 93 percent said they had never used a PDA before. So there could have been difficulties in grasping the technology, or participants refusing to deal with it and giving up entirely. This, however, did not turn out to be a problem.
