28 August 2008

Personalised Maps Show Street View

Finding your way across an unfamiliar city is a challenge for most people's sense of direction. Software that generates personalised maps, showing only relevant information and carefully chosen views of selected landmarks, could make disorientation a thing of the past. Thanks to online services such as Google Maps and Microsoft Live Maps, online maps now contain more information than ever. It is possible to toggle between a regular schematic, a ‘bird's eye view’ that uses aerial photos, and even three-dimensional representations of a city's buildings. Those multiple perspectives can help users locate themselves more accurately.

Grabler's team at Berkeley, working with researchers at ETH Zurich, used a perceptual study of San Francisco from the 1960s to help identify which landmark buildings to include on a map of the city. They found that landmark buildings fell into three categories, and each building in San Francisco was then given a rating based on its score in each of the three.
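The article does not spell out how those ratings are computed, but a weighted combination along the following lines gives the general idea. This Python sketch is purely illustrative: the category names, weights and cut-off are assumptions, not values from the Berkeley work.

# Hypothetical sketch: combine three per-category scores into one landmark
# rating. Category names, weights and the cut-off are illustrative guesses,
# not the values used by Grabler's team.

CATEGORY_WEIGHTS = {"semantic": 0.4, "visual": 0.35, "structural": 0.25}

def landmark_rating(scores):
    """scores: dict mapping each category name to a value in [0, 1]."""
    return sum(CATEGORY_WEIGHTS[c] * scores.get(c, 0.0) for c in CATEGORY_WEIGHTS)

# Keep only the highest-rated buildings for the final map.
buildings = {
    "Ferry Building": {"semantic": 0.9, "visual": 0.8, "structural": 0.7},
    "Generic office block": {"semantic": 0.2, "visual": 0.3, "structural": 0.4},
}
landmarks = [name for name, s in buildings.items() if landmark_rating(s) > 0.5]
print(landmarks)  # ['Ferry Building']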
When generating a map, the user can choose to display those landmarks in one of two ways. They can be shown as straightforward three-dimensional depictions, but that hides some of the buildings' facades. To provide the user with more information, the team added an oblique projection option, which shows all visible sides of a building. Although the buildings look distorted compared with a regular three-dimensional depiction, it is possible to see all the facades a building presents to the street, including both facades of a building on a corner. But buildings depicted this way can hide some streets. This is avoided by widening the map's roads and shrinking the buildings' heights, so that roads remain visible even behind tall buildings. The user's final decision is to choose the purpose of their map. On a shopping map, all the major shops become semantically important and are included; a food map, by contrast, will show fewer shops but more of the city's restaurants.
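How a purpose-specific map might filter points of interest can be sketched in the same hedged way; the purpose profiles and threshold below are invented for illustration and are not taken from the paper.

# Illustrative sketch only: re-weight business categories by map purpose.
# The purpose profiles and the threshold are invented, not from the paper.

PURPOSE_PROFILES = {
    "shopping": {"shop": 1.0, "restaurant": 0.3, "museum": 0.5},
    "food":     {"shop": 0.3, "restaurant": 1.0, "museum": 0.5},
}

def select_pois(pois, purpose, threshold=0.6):
    """pois: list of (name, category, base_importance in [0, 1]) tuples."""
    weights = PURPOSE_PROFILES[purpose]
    return [name for name, category, importance in pois
            if weights.get(category, 0.0) * importance >= threshold]

pois = [("Union Square shops", "shop", 0.9),
        ("Ferry Building eateries", "restaurant", 0.9),
        ("Small gift shop", "shop", 0.4)]
print(select_pois(pois, "shopping"))  # shops make the cut
print(select_pois(pois, "food"))      # restaurants make the cut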

More information:

http://technology.newscientist.com/channel/tech/dn14562-personalised-maps-show-the-view-from-the-street-.html

27 August 2008

Sign Language Over Cell Phones

A group at the University of Washington has developed software that for the first time enables deaf and hard-of-hearing Americans to use sign language over a mobile phone. UW engineers got the phones working together this spring, and recently received a National Science Foundation grant for a 20-person field project that will begin next year in Seattle. This is the first time two-way real-time video communication has been demonstrated over cell phones in the United States. Since the team posted a video of the working prototype on YouTube, deaf people around the country have been writing to them on a daily basis. On the move, deaf people currently communicate by cell phone using text messages. Video is much better than text messaging because it is faster and conveys emotion better. Low data transmission rates on U.S. cellular networks have so far prevented real-time video from being transmitted at enough frames per second to convey sign language.

United States cellular networks allow only about one tenth of the data rates common in places such as Europe and Asia (sign language over cell phones is already possible in Sweden and Japan). The current version of MobileASL uses a standard video compression tool to stay within the data transmission limit. Future versions will incorporate custom tools to get better quality. The team developed a scheme that transmits the person's face and hands in high resolution and the background in lower resolution. They are now working on another feature that detects when people are moving their hands, to save processing power and battery life when the person is not signing. Mobile video sign language won't be widely available until the service is provided through a commercial cell-phone manufacturer.
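The press release does not give implementation details of the face-and-hands scheme, but the idea can be sketched in a hedged way: keep detector-supplied regions of interest at full resolution and coarsen everything else before the frame goes to a standard encoder. The Python sketch below (using numpy; the roi_boxes input is an assumed output of some face and hand detector) is only an approximation of that idea, not MobileASL's code.

import numpy as np

# Rough illustration of region-of-interest coding: coarsen the background and
# keep face/hand rectangles sharp before handing the frame to a standard
# codec. The roi_boxes are assumed to come from a detector (not shown); this
# is not the MobileASL implementation.

def downsample_background(frame, roi_boxes, factor=4):
    """frame: HxW greyscale array; roi_boxes: list of (top, left, bottom, right)."""
    h, w = frame.shape
    small = frame[::factor, ::factor]                     # crude decimation
    coarse = np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)[:h, :w]
    out = coarse.copy()
    for top, left, bottom, right in roi_boxes:
        out[top:bottom, left:right] = frame[top:bottom, left:right]  # keep ROI sharp
    return out

frame = np.random.randint(0, 256, (240, 320), dtype=np.uint8)    # one video frame
processed = downsample_background(frame, [(40, 100, 140, 220)])  # e.g. a face box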

More information:

http://www.sciencedaily.com/releases/2008/08/080821164609.htm

http://mobileasl.cs.washington.edu/index.html

http://youtube.com/watch?v=FaE1PvJwI8E

25 August 2008

High Res Images for Video Games

The images of rocks, clouds, marble and other textures that serve as background images and details for 3D video games are often hand painted and thus costly to generate. A breakthrough from a UC San Diego computer science undergraduate now offers video game developers the possibility of high quality yet lightweight images for 3D video games that are generated "on the fly" and are free of stretch marks, flickering and other artifacts. The 2008 SIGGRAPH paper marks an important improvement over Perlin noise, an established technique in which small computer programs create many layers of noise that are piled on top of each other. The layers are then manipulated -- like layers of paint on a canvas -- in order to produce detailed and realistic textures such as rock, soil, cloud, water and marble.
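As background, the layering idea behind Perlin-style textures can be sketched in a few lines of Python: several octaves of a cheap noise function are summed at doubling frequency and halving amplitude. The hash-based value noise below is a generic stand-in used for illustration; it is not the improved technique described in the SIGGRAPH paper.

import math

# Generic sketch of layered ("fractal") noise, the Perlin-style idea the paper
# improves on. The hash-based value noise is a simple stand-in, not the new
# technique from the SIGGRAPH paper.

def _hash_noise(ix, iy):
    """Deterministic pseudo-random value in [0, 1] for an integer lattice point."""
    n = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
    n = (n ^ (n >> 13)) * 1274126177 & 0xFFFFFFFF
    return (n & 0xFFFF) / 0xFFFF

def value_noise(x, y):
    """Bilinear interpolation of lattice values gives smooth 2D noise."""
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    v00, v10 = _hash_noise(ix, iy), _hash_noise(ix + 1, iy)
    v01, v11 = _hash_noise(ix, iy + 1), _hash_noise(ix + 1, iy + 1)
    top = v00 + (v10 - v00) * fx
    bottom = v01 + (v11 - v01) * fx
    return top + (bottom - top) * fy

def layered_noise(x, y, octaves=5):
    """Sum octaves at doubling frequency and halving amplitude."""
    total, amplitude, frequency = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amplitude * value_noise(x * frequency, y * frequency)
        amplitude *= 0.5
        frequency *= 2.0
    return total / (2.0 - 2.0 ** (1 - octaves))  # roughly normalise to [0, 1]

print(layered_noise(3.7, 1.2))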

The new approach also eliminates the need to store the textures as huge images that take up valuable memory. Instead the textures are generated by computer programs on the fly every time an image is rendered. Both the stretch marks and the flickering in 3D video game backgrounds often stem from the same technical issue: choosing what color to make individual pixels. The researchers mapped elliptical areas of background images back to circular pixels and found that their technique yielded higher quality background images with less stretching and other distortions. The reason ellipses are a better fit goes back to basic geometry: when the cone that extends from a circular pixel intersects the background of a 3D scene, the region where the cone hits the background is an ellipse rather than a circle.
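A hedged sketch of what that geometry implies in practice: rather than sampling the texture once at the pixel's centre, an anisotropic filter averages samples spread over the elliptical footprint the pixel's cone leaves on the surface. The sampling pattern and the stripe texture below are illustrative assumptions, not the method from the paper.

import math

# Illustrative sketch: average a procedural texture over the elliptical
# footprint that a pixel's viewing cone leaves on a surface, instead of
# taking a single point sample. The texture and sampling pattern here are
# placeholders, not the technique from the SIGGRAPH paper.

def stripe_texture(u, v):
    """A cheap procedural texture standing in for noise-based marble or rock."""
    return 0.5 + 0.5 * math.sin(20.0 * u)

def sample_elliptical(texture, centre, major_axis, minor_axis, rings=3):
    """Average texture lookups over the ellipse spanned by the two axis vectors."""
    cu, cv = centre
    (mu, mv), (nu, nv) = major_axis, minor_axis
    offsets = [(0.0, 0.0)]                      # points inside the unit disk
    for r in range(1, rings + 1):
        radius = r / rings
        for k in range(6 * r):
            angle = 2.0 * math.pi * k / (6 * r)
            offsets.append((radius * math.cos(angle), radius * math.sin(angle)))
    total = 0.0
    for a, b in offsets:                        # map the disk into the ellipse
        total += texture(cu + a * mu + b * nu, cv + a * mv + b * nv)
    return total / len(offsets)

# A grazing view stretches the footprint: long major axis, short minor axis.
print(sample_elliptical(stripe_texture, (0.3, 0.7), (0.05, 0.0), (0.0, 0.005)))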

More information:

http://www.physorg.com/news137771248.html

22 August 2008

Archaeologists Reconnect Fragments

For several decades, archaeologists in Greece have been painstakingly attempting to reconstruct wall paintings that hold valuable clues to the ancient culture of Thera, an island civilization that was buried under volcanic ash more than 3,500 years ago. Researchers from Princeton University report on their work in a paper presented Aug. 15 in Los Angeles at the Association for Computing Machinery's annual SIGGRAPH conference, widely considered the premier meeting in the field of computer graphics. To design their system, the Princeton team collaborated closely with the archaeologists and conservators working at Akrotiri, which flourished in the Late Bronze Age, around 1630 B.C.E.

Reconstructing an excavated fresco, mosaic or similar archaeological object is like solving a giant jigsaw puzzle, only far more difficult. The original object often has broken into thousands of tiny pieces -- many of which lack any distinctive color, pattern or texture and possess edges that have eroded over the centuries. As a result, the task of reassembling artifacts often requires a lot of human effort, as archaeologists sift through fragments and use trial and error to hunt for matches. While other researchers have endeavored to create computer systems to automate parts of this undertaking, their attempts relied on expensive, unwieldy equipment that had to be operated by trained computer experts. The Princeton system, by contrast, uses inexpensive, off-the-shelf hardware and is designed to be operated by archaeologists and conservators rather than computer scientists. It employs a combination of powerful computer algorithms and a processing system that mirrors the procedures traditionally followed by archaeologists.

In 2007, a large team of Princeton researchers made a series of trips to Akrotiri, initially to observe and learn from the highly skilled conservators at the site, and later to test their system. During a three-day visit to the island in September 2007, they successfully measured 150 fragments using their automated system. Although the system is still being perfected, it already has yielded promising results on real-world examples.

The setup used by the Princeton researchers consists of a flatbed scanner (of the type commonly used to scan documents, here used to capture the surface of the fragment), a laser rangefinder (essentially a laser beam that measures the width and depth of the fragment) and a motorized turntable (which allows for precise rotation of the fragment as it is being measured). These devices are connected to a laptop computer. By following a precisely defined and intuitive sequence of actions, a conservator working under the direction of an archaeologist can use the system to measure, or acquire, up to 10 fragments an hour. The flatbed scanner first is used to record several high-resolution color images of the fragment. Next, the fragment is placed on the turntable, and the laser rangefinder measures its visible surface from various viewpoints. The fragment is then turned upside down and the process is repeated. Finally, computer algorithms undertake the challenging work of making sense of this information.

The Princeton researchers have dubbed the software that they have developed ‘Griphos’, which is Greek for puzzle or riddle. One algorithm aligns the various partial surface measurements to create a complete and accurate three-dimensional image of the piece. Another analyzes the scanned images to detect cracks or other minute surface markings that the rangefinder might have missed. The system then integrates all of the information gathered -- shape, image and surface detail -- into a rich and meticulous record of each fragment.

Once it has acquired an object's fragments, the system begins to reassemble them, examining a pair of fragments at a time. Using only the information from edge surfaces, it acts as a virtual archaeologist, sorting through the fragments to see which ones fit snugly together. Analyzing a typical pair of fragments to see whether they match is very fast, taking only a second or two. However, the time needed to reassemble a large fresco may be significant, as the system must examine all possible pairs of fragments. To make the system run faster, the researchers are planning to incorporate a number of additional cues that archaeologists typically use to simplify their search for matching fragments. These include information such as where fragments were found, their pigment texture and their state of preservation.
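The pairwise search at the heart of that reassembly step has a simple overall shape, sketched below in hedged form: score every pair of acquired fragments by how well their edges appear to fit, and keep only the most promising pairs for a conservator to check. The fragment representation and the edge_fit_score here are invented placeholders, not the Griphos algorithms themselves.

from itertools import combinations

# Hedged sketch of the pairwise search: score every pair of fragments by how
# well their edges fit and keep the most promising matches. The fragment
# representation and edge_fit_score are invented placeholders, not the
# Griphos algorithms.

def edge_fit_score(frag_a, frag_b):
    """Placeholder: compare sampled edge-thickness profiles of two fragments."""
    profile_a, profile_b = frag_a["edge_profile"], frag_b["edge_profile"]
    n = min(len(profile_a), len(profile_b))
    error = sum(abs(profile_a[i] - profile_b[i]) for i in range(n)) / n
    return 1.0 / (1.0 + error)          # 1.0 means a perfect fit

def propose_matches(fragments, threshold=0.8):
    """Examine all pairs (quadratic in the number of fragments) and rank them."""
    candidates = []
    for a, b in combinations(fragments, 2):
        score = edge_fit_score(a, b)
        if score >= threshold:
            candidates.append((score, a["id"], b["id"]))
    return sorted(candidates, reverse=True)

fragments = [
    {"id": "F001", "edge_profile": [4.1, 4.0, 3.9, 3.8]},
    {"id": "F002", "edge_profile": [4.0, 4.0, 3.9, 3.7]},
    {"id": "F003", "edge_profile": [7.5, 7.2, 7.0, 6.8]},
]
print(propose_matches(fragments))       # F001/F002 fit; F003 does not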

More information:

http://www.sciencedaily.com/releases/2008/08/080815130417.htm

19 August 2008

Hollywood Hair Using CGI

The University of California, San Diego today announced a new method for accurately capturing the shape and appearance of a person’s hairstyle. The results closely match real hairstyles and can be used for animation. This level of realism for animated hairstyles is one step closer to the silver screen, thanks to new research being presented at SIGGRAPH, one of the most competitive computer graphics conferences in the world. The breakthrough is a collaboration between researchers at UC San Diego, Adobe Systems Incorporated (Nasdaq: ADBE) and the Massachusetts Institute of Technology. The computer graphics researchers captured the shape and appearance of the hairstyles of real people using multiple cameras, light sources and projectors. The computer scientists then created algorithms to ‘fill in the blanks’ and generate photo-realistic images of the hairstyles from new angles and under new lighting conditions.

From here, the computer scientists found a new way to precisely simulate how light reflects off each strand of hair. The result is the ability to create photo-realistic images of the hairstyle from any angle; the automated system even creates realistic highlights. This process of creating new images based on data from related images is called interpolation. By determining the orientation of individual hairs, the researchers can realistically estimate how the hairstyle will shine no matter what angle the light is coming from. The new computational approach can be used for much more than generating images of a hairstyle based on what the style looks like from other angles. One possible extension of this work is making an animated character’s hair realistically blow in the wind. This is possible because the researchers also developed a way to calculate what each individual hair fiber lying between the visible surface and the scalp is doing.
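The paper's own reflectance representation is not described here, but the role of strand orientation can be illustrated with a classic Kajiya-Kay-style highlight, in which a fibre's shine depends on the angle between its tangent and the light and viewing directions. The Python sketch below is that textbook model, offered only as an illustration, not the method used by the UC San Diego, Adobe and MIT researchers.

import math

# Illustrative sketch: a classic Kajiya-Kay-style hair highlight, showing how
# a strand's tangent direction controls its shine. This is the textbook model,
# not the reflectance representation used in the SIGGRAPH paper.

def _normalise(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def kajiya_kay(tangent, light_dir, view_dir, shininess=60.0):
    """Return (diffuse, specular) intensity for one hair strand."""
    t, l, v = _normalise(tangent), _normalise(light_dir), _normalise(view_dir)
    tl, tv = _dot(t, l), _dot(t, v)
    sin_tl = math.sqrt(max(0.0, 1.0 - tl * tl))
    sin_tv = math.sqrt(max(0.0, 1.0 - tv * tv))
    diffuse = sin_tl
    specular = max(0.0, tl * tv + sin_tl * sin_tv) ** shininess
    return diffuse, specular

# The same light and camera, two differently oriented strands:
print(kajiya_kay((1, 0, 0), (0, 0, 1), (0, 0.3, 1)))   # strand across the light
print(kajiya_kay((0, 0, 1), (0, 0, 1), (0, 0.3, 1)))   # strand along the light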

More information:

http://www.sciencedaily.com/releases/2008/08/080813095716.htm

15 August 2008

Japanese Satellite Rides Skyward

Yesterday, the first telecommunications satellite wholly designed and built in Japan went safely into orbit. The Superbird-7 spacecraft went up on an Ariane rocket from Europe's Kourou launch facility in French Guiana. Built by the Mitsubishi Electric Corporation, the satellite will deliver TV and other services to Japan and the wider Asia-Pacific region. Until now, all Japanese broadcasters and commercial telecoms carriers have used space platforms made in the US. As is customary for an Ariane, the latest mission delivered two satellites into orbit.

The second was the AMC-21 spacecraft, a TV and internet platform whose services will be focussed on North and Central America. The rocket left the ground at 1744 local time (2044 GMT) and released the Superbird-7 just under half-an-hour later, with the AMC-21 following shortly afterwards. This flight was the fifth Ariane mission of 2008. Two further flights are planned in the coming months - making this year's schedule the busiest since the vehicle's commercial introduction in 1999.

More information:

http://news.bbc.co.uk/2/hi/science/nature/7562213.stm

13 August 2008

Nokia N96

The Nokia N96 doesn’t have a touch screen like the iPhone and it feels less well built. But there’s not much else here to disappoint. If you’re an everyday digital camera user, this phone really will be a fitting replacement for your compact. That’s because the formidable N96 boasts a five-megapixel camera complete with a Carl Zeiss Tessar lens. Without doubt it’s the feature I’ve been using most. The N96 is capable of producing bumper-sized shots too - up to a resolution of 2592 x 1944 pixels. The video mode of the camera is top-dollar too. A super-responsive microphone is so acute it will pick up a pin dropping, while playback of captured footage is complemented by the dinky little twin speakers located at each end of the phone.

It’s the multimedia angle that really captures the imagination. MPEG-4, Windows Media Video and Flash Video are all supported, and USB 2.0 connectivity, WLAN and HSDPA support mean file transfer speeds are very decent. With 16 gigabytes of internal memory it’ll hold a decent wedge of content – up to 40 hours according to Nokia, and even that is expandable with a microSDHC card that’ll boost total capacity to a bulging 24GB. It also has an integrated DVB-H (Digital Video Broadcasting – Handheld) receiver that allows you to view live TV. Gaming is another aspect of the N96 that impresses. Nokia’s own N-Gage made-for-mobile gaming facility promises much, especially when it comes to tackling the likes of Fifa 08 or Asphalt 3: Street Rules. The media keys on the stubby side of the dual-case turn your phone into an instant gaming machine.

More information:

http://tech.uk.msn.com/features/article.aspx?cp-documentid=9141699

02 August 2008

Geological Mapping Gets Joined Up

The world's geologists have dug out their maps and are sticking them together to produce the first truly global resource of the world's rocks. The OneGeology project pools existing data about what lies under our feet and has made it available on the web. Led by the British Geological Survey (BGS), the project involves geologists from 80 nations. Between 60% and 70% of the Earth's surface is now available down to a scale of 1:1,000,000. At that resolution, people can zoom in on a small part of their city. Eventually, people will be able to get up close and see the rocks beneath their house. Project organisers explained that what is novel is that the project takes local geological information and makes it global.

The resource displays geological information with the use of a "virtual globe", in much the same way as Google Earth now presents satellite images. Eventually, it is hoped that the geological maps will be detailed enough to help companies find the Earth's exploitable resources, such as minerals and oil. The developers of the system added that it would also help scientists and engineers learn more about the Earth and its environmental changes. At present, most of the globe is available at the scale of 1:1,000,000. The project is the first global geological map that is constantly updated, so the resolution will only get better. In France and Britain, users of the OneGeology resource can already look at the rocks that lie directly beneath their feet in 3D.

More information:

http://news.bbc.co.uk/2/hi/science/nature/7535379.stm