30 July 2008

Computer vs Human Visual System

Cognitive science researchers have begun to develop a technique to turn our eyes and visual system into a programmable computer. Their findings are reported in the latest issue of the journal Perception. Harnessing the computing power of our visual system requires visually representing a computer program in such a way that, when an individual views the representation, the visual system naturally carries out the computation and generates a perception. Ideally, we would be able to glance at a complex visual stimulus (the software program), and our visual system (the hardware) would automatically and effortlessly generate a perception, which would inform us of the output of the computation.

Researchers used simple drawings of unambiguous boxes as inputs for their visually represented digital circuits. The positioning and shading of each box indicates which direction the image appears tilted. They also created visual representations of the logic gates NOT, which flips a circuit's state from 0 to 1 or vice versa; OR, which outputs 1 if one or both inputs are 1; and AND, which outputs 1 only if both inputs are 1. By perceptually walking through the proposed visual representation of a digital circuit, from the inputs downward to the output, our visual system naturally carries out the computation, so that the "output" of the circuit is the way we perceive the final box to tilt, and thus a 1 or a 0.
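The gate behaviour the article describes can be written down directly. The sketch below is illustrative only (the three-input circuit is hypothetical, not one from the paper): each gate maps perceived tilts, encoded as bits, to an output bit, and "walking" the circuit from inputs to output is just evaluating the nested gates.

```python
# Gates as described in the article, with tilt encoded as 0 or 1.

def NOT(a):
    """Flips a circuit's state: 0 -> 1, 1 -> 0."""
    return 1 - a

def OR(a, b):
    """Outputs 1 if one or both inputs are 1."""
    return 1 if (a or b) else 0

def AND(a, b):
    """Outputs 1 only if both inputs are 1."""
    return 1 if (a and b) else 0

def circuit(x, y, z):
    # A hypothetical three-input circuit: the perceived tilt of the
    # final box corresponds to the returned bit.
    return OR(AND(x, y), NOT(z))

# Walking the circuit for every input combination yields its truth table.
for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            print(x, y, z, "->", circuit(x, y, z))
```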

More information:


23 July 2008

Customising User Interfaces

Off-the-shelf designs are especially frustrating for the disabled, the elderly and anybody who has trouble controlling a mouse. A new approach to design, developed at the University of Washington, would put each person through a brief skills test and then generate a mathematically based version of the user interface optimized for his or her vision and motor abilities. A paper describing the system, which for the first time offers an instantly customizable approach to user interfaces, was presented July 15 in Chicago at a meeting of the Association for the Advancement of Artificial Intelligence. Tests showed the system closed the performance gap between disabled and able-bodied users by 62%, and disabled users strongly preferred the automatically generated interfaces. The system, called Supple, begins with a one-time assessment of a person's mouse pointing, dragging and clicking skills. A ring of dots appears on the screen, and as each dot lights up the user must quickly click on it. The task is repeated with different-sized dots. Other prompts ask the participant to click and drag, select from a list, and click repeatedly on one spot. Participants can move the cursor using any type of device.

The test takes about 20 minutes for an able-bodied person or up to 90 minutes for a person with motor disabilities. An optimization program then calculates how long it would take the person to complete various computer tasks, and in a couple of seconds it creates the interface that maximizes that person's accuracy and speed when using a particular program. Researchers tested the system last summer on six able-bodied people and 11 people with motor impairments. The resulting interfaces showed one size definitely did not fit all. A man with severe cerebral palsy used his chin to control a trackball and could move the pointer quickly but spastically. Based on his skills test, Supple generated a user interface where all the targets were bigger than normal, and lists were expanded to minimize scrolling. By contrast, a woman with muscular dystrophy who participated in the study used both hands to move a mouse. She could make very precise movements but moved the cursor very slowly and with great effort because of weak muscles. Based on her results, Supple automatically generated an interface with small buttons and a compressed layout. The program could also be used to create interfaces that can adapt to different sizes of screen, for example on handheld devices.
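The core idea, choosing the layout that minimises a user's predicted task cost, can be sketched with a toy model. Everything below is an assumption for illustration, not Supple's actual cost function: pointing time follows Fitts's law, travel effort grows with layout extent (bigger widgets spread the layout out), and small targets carry a miss-and-retry penalty that hurts imprecise users most.

```python
import math

def predicted_cost(width, b, effort, sigma, retry=2.0):
    """Toy per-click cost for a button of the given width (pixels).

    b      -- Fitts's-law slope fitted from the skills test (s/bit)
    effort -- per-pixel cost of moving the cursor (weak muscles -> high)
    sigma  -- spread of the user's pointing endpoints (spastic -> high)
    """
    distance = 100 + 6 * width               # layout extent grows with widget size
    fitts = b * math.log2(distance / width + 1)
    travel = effort * distance               # effort proportional to distance moved
    miss = min(1.0, sigma / width) * retry   # chance of missing, times retry cost
    return fitts + travel + miss

def best_width(b, effort, sigma, sizes=(20, 40, 80)):
    """Pick the candidate button width with the lowest predicted cost."""
    return min(sizes, key=lambda w: predicted_cost(w, b, effort, sigma))

# Fast-but-spastic trackball user: imprecise aim, cheap travel -> big targets.
print(best_width(b=0.3, effort=0.001, sigma=30))   # -> 80
# Precise-but-slow user with weak muscles: costly travel -> compact layout.
print(best_width(b=0.3, effort=0.02, sigma=2))     # -> 20
```

This mirrors the two participants described above: the same optimisation, fed different fitted parameters, lands on opposite layouts.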

More information:


20 July 2008

Mobile Future

Sales of smartphones are expected to overtake those of laptops in the next 12 to 18 months as the mobile phone completes its transition from voice communications device to multimedia computer. Convergence has been the Holy Grail for mobile phone makers, software and hardware partners, and consumers for more than a decade. And for the first time the rhetoric of companies like Nokia, Samsung and Motorola, which have boasted of putting a multimedia computer in your pocket, no longer seems far-fetched. Last year Nokia sold almost 200m camera phones and about 146m music phones, making it the world's biggest seller of digital cameras and MP3 players. In the coming year the firm predicts it will sell 35 million GPS-enabled phones as personal navigation becomes the latest feature to be assimilated into the mobile phone.
Symbian's operating system shipped on 188 million phones last year, and a third of those came with GPS. Convergence is being driven by a combination of software, services and hardware. The first phones powered by a chip running at 1GHz will hit the market later this year, seven years after the first desktop chip broke the gigahertz barrier. Qualcomm's 1GHz Snapdragon chipset will debut inside a number of handsets, including some from Samsung and HTC. As well as raw horsepower, Snapdragon features a dedicated application processor, along with the ability to handle 12-megapixel digital photos and up to 720p high-definition video. Finally, 3D graphics acceleration is becoming standard on many of today's mobile phones, and specialists like Nvidia have joined the market.

More information:


12 July 2008

IV08 Article

A few days ago I presented a paper, co-authored with colleagues from Cogent at Coventry University, at the 12th International Conference on Information Visualisation 2008 (IV08). Environmental monitoring brings many challenges to wireless sensor networks, including the need to collect and process large volumes of data before presenting the information to the user in an easy-to-understand format.

The paper presented SensAR, a prototype augmented reality interface specifically designed for monitoring environmental information. The inputs to our prototype are sound and temperature data collected inside a networked environment. Participants can visualise 3D as well as textual representations of environmental information in real time using a lightweight handheld computer.
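As a flavour of the textual half of such an interface, here is a minimal sketch (assumptions for illustration, not the SensAR implementation) of turning a networked sensor reading into the label an AR client could render beside the sensor's 3D marker.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """One sound/temperature sample from a hypothetical sensor node."""
    node_id: str
    temperature_c: float
    sound_db: float

def overlay_label(r: Reading) -> str:
    """Compose the text shown next to a sensor's 3D marker."""
    return f"{r.node_id}: {r.temperature_c:.1f} °C, {r.sound_db:.0f} dB"

print(overlay_label(Reading("lab-01", 22.46, 48.7)))
# -> lab-01: 22.5 °C, 49 dB
```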

A draft version of the paper can be downloaded from here.

06 July 2008

IV08 Conference

The Information Visualisation 2008 (IV08) international conference, sponsored by the IEEE, focuses on interdisciplinary methods and related research across the sciences, medicine, engineering, media and commerce. This three-day event will cover the research and development undertaken to meet today's growing demand for information transfer through the medium of computing, emphasising the links between academia and industry.

The goal of IV08 is to stimulate views and provide a forum where researchers and practitioners can discuss the latest developments in Information Visualisation. IV08 is inviting commercial organisations to display and demonstrate related hardware and software products in the exhibition during the conference. The conference will be of significant benefit to researchers, engineers, programme managers and marketing managers who need to stay aware of the latest products and services in the dynamic area of computing, especially Information Visualisation and Graphics.

More information:


04 July 2008

Google 'Street View'

Google's plans to launch a mapping tool in the UK could be referred to the Information Commissioner. Street View matches photos of locations to maps, including passers-by captured as the photographs were taken. Privacy International, a UK rights group, believes the technology breaks data protection laws. Street View has already launched in the US and includes photos of streets in major American cities. Photography of areas in the UK, including London, is believed to have started this week.

Some individuals in the US have complained about their images being used, and Google has said it removes them on request. The company has said it has begun to trial face-blurring technology, using an algorithm that detects human faces in photographs. In the US it is legal to take photos of people on public streets; in the UK, however, because Street View is being used for commercial ends, anyone who appears in a photo needs to grant his or her consent. Google, for its part, says it complies with all local laws.
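The blurring half of that approach is straightforward once a face has been detected. The sketch below assumes the detection step (the face rectangle is hypothetical) and simply pixelates the detected region, so the person is unrecognisable while the rest of the street image is untouched.

```python
import numpy as np

def pixelate_region(image, x, y, w, h, block=8):
    """Replace each block x block tile inside the region with its mean colour."""
    out = image.copy()
    for by in range(y, y + h, block):
        for bx in range(x, x + w, block):
            y2, x2 = min(by + block, y + h), min(bx + block, x + w)
            out[by:y2, bx:x2] = out[by:y2, bx:x2].mean(axis=(0, 1))
    return out

# A stand-in street image; in practice the rectangle would come from a
# face detector run over the captured photograph.
img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
blurred = pixelate_region(img, x=16, y=16, w=24, h=24)
# Inside the region each tile is flat; outside it the image is unchanged.
```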

More information:


03 July 2008

V-Government Workshop

The workshop is aimed at those interested in how government should operate in a post-Web 2.0 world. Virtual worlds such as Second Life, Habbo Hotel and World of Warcraft pose opportunities and challenges for governments; mirror worlds and augmented reality add even more dimensions to the choices of engagement that government faces in the emerging future.

Does v-Government offer new and exciting ways to provide government services, engage citizens, innovate in policy making and foster e-democracy, or is it yet another distraction from the true business of government?

More information:


02 July 2008

Brainwave Interaction in Second Life

On 7th June 2008, Keio University succeeded in the world’s first demonstration experiment in which a disabled person used brainwaves to chat and stroll through a virtual world. The research group applied the brainwave-based computer operation technology it announced last year and succeeded in enabling a disabled person suffering from a muscle disorder (a 41-year-old male) to stroll through “Second Life®”, a 3D virtual world on the Internet, to walk towards the avatar of a student logged in at Keio University, located 16 km from the subject’s home, and to have a conversation with the student using the “voice chat” function.
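The article gives no implementation details, but the control loop in such systems can be sketched as follows. Everything here is a hypothetical illustration, not Keio's system: a classifier (assumed, not shown) labels each EEG window with an imagined movement, and a dispatcher issues an avatar command only when one label dominates the recent windows, smoothing out noisy single-window classifications.

```python
from collections import Counter

# Hypothetical mapping from imagined movements to avatar controls.
COMMANDS = {
    "imagine_feet": "walk_forward",
    "imagine_left_hand": "turn_left",
    "imagine_right_hand": "turn_right",
}

def dispatch(window_labels, min_votes=3):
    """Return an avatar command when one label dominates, else None."""
    label, votes = Counter(window_labels).most_common(1)[0]
    if votes >= min_votes and label in COMMANDS:
        return COMMANDS[label]
    return None  # no confident intent: the avatar stays put

print(dispatch(["imagine_feet"] * 4 + ["rest"]))  # -> walk_forward
print(dispatch(["rest", "imagine_left_hand"]))    # -> None
```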

This demonstration experiment opens a new possibility for severely motion-impaired people to communicate with others and to engage in business. It is a marriage of leading-edge technologies in brain science and the Internet, and is the world’s first successful example of meeting people and holding a conversation in a virtual world. This research is an achievement of the Biomedical Research Project at Keio University, a collaboration between the Faculty of Science and Technology, the Tsukigase Rehabilitation Center and the Department of Rehabilitation Medicine of the School of Medicine. The experiment was demonstrated at the 17th Keio University Faculty of Science and Technology Open Lecture on 7th June 2008.

More information: