26 February 2014

ERWIN the Friendly Robot

A ‘friendly robot’ called ERWIN will be used to help scientists understand how more realistic long-term relationships might be developed between humans and androids. ERWIN (Emotional Robot with Intelligent Network) is the brainchild of Dr John Murray, from the School of Computer Science, University of Lincoln, UK.


The researchers are trying to find out how human-like thought biases in robot characteristics affect the human-robot relationship. It is hoped the research will not only help scientists to understand and develop more realistic human-robot relationships, but also help to inform how relationships are formed by children with autism, Asperger syndrome or attachment disorder.

More information:

24 February 2014

Using Holograms to Improve Electronic Devices

Holography is a technique based on the wave nature of light which exploits the interference between an object beam and a coherent background (reference) beam. It is commonly associated with images made from light, such as those on driver’s licenses or paper currency, but this is only a narrow field of holography. The first holograms were designed in the late 1940s for use with electron microscopes; a decade later, with the advent of the laser, optical holographic images were popularized. Since then, other fields have advanced significantly by using wave interference to produce holograms, including acoustic holograms used in seismic applications and microwave holography used in radar systems. Holography has also been recognized as a future data storage technology with unprecedented capacity and the ability to write and read large amounts of data in a highly parallel manner. A team of researchers from the University of California, Riverside Bourns College of Engineering and the Russian Academy of Sciences has demonstrated a new type of holographic memory device that could provide such data storage capacity and data processing capabilities in electronic devices.


The new type of memory device uses spin waves instead of optical beams. Spin waves are advantageous because spin wave devices are compatible with conventional electronic devices and may operate at a much shorter wavelength than optical devices, allowing for smaller electronic components with greater storage capacity. Experimental results obtained by the team show it is feasible to apply holographic techniques developed in optics to magnetic structures to create a magnetic holographic memory device. The research combines the advantages of magnetic data storage with wave-based information transfer. The experiments were conducted using a 2-bit magnetic holographic memory prototype. A pair of magnets, which represent the memory elements, were aligned in different positions on the magnetic waveguides. Spin waves propagating through the waveguides are affected by the magnetic field produced by the magnets. When spin wave interference was applied in the experiments, a clear picture was produced and the researchers could recognize the magnetic states of the magnets. All experiments were done at room temperature.
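
As a rough illustration of the interferometric readout idea only, and not the team’s actual measurement scheme, the toy Python model below treats each memory element as a magnet that shifts the phase of a passing spin wave and recovers the stored bit by interfering that wave with a reference wave of swept phase; the phase values and the decoding rule are assumptions made for this sketch.

import numpy as np

# Toy model (illustration only): each memory element is a magnet that shifts the
# phase of a spin wave travelling past it by 0 (bit 0) or pi (bit 1). Readout
# interferes the object wave with a reference wave of swept phase; the reference
# phase at which constructive interference peaks reveals the stored bit. The
# phase values and decoding rule are assumptions, not the prototype's parameters.

BIT_PHASE = {0: 0.0, 1: np.pi}                      # assumed phase shift per state

def read_element(bit, ref_phases):
    """Detected amplitude of the object wave interfering with the reference wave."""
    obj = np.exp(1j * BIT_PHASE[bit])               # object spin wave
    ref = np.exp(1j * ref_phases)                   # swept reference wave
    return np.abs(obj + ref)                        # interference amplitude

ref_phases = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
stored_bits = (1, 0)                                # state of the 2-bit prototype

for element, bit in enumerate(stored_bits):
    amplitude = read_element(bit, ref_phases)
    peak = ref_phases[np.argmax(amplitude)]         # constructive-interference peak
    decoded = 0 if np.cos(peak) > 0.0 else 1        # peak near 0 -> 0, near pi -> 1
    print(f"element {element}: stored {bit}, read back {decoded}")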

More information:

14 February 2014

Eugenides Foundation Invited Talk

On the 12th February 2014, I gave an invited talk at the Eugenides Foundation at an event entitled ‘Digital Games in Education: Design, creation and use’. The event was organised in co-operation with the Eugenides Foundation, the British Council and the Institute of Communication and Computer Systems (ICCS) (National Polytechnic University), with the kind support of the National Union of Informatics Teachers.


The title of my presentation was ‘Use of Virtual and Augmented Reality in the classroom’. In particular, my talk presented how university lecturers and school teachers can make use of augmented reality and virtual environments to assist them during teaching. I presented different case studies and demonstrated experimental examples in teaching computer graphics, GIS and IT.

More information:

10 February 2014

Attempting to Code the Human Brain

Somewhere, in a glass building several miles outside of San Francisco, a computer is imagining what a cow looks like. Its software is visualizing cows of varying sizes and poses, then drawing crude digital renderings, not from a collection of photographs, but from the software's ‘imagination’. The technology is the work of Vicarious FPC Inc., part of the rapidly expanding world of artificial intelligence. The company is weaving together bits of code inspired by the human brain, aiming to create a machine that can think like humans. Such powerful software is still several years away from being fully developed, if it can be developed at all, and it raises all sorts of ethical questions.


But the potential applications, such as masterfully translating foreign languages, identifying objects in photos and directing self-driving cars through busy intersections, are promising enough that companies are investing heavily in artificial intelligence. The idea of creating smarter computers based on the brain has been around for decades, as scientists have debated the best path to artificial intelligence. The approach has seen a resurgence in recent years thanks to far superior computing processors and advances in computer-learning methodologies. One of the most popular technologies in this area involves software that can train itself to classify objects as varied as animals, syllables and inanimate objects.
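
As a purely illustrative sketch of that ‘train itself to classify’ idea, the short Python program below fits a tiny softmax classifier to made-up two-dimensional feature vectors by gradient descent; the data, class count and model size are invented for the example and are unrelated to Vicarious’ actual system.

import numpy as np

# Minimal sketch of software that "trains itself to classify": a tiny softmax
# classifier fitted by gradient descent on made-up 2-D feature vectors. The
# data, class count and training settings are invented for this example.

rng = np.random.default_rng(0)

# Three toy classes as Gaussian blobs in a 2-D feature space.
centres = np.array([[0.0, 0.0], [3.0, 3.0], [0.0, 4.0]])
X = np.vstack([c + rng.normal(scale=0.5, size=(100, 2)) for c in centres])
y = np.repeat(np.arange(3), 100)

W = np.zeros((2, 3))                                # weights: features -> classes
b = np.zeros(3)                                     # per-class bias

for _ in range(500):                                # plain batch gradient descent
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)     # for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    probs[np.arange(len(y)), y] -= 1.0              # gradient of cross-entropy loss
    grad = probs / len(y)
    W -= 0.5 * (X.T @ grad)
    b -= 0.5 * grad.sum(axis=0)

accuracy = (np.argmax(X @ W + b, axis=1) == y).mean()
print(f"training accuracy: {accuracy:.2f}")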

More information:

09 February 2014

Masaryk University Invited Talk

On the 24th January 2014, I gave an invited talk at the Faculty of Informatics, Masaryk University, Brno, Czech Republic. The title of my presentation was ‘Emerging Visualisation and Interaction Technologies for Virtual Environments’. The presentation provided examples and results from projects carried out since 2000.

 
In particular, the talk presented an overview of my research, including augmented reality, procedural modelling, brain-computer interfaces and serious games. All the results indicate that visualisation and interaction technologies for virtual environments are becoming mainstream, since they can be applied very successfully to a variety of applications.

More information:

07 February 2014

The Public Eye

In 2011, the Indian government launched a massive programme to collect the iris patterns and fingerprints of all of its 1.2 billion citizens within three years. The numbers associated with the project are staggering. To date, more than 540 million people have enrolled in the optional programme, with one million more joining every day across 36,000 stations operated by 83 agencies. Each new iris pattern must be checked against every other pattern in the database to detect and prevent duplication: with roughly a million new enrolments compared against hundreds of millions of stored records, this equates to almost 500 trillion iris comparisons each day. Apart from its scale, what makes UIDAI (Unique Identification Authority of India) different is its purpose. It is not a security exercise or a means to control national borders, but a social development programme whose stated aim is ‘to give the poor an identity’.


The algorithms that make iris recognition possible were developed in University of Cambridge laboratories. Patented in 1994 and licensed to several companies around the world over the past two decades, they are still the basis of all significant iris recognition deployments. The question of whether anyone has the right to be anonymous has been debated for hundreds of years, but it is just as relevant today as it was in the 18th century. The thought of large-scale data collection by governments is a cause for concern: digital identity schemes in the UK and elsewhere have been scrapped over questions around data protection and the right to anonymity. But in India, anonymity is a huge problem. Just 4% of Indians have a passport, and fewer than half have a bank account.
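
To give a flavour of what those comparisons involve, the small Python sketch below compares two binary ‘iris codes’ by their normalised Hamming distance, the measure used in Daugman-style iris recognition; the code length, noise level and decision threshold are illustrative assumptions, and real deployments also handle occlusion masks and eye rotation.

import numpy as np

# Illustrative sketch of the comparison step in iris recognition: each iris is
# reduced to a binary "iris code", and two codes are compared by their
# normalised Hamming distance (the fraction of disagreeing bits). The code
# length, noise level and decision threshold below are assumptions; real
# deployments also use occlusion masks and tolerate eye rotation.

rng = np.random.default_rng(1)
CODE_BITS = 2048                     # assumed code length
THRESHOLD = 0.32                     # assumed match/no-match threshold

def hamming_distance(code_a, code_b):
    return np.count_nonzero(code_a ^ code_b) / code_a.size

enrolled = rng.integers(0, 2, CODE_BITS, dtype=np.uint8)        # stored template
same_eye = enrolled.copy()
noisy = rng.choice(CODE_BITS, size=CODE_BITS // 10, replace=False)
same_eye[noisy] ^= 1                 # ~10% bit noise from a fresh capture
different_eye = rng.integers(0, 2, CODE_BITS, dtype=np.uint8)   # another person

for label, probe in (("same eye", same_eye), ("different eye", different_eye)):
    d = hamming_distance(enrolled, probe)
    print(f"{label}: distance {d:.3f} -> {'match' if d < THRESHOLD else 'no match'}")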

More information:

05 February 2014

Robots With Insect-Like Brains

Researchers from Freie Universität Berlin, the Bernstein Fokus Neuronal Basis of Learning and the Bernstein Center Berlin have developed a robot that perceives environmental stimuli and learns to react to them. The scientists used the relatively simple nervous system of the honeybee as a model for its working principles. To this end, they installed a camera on a small robotic vehicle and connected it to a computer. The computer program replicated, in a simplified way, the sensorimotor network of the insect brain. The input data came from the camera that, akin to an eye, received and projected visual information. The neural network, in turn, operated the motors of the robot wheels and could thus control its motion direction. The outstanding feature of this artificial mini brain is its ability to learn by simple principles.

In the learning experiment, the scientists placed the network-controlled robot in the center of a small arena. Red and blue objects were installed on the walls. Once the robot's camera focused on an object with the desired color (i.e. red), the scientists triggered a light flash. This signal activated a so-called reward sensor nerve cell in the artificial network. The simultaneous processing of the red color and the reward led to specific changes in those parts of the network which exercised control over the robot wheels. As a consequence, when the robot ‘saw’ another red object, it started to move toward it. Blue items, in contrast, made it move backwards. The scientists are now planning to expand their neural network by adding more learning principles. Thus, the mini brain will become even more powerful, and the robot more autonomous.
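
A toy version of that reward-gated learning rule can be sketched in a few lines of Python: two input neurons (‘sees red’ and ‘sees blue’) project onto one motor output, and a connection is strengthened only when its input coincides with the reward signal; the learning rate, the negative signal for blue and the update rule are assumptions made for this sketch rather than the published network.

import numpy as np

# Toy version of the reward-gated learning described above: two input neurons
# ("sees red", "sees blue") project to one motor output whose sign sets the
# driving direction. A weight changes only when its input is active at the same
# time as a reward signal (the light flash). The punishment for blue, the
# learning rate and the update rule are assumptions for this sketch, not the
# published honeybee-inspired network.

LEARNING_RATE = 0.2
weights = np.zeros(2)                        # [red -> motor, blue -> motor]

RED = np.array([1.0, 0.0])
BLUE = np.array([0.0, 1.0])

def motor_command(stimulus):
    """Positive output drives the robot forward, negative drives it backward."""
    return float(weights @ stimulus)

# Training: a reward (+1) whenever the camera fixates red, a negative signal
# (-1) for blue, repeated over a handful of trials.
for stimulus, reward in [(RED, 1.0), (BLUE, -1.0)] * 10:
    weights += LEARNING_RATE * reward * stimulus    # reward-gated Hebbian update

print("red  ->", "forward" if motor_command(RED) > 0 else "backward")
print("blue ->", "forward" if motor_command(BLUE) > 0 else "backward")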

More information:

04 February 2014

AR at Rocky Mountain Arsenal

Imagine strolling along a wildlife refuge trail and finding a marker with a symbol of a bison. Pull out your smartphone or iPad and hold it up to the picture. Now look at the screen and see a 3D bison roam across the landscape. Through the magic of digital technology, visitors to the Rocky Mountain Arsenal National Wildlife Refuge (RMA) could click an app and enjoy sightings of rare or endangered animals - albeit virtual ones - in a pristine setting. Augmented reality (AR), as it's known, is gaining popularity as a way to enhance natural excursions - dinosaurs popping up in a forest, for example - or to teach engine repair, surgical procedures and other technical lessons.

  
Researchers in Digital Design at CU Denver's College of Arts & Media (CAM) teach a Design Studio 3 class in which students work with nonprofits to improve their outreach through media and design. The U.S. Fish & Wildlife Service (USFWS) approached the class with the goal of raising the profile of its ‘Get Your Goose On’ program, which promotes awareness of the National Wildlife Refuge System - including the Rocky Mountain Arsenal - particularly among youth. AR's interactive quality, in particular, prompted the Design Studio 3 students to choose augmented reality apps over typical promotional materials - pamphlets, fliers and videos - to lure visitors to national wildlife refuges.

More information: