28 February 2009

Cyber Soccer Players Cloned

A team of IT scientists from the Carlos III University in Madrid (UC3M) has managed to programme clones that imitate the actions of humans playing football on a computer, according to a new article. The clones learn the players' behaviour and apply this knowledge in order to avoid their opponents and score goals. The objective of this research is to programme a player, currently a virtual one, by observing the actions of a person playing in the simulated RoboCup league. RoboCup is an international football championship (the sport known as soccer in the U.S.) held to promote the development of artificial intelligence and robotics. The competition's promoters aim to develop a team of fully autonomous robots able to beat the best team of human footballers by 2050. The researcher explains that there are various leagues within RoboCup, including a league of real robots, but that his team participates in the simulation league, which uses a software model called RoboSoccer. The human player plays RoboSoccer as if it were a video game, and the system observes both the stimuli the person receives from the screen and the actions he or she carries out on the keyboard to shoot or pass the ball. The researchers then use machine learning techniques to construct a model of this person's play, and this model is used to create the ‘clone agent’, which imitates the human player. The results of the study show that the cloned player is able to tackle opponents and score in the opposing goal, much as human players do. Both the real and virtual robots in the RoboCup leagues are normally programmed by hand by researchers, but the Spanish scientists are aiming to do this automatically.

Although they have so far managed to get the clones to a point where they can carry out "low level" actions, such as moving forward, turning and shooting, their objective is to ensure they can learn "high level" actions, such as tackling or passing the ball to the most appropriate team member. In addition, they want to give the models human cognitive capacities, such as being able to remember or predict the position of the ball or an opponent. One of the fundamental ideas behind this study was that it is more interesting for a human player to challenge an opponent with the same level of skills and disadvantages than to play against an adversary with robotic behaviour. This type of study falls within a field of computer science called behavioural cloning. The objective of this discipline is to construct a model for a clone agent that can learn from the behaviour of another agent (which may be human) by observing the stimuli this agent receives and the actions it takes in response to them. The first studies on this subject showed that a system of neural networks can learn to drive a vehicle by observing a driver (the ALVINN project), or to control a flight simulator by analysing the behaviour of a pilot. Today, the use of behavioural cloning is also being researched in Internet-based videogames, as well as in competitions such as RoboCup. The last international robotic football championship event was held in Suzhou, China, and the next one will be held this summer in Graz, Austria, alongside RoboCup Rescue, a simultaneous competition based on developing robots designed to help rescue people in natural disasters.
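The behavioural-cloning idea described above can be reduced to a very small sketch: log the human's (stimulus, action) pairs, then have the clone repeat whatever the human did in the most similar recorded situation. The stimulus features, actions and data here are invented for illustration and are not the UC3M system.

```python
# Minimal behavioural-cloning sketch (hypothetical data, not the UC3M system):
# record (stimulus, action) pairs from a human player, then imitate by
# nearest-neighbour lookup over the recorded stimuli.
import math

# Each stimulus is (distance_to_ball, angle_to_ball_degrees); each action
# is the keyboard command the human issued in that situation.
human_log = [
    ((0.5, 0.0), "shoot"),
    ((5.0, 30.0), "turn"),
    ((3.0, 0.0), "dash"),
    ((1.0, 5.0), "shoot"),
]

def clone_action(stimulus):
    """Imitate the human: take the action from the most similar situation."""
    nearest = min(human_log, key=lambda pair: math.dist(stimulus, pair[0]))
    return nearest[1]

print(clone_action((0.8, 2.0)))  # close to the ball and facing it
```

A real system would replace the nearest-neighbour lookup with a learned model generalising beyond the logged situations, which is what distinguishes the "clone agent" from simple replay.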

More information:


24 February 2009

Elephant Mobile Platform

The Fraunhofer Institute for Communication Systems ESK will be premiering its Elephant platform at the Mobile World Congress in Barcelona. The Elephant research project makes it extremely easy to develop programs for mobile devices. Existing information is transferred to the mobile platform via drag and drop and by using a variety of templates. This allows the developer of the mobile application to focus on the content, because Elephant processes text, audio and video and can define different information for a variety of devices and connections. The researchers are also relying on Web 2.0 technology to initially enable interactive collaboration via tagging and evaluation of the templates. Mobile phones have become so popular that nearly everybody owns one, which makes them an ideal platform for disseminating information. Still, programming mobile applications requires proven expertise, including writing separate programs for each type of device. The result is that the development of mobile applications is time-consuming and costly. Fraunhofer ESK researchers aim to help solve this problem with the Elephant platform. Elephant is based on web technologies that make it extremely easy to compile and prepare existing information so that it can be used on mobile devices. Developers only need to understand their content and know how they want it disseminated.

The elements are then allocated via drag and drop. The application is given a structure through multiple shortcuts and process templates. With a single mouse click, a packet is generated that prepares the content for use on different mobile devices. The mobile device requires the one-time installation of an application from Fraunhofer ESK, which can interpret all Elephant-based applications. Elephant developers are in a good position to respond to the user's situation when creating mobile applications. When information is distributed to a mobile phone, it is extremely difficult to consider the situation that the user currently finds himself in. If he is waiting at a bus stop, he can read text. If he is walking or jogging through a park, he needs the information in audio format. The developer can offer alternative templates and content for these different scenarios, and the Elephant application then selects the version appropriate to the situation. Fraunhofer ESK researchers are also drawing on so-called presence information, which can be set by the user or automatically recognized by the system. The result is that the provided information matches the current situation. For the content - Elephant processes text and XML formats as well as diverse image, audio and video files - the researchers created ready-to-use templates. The author defines what the application will eventually look like by selecting a template. The templates can be enhanced by the developer and made available to other users in a Web 2.0 environment. As a next step, researchers will integrate reputation mechanisms that can be used to evaluate the templates.
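The situation-aware selection described above can be pictured as a simple lookup from presence information to an alternative template. All names, keys and content below are assumptions for illustration, not Elephant's actual data model.

```python
# Hypothetical sketch of Elephant-style situation-aware content selection:
# the developer supplies alternative templates, and the client picks one
# based on presence information (structure and names are assumptions).

templates = {
    "idle": {"format": "text", "body": "Bus 42 arrives in 5 min."},
    "moving": {"format": "audio", "body": "bus42_eta.mp3"},
}

def select_template(presence):
    """Pick the template matching the user's situation, falling back to text."""
    return templates.get(presence, templates["idle"])

print(select_template("moving")["format"])  # a jogging user gets audio
```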

More information:


23 February 2009

Educational Cell Phones in Classroom

Educational software for cell phones, a suite of tools developed at the University of Michigan, is being used to turn smart phones into personal computers for students in two Texas classrooms. The Mobile Learning Environment includes programs that let students map concepts, animate their drawings, surf relevant parts of the Internet and integrate their lessons and assignments. It also includes mini versions of Microsoft Word and Excel. It is currently licensed to 40,000 users around the world for larger palm-sized computers. Cell phones change the game, though. Cell phones can be powerful computers. They can do just about everything laptops can do for a fraction of the price. And many students are bringing them to school anyway.

About half of the students in the participating classes had phones before the project started. The project equips 53 students in two fifth-grade classes at Trinity Meadows Intermediate School with a smart phone of their own to use around-the-clock for the rest of the school year. Students can't text message or make calls with them. But they can use the cameras, mp3 players, calendars, calculators and educational software. The school district is examining several aspects of student learning with these devices. They'll determine whether listening to recordings of texts enhances at-risk students' reading comprehension. They are studying students' technological savvy before and after the project. The teachers involved will also teach responsible and appropriate use of these phones.

More information:


21 February 2009

Gadget Reads Minds From Grip

The functions of previously separate gadgets like cameras, phones, and music players have come together into single devices in recent years. But juggling all of those functions in one product with multiple personalities is not simple, and confusing interfaces plague many big-selling gadgets. A new prototype that is able to predict what function its user wants from the way it is manipulated shows a more intuitive way to tackle the problem. The ideal device would be a generic block, like a bar of soap, that knew the user's intent and could change its interface accordingly. A basic version of this concept is already built into a handful of portable gadgets. Some smartphones automatically dim the screen when they sense they have been raised to a person's ear during a call. Researchers have created a ‘bar of soap’ device, with an LCD screen on its front and rear. It contains a three-axis accelerometer to measure its motion in 3D, and 72 sensors across its surface to track the position of the user's fingers.

The researchers tested their prototype on 13 users who were asked to pick it up several times, holding it each time in turn as if it were a remote control, PDA, camera, games controller, or mobile phone. By analysing the output from the sensors, the team spotted patterns in the way the different users held the gadget, and their grip gave clues about how they expected the device to perform. Those results were used to program the soap bar to guess what was expected of it and respond appropriately by presenting an interface tailored for that function: when held as a camera, the LCD screens display a camera mode. For the best results the device has to be trained to a specific person, as there are variations across users. If trained on one person, the device correctly ‘guesses’ which mode to enter 95% of the time. That figure drops to around 70% for the general population.
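The guessing step described above amounts to classifying a vector of sensor readings against per-user training examples. A minimal sketch, with made-up readings and a nearest-centroid decision standing in for whatever classifier the researchers actually used:

```python
# Hypothetical grip classifier in the spirit of the 'bar of soap':
# accelerometer and touch-sensor readings are flattened into a vector,
# and the device enters the mode whose average training grip is closest.
# All values here are invented for illustration.
import math

training = {
    "camera": [[1.0, 0.2, 0.1], [0.9, 0.3, 0.0]],
    "phone":  [[0.1, 1.0, 0.8], [0.2, 0.9, 0.9]],
}

# Average each mode's training grips into a single centroid vector.
centroids = {
    mode: [sum(col) / len(samples) for col in zip(*samples)]
    for mode, samples in training.items()
}

def guess_mode(reading):
    """Pick the mode whose centroid is nearest to the current reading."""
    return min(centroids, key=lambda m: math.dist(reading, centroids[m]))

print(guess_mode([0.95, 0.25, 0.05]))  # a camera-like grip
```

Training per user, as the article notes, simply means building the centroids from that person's own examples, which is why accuracy drops when one person's model is applied to the general population.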

More information:


15 February 2009

Sensors Monitor Elderly People

Increasingly, many older people who live alone are not truly alone. They are being watched by a flurry of new technologies designed to enable them to live independently and avoid expensive trips to the emergency room or nursing homes. An elderly person discovered the power of a system called eNeighbor when she fell to the floor of her Philadelphia apartment late one night without her emergency alert pendant and could not phone for help. A wireless sensor under the elderly person's bed detected that she had gotten up. Motion detectors in her bedroom and bathroom registered that she had not left the area in her usual pattern and relayed that information to a central monitoring system, prompting a call to her telephone to ask if she was all right. When she did not answer, that triggered more calls — to a neighbor, to the building manager and finally to 911, which dispatched firefighters to break through her door. She had been on the floor less than an hour when they arrived. Technologies like eNeighbor come with great promise of improved care at lower cost and the backing of large companies like Intel and General Electric.
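The escalation the article describes is essentially a prioritized contact chain: try each contact in order, stopping at the first one who responds. The sketch below is a hypothetical reconstruction for illustration, not eNeighbor's actual implementation.

```python
# Sketch of an escalation chain like the one described in the article
# (a hypothetical reconstruction, not eNeighbor's code): call each
# contact in priority order until someone answers.

ESCALATION_CHAIN = ["resident_phone", "neighbor", "building_manager", "911"]

def escalate(answered):
    """Return the list of contacts called; `answered` maps contact -> bool."""
    called = []
    for contact in ESCALATION_CHAIN:
        called.append(contact)
        if answered.get(contact, False):
            break  # someone responded; stop escalating
    return called

# In the article's incident, nobody answered until 911 dispatched help:
print(escalate({"911": True}))
```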

But the devices, which can be expensive, remain largely unproven and are not usually covered by the government or private insurance plans. Doctors are not trained to treat patients using remote data and have no mechanism to be paid for doing so. And like all technologies, the devices — including motion sensors, pill compliance detectors and wireless devices that transmit data on blood pressure, weight, oxygen and glucose levels — may have unintended consequences, substituting electronic measurements for face-to-face contact with doctors, nurses and family members. Stories like this one show the potential of relatively simple devices to provide comfort and independence to an aging population that is quickly outgrowing the resources of doctors, nurses, hospitals and health care dollars available to it. The cost for the above basic system, supplied by a health care provider called New Courtland as part of a publicly financed program, is about $100 a month, far less than a nursing home, where the costs to taxpayers can exceed $200 a day. In the two years the elderly person has had the system, she has fallen three times and been stuck once in the bathtub, each time unable to call for help without it.

More information:


12 February 2009

Robots Start To Evolve

Living creatures took millions of years to evolve from amphibians to four-legged mammals - with larger, more complex brains to match. Now an evolving robot has performed a similar trick in hours, thanks to a software ‘brain’ that automatically grows in size and complexity as its physical body develops. Existing robots cannot usually cope with physical changes - the addition of a sensor or new type of limb, say - without a complete redesign of their control software, which can be time-consuming and expensive. As animals evolved, additions of small groups of neurons on top of existing neural structures are thought to have allowed their brain complexity to increase steadily, keeping pace with the development of new limbs and senses. In the same way, the robot's brain assigns new clusters of "neurons" to adapt to new additions to its body. The robot is controlled by a neural network - software that mimics the brain's learning process. This comprises a set of interconnected processing nodes which can be trained to produce desired actions.

For example, if the goal is to remain balanced and the robot receives inputs from sensors that it is tipping over, it will move its limbs in an attempt to right itself. Such actions are shaped by adjusting the importance, or weighting, of the input signals to each node. Certain combinations of these sensor inputs cause the node to fire a signal - to drive a motor, for example. If this action works, the combination is kept. If it fails, and the robot falls over, the robot will make adjustments and try something different next time. Finding the best combinations is not easy - so roboticists often use an evolutionary algorithm to ‘evolve’ the optimal control system. The application randomly creates large numbers of control ‘genomes’ for the robot. These behaviour patterns are tested in training sessions, and the most successful genomes are ‘bred’ together to create still better versions - until the best control system is arrived at. The robot can also adapt to newly acquired vision, and learn how to avoid or seek light when given a camera.
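The generate-test-breed loop described above can be illustrated with a toy evolutionary algorithm. The genome here is just a list of controller weights scored against a fixed target; the real system would instead score each genome by how well the robot balances in simulation. Everything below is an illustrative sketch, not the researchers' code.

```python
# Toy evolutionary algorithm in the spirit described above: evolve a
# 'genome' of controller weights by selection and mutation. The target
# weights and fitness function are invented for illustration.
import random

random.seed(0)
TARGET = [0.5, -0.2, 0.8]  # pretend these weights keep the robot balanced

def fitness(genome):
    # Higher is better: negative squared error against the target behaviour.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome):
    # Small random tweaks to each weight.
    return [g + random.gauss(0, 0.1) for g in genome]

# Start from random control genomes, as the article describes.
population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(20)]

for generation in range(50):
    # Keep the fittest half, refill the population with mutated survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
```

Because the fittest genomes survive unchanged between generations, the best score can only improve, which is why such loops reliably converge on a workable controller even from random starting points.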

More information:


10 February 2009

Violent Games Help Fire Safety

The software code underlying violent computer games can be used to train people in fire safety, new academic research has found. Commercial games such as Doom 3 and Half Life 2 can be used to build virtual worlds to train people in fire evacuation procedures by applying the games' underlying software code, according to the Durham University researchers. A 3D model of a real world building and three fire evacuation scenarios were programmed complete with smoke and fire. The Durham experts say this is significantly quicker and more cost-effective than using traditional virtual reality toolkits or writing the code from scratch. The use of virtual reality toolkits often requires more advanced programming skills and integration of different components to build a 3D model, making it a lengthier and more expensive process, say the scientists. The study, published in the Fire Safety Journal, found that games from the First Person Shooter genre, in which the player sees the environment from a first-person perspective and normally uses weapons to fight a number of enemies, have the greatest capability and resources for modification. The scientists say virtual environments can be used to help identify problems with the layout of a building, help familiarise people with evacuation routines and teach people good practice in fire safety.

Previous research has shown that there are often two main reasons why evacuations in a real scenario fail. One is poor layout of the building and the other is people not following evacuation procedures due to panic or unfamiliarity with the protocol. In the study, which was partly funded by the Nuffield Foundation, the researchers looked at the capabilities of a number of commercial computer games such as Far Cry, Quake III Arena, Doom 3, F.E.A.R., Half-Life 2, and Counter-Strike: Source. They looked at the underlying game technology, including the capabilities for 3D rendering, sound, user input and world dynamics. The research team found that these games have some significant advantages. They are robust and extensively tested, both for usability and performance, work on off-the-shelf systems and can be easily disseminated, for example via online communities. The scientists say the code within these games also enables easy programming of features such as wind, smoke, fire and water. As part of the study, participants tested the virtual environment. They were told of a fire in the building and asked to find their way out. Most people found the simulated environment to be realistic, although those with gaming experience performed better than those without. In other studies, gaming technology has been used to build virtual environments to simulate lab accidents, teach people about cooking safety, and to help 'treat' people suffering from arachnophobia and claustrophobia.

More information:


08 February 2009

WARP - Wireless Research Platform

Nothing kills innovation like having to reinvent the wheel. Imagine how dull your diet would be if you had to build a new stove and hammer out a few cooking pots every time you wanted to test a new recipe. Until just a couple of years ago, electronics researchers testing new high-speed wireless technologies faced just this sort of problem; they had to build every test system completely from scratch. So, the Center for Multimedia Communication (CMC) of Rice University set out to change that in 2006 by creating a turnkey, open-source platform -- the stove, pots and kitchen utensils, if you will -- that would let wireless researchers expand their tech menus. In just two short years, the platform -- dubbed WARP -- has whetted the appetites of heavyweights like Nokia, MIT, Toyota, NASA and Ericsson, and it's already being used to test everything from low-cost wireless Internet in rural India to futuristic "unwired" spacecraft. WARP stands for "wireless open-access research platform," and physically, WARP looks like something from the guts of a desktop computer. It's a collection of boards containing a powerful processor and all the transmitters and other gadgets needed for high-end wireless communications. What makes WARP boards so effective is their flexibility. When researchers need to test several kinds of radio transmitters, wireless routers and network access points, all they need to do is write a few programs that allow the WARP board to become each of those devices.

The concept is already starting to pay off. Motorola is using the system to test an entirely new low-cost architecture for wireless Internet in rural India. It's the sort of low-profit-margin project that probably wouldn't have gotten beyond the drawing board if not for WARP. Another early adopter, NASA, is using WARP to look for ways to save weight, cost and complexity in the wiring systems for future spacecraft. The cognitive wireless concept stems from the fact that up to half of the nation's finite wireless spectrum is unused at any given time. Researchers have talked for years about designing smart, "cognitive" networks that can shift frequencies on the fly, opening up vast, unused amounts of spectrum for consumer use. WARP provides an entry point for testing new ideas about cognitive wireless: researchers are answering fundamental questions, such as how much spectrum can really be reused without hurting current, sporadically used services, and, more importantly, building practical proof-of-concept prototypes. Several large wireless companies are using WARP to test schemes for wireless phone networks that can transfer data up to 100 times faster than current 3G networks. Toyota is using WARP to test car-to-car communications -- systems that automotive engineers hope to use in the future for collision avoidance, traffic management and more. In another case, some users were partially disassembling the boards to add new functions. It was still cheaper than starting from scratch, so it made sense, but it wasn't something CMC had expected.

More information: