28 May 2013

WorldKit - Kinect Plus Projector

WorldKit combines cameras, projectors and computers to allow everyday surfaces like walls, tables, doors and worktops to host interactive controllers for gadgets including TVs, digital video recorders, hi-fis and room lighting. The system uses a Microsoft Kinect depth camera to pinpoint which surface you are sweeping your hand across to turn into a controller. As you move your hand back and forth, you say out loud what you want the surface to become.


WorldKit's software uses voice recognition to work out what type of remote you want, and a digital projector on the ceiling beams an image of that controller onto the chosen surface. The Kinect camera then works out which buttons you are pressing. The system will become practical once small projectors are cheap and power-efficient enough to be dotted around our homes. It could also beam interactive cookery instructions onto a kitchen worktop.
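
The workflow amounts to a simple sense-interpret-project-watch loop: find the surface the hand painted, recognise the spoken widget type, project the widget, then watch for presses. A minimal Python sketch of that loop follows; every class and function name is a hypothetical stand-in for the real sensor and projector drivers, not WorldKit's published code.

from dataclasses import dataclass

@dataclass
class Region:
    x: int
    y: int
    w: int
    h: int

def detect_painted_region(depth_frames):
    """Find the surface patch the user 'painted' with a hand swipe (stub)."""
    return Region(x=120, y=80, w=200, h=100)

def recognise_widget_type(audio):
    """Map the spoken request to a widget type, e.g. 'volume_slider' (stub)."""
    return "volume_slider"

def project_widget(widget_type, region):
    """Ask the ceiling projector to render the widget onto the region (stub)."""
    print(f"Projecting {widget_type} at {region}")

def read_touches(depth_frame, region):
    """Use the depth camera to spot fingertips pressing inside the region (stub)."""
    return [(150, 110)]  # list of touch points

def dispatch(widget_type, touches):
    """Translate touches into commands for the TV, lights, etc. (stub)."""
    for x, y in touches:
        print(f"{widget_type}: touch at ({x}, {y})")

def interaction_loop(depth_frames, audio):
    region = detect_painted_region(depth_frames)   # where the hand swished
    widget = recognise_widget_type(audio)          # what it should become
    project_widget(widget, region)                 # draw it on the surface
    for frame in depth_frames:                     # then watch for presses
        dispatch(widget, read_touches(frame, region))

interaction_loop(depth_frames=[None], audio=None)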

More information:

24 May 2013

Can Games Change the World?

Serious games, which have addressed issues ranging from the Middle East conflict to sexual coercion among teenagers, have gained the attention of governments around the world. But can they really directly affect the issues they cover? Computer games are regularly criticised as a waste of time and for their graphic depictions of violence. Whatever the truth of those criticisms, a form of game offering a different side has grown in popularity: the serious game.


Serious games focus on real-world situations and events in a way that, it is hoped, educates players and provokes debate in a community. Three billion hours a week are spent playing games, mostly as a pastime with little wider effect, and developers hope some of that time can be harnessed for the greater good. One game, ‘World Without Oil’, explored a scenario in which oil was running out and offered players the chance to come up with ways to deal with the impending crisis.

More information:

21 May 2013

Robotic Insects

Last summer, in a Harvard robotics laboratory, an insect took flight. Half the size of a paper clip, weighing less than a tenth of a gram, it leapt a few inches, hovered for a moment on fragile, flapping wings, and then sped along a preset route through the air. The demonstration of the first controlled flight of an insect-sized robot is the culmination of more than a decade’s work, led by researchers at the Harvard School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering at Harvard. Inspired by the biology of a fly, with submillimeter-scale anatomy and two wafer-thin wings that flap almost invisibly, 120 times per second, the tiny device not only represents the absolute cutting edge of micromanufacturing and control systems, but is an aspiration that has impelled innovation in these fields by dozens of researchers across Harvard for years.


The tiny robot flaps its wings with piezoelectric actuators — strips of ceramic that expand and contract when an electric field is applied. Thin hinges of plastic embedded within the carbon fiber body frame serve as joints, and a delicately balanced control system commands the rotational motions in the flapping-wing robot, with each wing controlled independently in real time. At tiny scales, small changes in airflow can have an outsized effect on flight dynamics, and the control system has to react that much faster to remain stable. Applications of the project could include distributed environmental monitoring, search-and-rescue operations, or assistance with crop pollination, but the materials, fabrication techniques, and components that emerge along the way might prove to be even more significant (for example, the pop-up manufacturing process could enable a new class of complex medical devices).
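
To illustrate why independent, fast per-wing control matters, here is a toy proportional-derivative loop in Python that trims each wing's flapping amplitude to counter a measured roll error. The gains, amplitudes and sign conventions are illustrative assumptions, not the Harvard team's actual controller.

# Toy per-wing PD stabiliser: an illustration of independent,
# real-time wing trimming, not the researchers' controller.
KP, KD = 0.8, 0.05          # illustrative gains
NOMINAL_AMPLITUDE = 1.0     # normalised flapping amplitude
DT = 1.0 / 120              # one wing beat at 120 Hz

def wing_amplitudes(roll_error, prev_roll_error):
    """Return (left, right) amplitudes that push the roll error back toward zero."""
    correction = KP * roll_error + KD * (roll_error - prev_roll_error) / DT
    # A gust rolling the body one way is countered by beating one wing
    # harder and the other softer; the split is applied symmetrically.
    left = NOMINAL_AMPLITUDE - correction
    right = NOMINAL_AMPLITUDE + correction
    return left, right

prev = 0.0
for roll in (0.0, 0.02, 0.05, 0.03):   # simulated roll-angle errors (rad)
    left, right = wing_amplitudes(roll, prev)
    print(f"roll={roll:+.2f}  left={left:.3f}  right={right:.3f}")
    prev = roll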

More information:

19 May 2013

Underwater University Lectures

The ground-breaking underwater marine biology lectures were the first of their kind, adding to the teaching and learning experience during dives on tropical coral reef systems. The lectures were held during the annual field trip to the Wakatobi Marine National Park in Indonesia, organised by the University of Essex's School of Biological Sciences for its students. The serious challenges threatening the future of the world's coral reefs are the backbone of major research being carried out by the University's internationally recognised Coral Reef Research Unit (CRRU), which looks at the impact of climate change on coral reefs and how to work with nature to find a solution. For the underwater lectures, the lecturer used specialised audio equipment so he could talk to students underwater, explaining exactly what they were seeing as they were seeing it.

 
Using a University of Essex special teaching grant, researchers were able to buy an audio system which, to date, has never been used for formal lecturing and has only been used by TV presenters and some professional divers. The lecturer wore a full face mask which included a microphone, and the students wore headsets so they could hear him talk. A hydrophone (an underwater microphone) was positioned in the water and linked to a control box and recorder on a boat. Over 1,000 videos, adding up to 15 hours of footage, were taken during the underwater lectures; these will prove to be a valuable virtual field course resource for students who are not able to travel to Indonesia but can still get an insight into the experience, while also providing a great ‘listen again’ opportunity for participating students.

More information:

18 May 2013

Controlling Robots with Thoughts

Facial grimaces generate strong electrical activity (EEG signals) across our heads, and the same happens when the user, Angel, concentrates on a symbol, such as a flashing light, on a computer monitor. In both cases the electrodes read the activity in the brain. The signals are then interpreted by a processor, which in turn sends a message to the robot to make it move in a predefined way.


The user can make use of movements of the eyes, eyebrows and other parts of the face. Using the eyebrows, they can select which of the robot's joints they want to move; by focusing on one of a selection of lights on the screen, they then drive it. The robot's movements depend on which light the user selects and the type of activity generated in the brain.
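
Put another way, the interface is a two-step mapping: eyebrow movements cycle through joints, and the attended light chooses the predefined movement. The Python sketch below shows that mapping; the event names, joint list and movement step are hypothetical placeholders for whatever the real EEG classifier emits.

# Hypothetical mapping from classified EEG events to robot commands.
JOINTS = ["shoulder", "elbow", "wrist"]

class ArmController:
    def __init__(self):
        self.joint_index = 0  # which joint the eyebrow events have selected

    def on_eyebrow_event(self):
        """Each eyebrow movement steps to the next joint in the list."""
        self.joint_index = (self.joint_index + 1) % len(JOINTS)
        print("selected joint:", JOINTS[self.joint_index])

    def on_light_selected(self, light):
        """The light the user attends to chooses the predefined movement."""
        direction = {"left_light": -1, "right_light": +1}.get(light)
        if direction is not None:
            print(f"move {JOINTS[self.joint_index]} by {direction * 5} degrees")

# Example of the event stream a classifier might emit:
ctrl = ArmController()
for event in ["eyebrow", "right_light", "eyebrow", "left_light"]:
    if event == "eyebrow":
        ctrl.on_eyebrow_event()
    else:
        ctrl.on_light_selected(event)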

More information:

16 May 2013

Future of Mobile Phones

The mobile telephones of the future will be able to see, shrink while becoming larger, and slip into their users’ skins. That terse statement summarizes the recently released results of a thorough look at the next ten to fifteen years of mobile telephony by the University of Darmstadt’s ‘Future Internet’ research cluster. The displays of future mobile telephones will merge virtual and physical reality. They will enrich images that their cameras capture with other information. The mobile telephones of the future will need large displays, but be capable of being shrunk to the size of a pencil. Although displays that can be rolled up and folded will take care of that, users will have their hands full simultaneously manipulating the display and the telephone’s controls. 


Future mobile-telephone networks will have to be capable of handling much higher transmission rates than their current counterparts, and mobile telephones and their networks will have to be more flexible in dealing with variations in signal levels. Mobile telephones will have to return responses from the ‘cloud’ on a millisecond time scale, which means a portion of the ‘cloud’ will have to shift into users’ immediate vicinities. The Darmstadt roadmap also envisions how future mobile telephones will become the heart of new security concepts. Since mobile telephones will handle a growing number of critical services, such as opening doors or paying tolls, legal and financial risks will be involved.

More information:


10 May 2013

Human Empathy for Robots

In two new studies, researchers sought to measure how people respond to robots on an emotional and neurological level. In the first study, volunteers were shown videos of a small dinosaur robot being treated affectionately or violently. In the affectionate video, humans hugged and tickled the robot; in the violent video, they hit or dropped it. Scientists assessed people's levels of physiological arousal after watching the videos by recording their skin conductance, a measure of how well the skin conducts electricity. When a person experiences strong emotions, they sweat more, increasing skin conductance. The volunteers reported feeling more negative emotions while watching the robot being abused, and their skin conductance levels rose, showing they were more distressed.


In the second study, researchers used functional magnetic resonance imaging (fMRI) to visualize brain activity in the participants as they watched videos of humans and robots interacting. Again, participants were shown videos of a human, a robot, and, this time, an inanimate object being treated with affection or abuse. In one video, a man appears to beat up a woman, strangle her with a string and attempt to suffocate her with a plastic bag. In another, a person does the same things to the robot dinosaur. Affectionate treatment of the robot and the human led to similar patterns of neural activity in regions of the brain's limbic system, where emotions are processed, fMRI scans showed. But the watchers' brains lit up more while seeing abusive treatment of the human than abuse of the robot, suggesting greater empathy for the human.

More information:

09 May 2013

BCIs Closer to Mainstream

A few weeks ago, engineers sniffing around the programming code for Google Glass found hidden examples of ways that people might interact with the wearable computers without having to say a word. Among them, a user could nod to turn the glasses on or off. A single wink might tell the glasses to take a picture. But don’t expect these gestures to be necessary for long. Soon, we might interact with our smartphones and computers simply by using our minds. In a couple of years, we could be turning on the lights at home just by thinking about it, or sending an e-mail from our smartphone without even pulling the device from our pocket. Farther into the future, your robot assistant will appear by your side with a glass of lemonade simply because it knows you are thirsty. Researchers in Samsung’s Emerging Technology Lab are testing tablets that can be controlled by your brain, using a cap that resembles a ski hat studded with monitoring electrodes, as the MIT Technology Review, the science and technology journal of the Massachusetts Institute of Technology, reported this month. The technology, often called a brain-computer interface, was conceived to enable people with paralysis and other disabilities to interact with computers or control robotic arms, all by simply thinking about such actions. Before long, these technologies could well be in consumer electronics, too. Some crude brain-reading products already exist, letting people play easy games or move a mouse around a screen.


NeuroSky, a company based in San Jose, Calif., recently released a Bluetooth-enabled headset that can monitor slight changes in brain waves and allow people to play concentration-based games on computers and smartphones. These include a zombie-chasing game, archery and a game where you dodge bullets — all these apps use your mind as the joystick. Another company, Emotiv, sells a headset that looks like a large alien hand and can read brain waves associated with thoughts, feelings and expressions. The device can be used to play Tetris-like games or search through Flickr photos by thinking about an emotion the person is feeling — like happy, or excited — rather than searching by keywords. Muse, a lightweight, wireless headband, can engage with an app that ‘exercises the brain’ by forcing people to concentrate on aspects of a screen, almost like taking your mind to the gym. Car manufacturers are exploring technologies packed into the back of the seat that detect when people fall asleep while driving and rattle the steering wheel to awaken them. But the products commercially available today will soon look archaic. The current brain technologies are like trying to listen to a conversation in a football stadium from a blimp. To really be able to understand what is going on with the brain today you need to surgically implant an array of sensors into the brain. In other words, to gain access to the brain, for now you still need a chip in your head.
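
For a sense of how using "your mind as the joystick" works in these concentration-based games, the Python sketch below maps a stream of attention readings (on a 0-100 scale, as such headsets commonly report) to a game action once focus is sustained. The reader function is a stub, not NeuroSky's or Emotiv's actual API.

import random

ATTENTION_THRESHOLD = 70   # illustrative; real games tune this per player
HOLD_SAMPLES = 3           # require sustained focus, not a single spike

def read_attention():
    """Stub for a headset driver returning an attention score of 0-100."""
    return random.randint(40, 100)

def concentration_game(rounds=20):
    streak = 0
    for _ in range(rounds):
        level = read_attention()
        streak = streak + 1 if level >= ATTENTION_THRESHOLD else 0
        if streak >= HOLD_SAMPLES:
            print(f"attention {level}: fire the arrow!")  # the 'joystick' action
            streak = 0
        else:
            print(f"attention {level}: keep focusing")

concentration_game()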

More information: