30 September 2011

Virtual Monkeys Write Shakespeare

A few million virtual monkeys are close to re-creating the complete works of Shakespeare by randomly mashing keys on virtual typewriters. A running total of how well they are doing shows that the re-creation is 99.990% complete. The first single work to be completed was the poem A Lover's Complaint. The project is also a practical test of the thought experiment that asks whether an infinite number of monkeys pounding on an infinite number of typewriters could produce Shakespeare's works by accident. The virtual monkeys are small computer programs uploaded to Amazon servers.


These coded apes regularly pump out random sequences of text. Each sequence is nine characters long, and each is checked to see whether that string of characters appears anywhere in the works of Shakespeare. If not, it is discarded; if it does match, then progress has been made towards re-creating the works of the Bard. To get a sense of the scale of the project, there are about 5.5 trillion possible nine-character strings drawn from the English alphabet. The monkeys must churn through enough random nine-character strings to cover that space and so find every string that actually appears in Shakespeare's works.
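
The matching rule is easy to sketch in code. The snippet below is a minimal illustration of the idea, not the project's actual software; the short sample standing in for the complete works, the letters-only normalization and the trial count are all assumptions.

```python
import random
import string

# Index every nine-character substring of the target text, then keep each
# random string a "monkey" types only if it occurs in that index.
# SAMPLE is a stand-in for the complete works of Shakespeare.
SAMPLE = "shall i compare thee to a summers day thou art more lovely and more temperate"
TEXT = "".join(ch for ch in SAMPLE if ch in string.ascii_lowercase)

TARGETS = {TEXT[i:i + 9] for i in range(len(TEXT) - 8)}
found = set()

def monkey_types(n_strings: int) -> None:
    """Type n_strings random nine-character strings and keep the matches."""
    for _ in range(n_strings):
        candidate = "".join(random.choices(string.ascii_lowercase, k=9))
        if candidate in TARGETS:
            found.add(candidate)        # progress towards the Bard

monkey_types(1_000_000)
print(f"{len(found)} of {len(TARGETS)} nine-character passages matched so far")
```

With roughly 5.4 trillion possible strings and only a handful of targets in the sample, a million tries will usually match nothing, which is exactly the scale problem the article describes.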

More information:

http://www.bbc.co.uk/news/technology-15060310

26 September 2011

The Cyborg in Us All

Within the next decade there is likely to emerge a new kind of brain implant for healthy people who want to interact with and control machines by thought. One technology under development is the electrocorticographic (ECoG) implant, which is less invasive than other devices: it rides on top of the blood-brain barrier, sensing the activity of populations of neurons and transmitting their communications to the outside world as software commands. Research into the potential of ECoG implants is being funded by the U.S. Defense Department as part of a $6.3 million Army project to create devices for telepathic communication.


Carnegie Mellon University researchers are most eager to see a ‘two-way direct-brain interface’ that would revolutionize human experience. They used the implant to see whether patients could control the action in a video game called Galaga using only their thoughts. A patient flicks the spaceship back and forth by imagining moving the tongue; this creates a pulse in the brain that travels through the wires into a computer. Thus, a thought becomes a software command. An even less invasive brain-machine interface than the ECoG implant is being researched at Dartmouth College, where scientists are creating an iPhone linked to an electroencephalography headset.
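
The final step the article describes, a pulse in the brain becoming a software command, can be pictured as a simple threshold decoder. The sketch below is a toy illustration with invented numbers (sampling rate, window length, threshold), not the Carnegie Mellon system:

```python
import numpy as np

# Toy decoder: signal power over one electrode rises when the patient imagines
# moving the tongue, and crossing a threshold relative to baseline is mapped
# to a game input. All constants here are assumptions for illustration.
FS = 1000          # samples per second (assumed)
THRESHOLD = 2.5    # power ratio that counts as "imagined movement" (assumed)

def decode_command(window: np.ndarray, baseline_power: float) -> str | None:
    """Map one short window of samples to a game command, or None."""
    power = np.mean(window ** 2)
    if power / baseline_power > THRESHOLD:
        return "FLICK_SHIP"      # the thought becomes a software command
    return None

# Synthetic data stands in for a real recording.
baseline = np.mean(np.random.randn(FS) ** 2)
window = 2.0 * np.random.randn(FS // 5)          # pretend "imagined movement"
print(decode_command(window, baseline))
```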

More information:

http://www.nytimes.com/2011/09/18/magazine/the-cyborg-in-us-all.html?_r=1

24 September 2011

RePro3D

Lonely gamers who have felt the pain of being separated by a screen from their favorite personalities now have a way to reach out and touch their game characters, and that new way is RePro3D. A group of researchers from Keio University in Japan has come up with a 3D screen that lets the user see and touch characters on the screen without glasses. The technology combines a 3D parallax display with an infrared camera that recognizes the movements of the user's hand, so the character on the screen reacts to those movements instantly.


The researchers use retro-reflective projection technology, based on materials with special retro-reflective characteristics: this kind of material reflects incoming light straight back at the same angle it arrived. Using such a material enables the display to show images at a different place from the light source. A tactile device worn on the user's fingers is designed to enhance the sensation of touching objects on the 3D screen. In the future, the team plans to build a touchable 3D display system that expands the size of the visible image, so that multiple people can be in the same space and share the same image.
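
The optical trick is easy to state as vector geometry. The following sketch is only an illustration, not taken from the RePro3D work: it contrasts an ordinary mirror, which reflects a ray about the surface normal, with a retro-reflector, which sends the ray straight back towards its source.

```python
import numpy as np

def mirror_reflect(d: np.ndarray, n: np.ndarray) -> np.ndarray:
    """Specular reflection of direction d about unit normal n."""
    return d - 2 * np.dot(d, n) * n

def retro_reflect(d: np.ndarray) -> np.ndarray:
    """Retro-reflection: the ray returns along its incoming direction."""
    return -d

incoming = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)   # 45-degree incidence
normal = np.array([0.0, 1.0, 0.0])
print("mirror:", mirror_reflect(incoming, normal))    # bounces onward past the viewer
print("retro :", retro_reflect(incoming))             # heads back towards the source
```

It is this "back towards the source" behaviour that lets the projected image appear somewhere other than where the projector sits.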

More information:

http://www.physorg.com/news/2011-09-lonely-gamers-repro3d-characters-video.html

23 September 2011

AR Gesture Recognition

To make its business software more effective, HP recently paid $10 billion for Autonomy, a U.K. software company that specializes in machine learning. But it turns out that Autonomy has developed image-processing techniques for gesture-recognizing augmented reality (AR). AR involves layering computer-generated imagery on top of a view of the real world as seen through the camera of a smart phone or tablet computer. So someone looking at a city scene through a device could see tourist information on top of the view. Autonomy's new AR technology, called Aurasma, recognizes a user's hand gestures. This means a person using the app can reach out in front of the device to interact with the virtual content. Previously, interacting with AR content involved tapping the screen. One demonstration released by Autonomy creates a virtual air hockey game on top of an empty tabletop—users play by waving their hands.
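
A gesture-driven AR app of this kind needs to locate the user's hand in each camera frame. The sketch below shows one generic way to do that with OpenCV (skin-colour segmentation plus the largest contour); it is a stand-in illustration, not Aurasma's method, and the colour thresholds and blob-size cutoff are assumptions.

```python
import cv2
import numpy as np

LOWER_SKIN = np.array([0, 48, 80], dtype=np.uint8)     # HSV lower bound (assumed)
UPPER_SKIN = np.array([20, 255, 255], dtype=np.uint8)  # HSV upper bound (assumed)

def detect_hand(frame: np.ndarray):
    """Return the centroid of the largest skin-coloured region, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_SKIN, UPPER_SKIN)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    if cv2.contourArea(hand) < 1000:                    # ignore small noise blobs
        return None
    m = cv2.moments(hand)
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print("hand centroid:", detect_hand(frame))         # feed this to the virtual puck
cap.release()
```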


Autonomy's core technology lets businesses index and search data that conventional, text-based search engines struggle with. Examples are audio recordings of sales calls, or video from surveillance cameras. Aurasma's closest competitor is Layar, a Netherlands company that offers an AR platform that others can add content to. However, Layar has so far largely relied on GPS location to position content, and only recently made it possible to position virtual objects more precisely, using image recognition. And Layar does not recognize users' gestures. Although mobile phones and tablets are the best interfaces available for AR today, the experience is still somewhat clunky, since a person must hold up a device with one hand at all times. Sci-fi writers and technologists have long forecast that the technology would eventually be delivered through glasses. Recognizing hand movements would be useful for such a design, since there wouldn't be the option of using a touch screen or physical buttons.

More information:

http://www.technologyreview.com/communications/38568/

19 September 2011

Caring, Empathetic Robots

Robots may one day learn to care for and nurture one another, according to research by an OU professor. Computer scientists in the OU Robotic Intelligence and Machine Learning Lab are investigating whether robots can learn to care for one another and, eventually, for humans.


The researchers realized that most organisms are born with instincts that tell them how to survive, but an organism in a rapidly changing environment may find those instincts inadequate and have to learn new skills. From there, the idea of having a nurturer seems the most logical.

More information:

http://www.oudaily.com/news/2011/sep/14/ou-professor-conducts-research-robot-care-nurturin/

18 September 2011

3D 'Daddy Long Legs'

Two ancient types of harvestmen, or 'daddy long legs,' which skittered around forests more than 300 million years ago, are revealed in new three-dimensional virtual fossil models published in the journal Nature Communications. An international team, led by researchers from Imperial College London, has created 3D models of two fossilised species of harvestmen, from the Dyspnoi and Eupnoi suborders. The ancient creatures lived on Earth before the dinosaurs, in the Carboniferous period. The 3D models are providing fresh insights into how these ancient eight-legged creatures, whose 1cm bodies were the size of small buttons, survived in Earth's ancient forests and how harvestmen as a group have evolved. Other scientists have previously suggested that harvestmen were among the first groups on land whose bodies evolved into their modern-day form at a time when other land animals such as spiders and scorpions were still at an early stage in their evolution. The researchers say comparing the 3D fossils of the Dyspnoi and Eupnoi species to modern members of these harvestmen groups provides further proof that ancient and modern species were very similar in appearance, suggesting little change over millions of years. The 3D virtual fossil models have also provided the researchers with further proof that the Dyspnoi and Eupnoi lineages had evolved from a common harvestman ancestor around 305 million years ago. The researchers say their work supports earlier DNA-based studies and is important because it provides a clearer picture of the early evolution of these creatures.


The researchers also found clues as to how both creatures may have lived hundreds of millions of years ago. The team believes that the Eupnoi probably lived in foliage just above the forest floor, which may have helped it to hide from predators lurking on the ground. The 3D model of the Eupnoi revealed that it had long legs with a curvature at the end, similar to the legs of its modern relatives, which use the curved leg parts to grip onto vegetation while moving from leaf to leaf. The researchers also determined that the Eupnoi's body had a very thin and soft outer shell or exoskeleton by analysing a section of the 3D fossil showing a part of its abdomen that had been crushed during the fossilisation process. This indicated to the team the fragility of the Eupnoi's exoskeleton. It is rare to find fossilised remains of harvestmen because their soft, tiny, fragile bodies are difficult to preserve during the fossilisation process. Only around 33 fossilised species have been discovered so far. Currently, most palaeontologists analyse fossils by splitting open a rock and looking at the creatures encased inside. This means that they can often only see part of a three-dimensional fossil and cannot explore all of the fossil's features. The method used in today's study is called 'computed tomography' and it enables researchers to produce highly detailed virtual models using a CT scanning device based at the Natural History Museum in London. In this study, scientists took 3142 X-rays of the fossils and compiled the images into accurate 3D models, using specially designed computer software. This research follows on from previous modelling studies carried out by Imperial researchers on other prehistoric creatures including ancient spiders called Anthracomartus hindi and Eophrynus prestivicii, and an early ancestor of the cockroach called Archimylacris eggintoni.
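
The reconstruction step, turning a stack of scan images into a virtual fossil, can be sketched very simply. The code below is a schematic illustration rather than the software used in the study; the slice count, resolution and density threshold are invented.

```python
import numpy as np

# Stack 2D scan slices into a 3D voxel grid, then keep the voxels dense
# enough to be fossil rather than surrounding rock.
N_SLICES, HEIGHT, WIDTH = 314, 256, 256
DENSITY_THRESHOLD = 0.7

# Stand-in data: each slice would normally be read from one scan image.
slices = [np.random.rand(HEIGHT, WIDTH) for _ in range(N_SLICES)]

volume = np.stack(slices, axis=0)               # (slice, y, x) voxel grid
fossil_mask = volume > DENSITY_THRESHOLD        # voxels classified as fossil

print("voxels in volume:", volume.size)
print("voxels classified as fossil:", int(fossil_mask.sum()))
```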

More information:

http://www.sciencedaily.com/releases/2011/08/110823115149.htm

16 September 2011

Will OnLive Kill the Console?

OnLive is a fairly simple idea. Instead of using a console or a computer to run a game for you, the system runs it on a server over the internet. It's the implications of that idea that, if they work, are nothing short of revolutionary. Your controller or keyboard sends your input over the internet to an OnLive server, which then bounces the result of your action back to your screen. There's no physical disc and not even any download time: you can start a 30-minute game demo in seconds, for free, or rent or buy games that are linked to your account (UK pricing hasn't been announced yet; US pricing is typically around $5 for a three-day rental and $50 for a new game). And that means you can take your games anywhere and play them on anything.
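
The catch in this round trip is latency: every stage between pressing a button and seeing the result adds delay. The figures in the sketch below are purely illustrative assumptions, not OnLive measurements, but they show how the input-to-screen budget adds up.

```python
# Back-of-the-envelope latency budget for one cloud-gaming round trip.
# Every number is an assumption chosen for illustration.
budget_ms = {
    "capture controller input": 5,
    "uplink to server": 20,
    "render frame on server": 16,
    "encode video frame": 5,
    "downlink to player": 20,
    "decode and display": 10,
}

total = sum(budget_ms.values())
for stage, ms in budget_ms.items():
    print(f"{stage:<28}{ms:>4} ms")
print(f"{'total input-to-screen delay':<28}{total:>4} ms")
```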


The same game, with progress tracked, can be played on a PC, Mac, big-screen TV with a ‘micro-console’ and controller, Android tablet or iPad (from this autumn) and even, in the future, on an internet-enabled TV or Blu-ray player. So you can start a game at work in your lunchtime, continue it on a tablet on wi-fi on the way home and finish it on your big TV. For games companies, that means no piracy, and no physical distribution hassles. For gamers, as well as portability and instant availability, it also means you can watch anyone else's game (even talk to them while they play), from a megalomania-inducing bank of screens of games happening right that second.

More information:

http://games.uk.msn.com/previews/will-onlive-kill-the-console-14092011

05 September 2011

Robot Teaches English

Say ‘How do you do’ to Mike and Michelle, face-to-face tutors for English learners. They'll correct your grammar, answer questions, converse on a variety of topics, be there 24/7, and won't charge a dime. And they're doing very well, thank you. The on-screen ‘English Tutor’ interactive robots and their creator (from Pasadena City College) are heading to England's Exeter University in October as one of four finalists in the 2011 Loebner Prize for Artificial Intelligence.

Over the years, the program has grown more sophisticated, and the robots can now chat on 25 topics across 2,000 available conversations. The robots can detect the 800 most common errors English learners make, Lee said, and they know all the irregular verbs, provide different tenses, explain grammatical terms and give advice on how to learn English. Users still have to type in their questions rather than speak, although he said users with speech-recognition software can talk into the microphone.
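
The error-detection side of such a tutor can be pictured as a bank of pattern-and-correction rules. The snippet below is a toy example of that approach, not the English Tutor's actual rules; the three patterns are invented.

```python
# Each rule pairs a common learner mistake with a suggested correction.
RULES = [
    ("he don't", "he doesn't"),
    ("more better", "better"),
    ("I am agree", "I agree"),
]

def check_sentence(sentence: str) -> list[str]:
    """Return a tutoring hint for every known error found in the sentence."""
    hints = []
    lowered = sentence.lower()
    for mistake, correction in RULES:
        if mistake.lower() in lowered:
            hints.append(f"Instead of '{mistake}', try '{correction}'.")
    return hints

for hint in check_sentence("I am agree that he don't like it."):
    print(hint)
```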

More information:

http://www.pasadenastarnews.com/news/ci_18767575