28 November 2011

Stonehenge Hidden Landscapes

Archaeologists at the University of Birmingham are heading to Stonehenge to lead Britain's biggest-ever virtual excavation, a far from superficial look at the Stonehenge landscape. The Stonehenge Hidden Landscapes Project will use the latest geophysical imaging techniques to visually recreate the iconic prehistoric monument and its surroundings as they were more than 4,000 years ago. The project begins midway through one of Stonehenge's busiest tourist seasons in years. With more than 750,000 visitors annually, the site is one of the UK's most popular tourist hotspots.


The Stonehenge Hidden Landscapes Project, which started in early July, aims to bring together the most sophisticated geophysics team ever engaged in a single archaeological project in Britain, working alongside specialists in British prehistory and landscape archaeology in a three-year collaboration. The scientists will map the Wiltshire terrain as well as virtually excavate it, accurately pinpointing its buried archaeological remains. Once processed, the millions of measurements will be analysed and even incorporated into gaming technology to produce 2D and 3D images.
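
The article does not say how those millions of readings will be turned into images; a common first step is simply to grid scattered point measurements onto a regular raster that can then be rendered in 2D or fed into 3D tools. The sketch below is only an illustration of that idea; the cell size and example values are assumptions, not details from the project.

    # Hedged sketch: averaging scattered geophysical readings (x, y, value)
    # into a 2D raster. The 0.5 m cell size and the sample readings are
    # illustrative assumptions, not figures from the project.
    import numpy as np

    def grid_survey(points, width_m, height_m, cell_m=0.5):
        """Average scattered (x, y, value) readings into a 2D grid."""
        nx, ny = int(width_m / cell_m), int(height_m / cell_m)
        total = np.zeros((ny, nx))
        count = np.zeros((ny, nx))
        for x, y, value in points:
            ix, iy = int(x / cell_m), int(y / cell_m)
            if 0 <= ix < nx and 0 <= iy < ny:
                total[iy, ix] += value
                count[iy, ix] += 1
        # Cells with no readings are left as NaN so they render as gaps.
        return np.where(count > 0, total / np.maximum(count, 1), np.nan)

    # Example: three magnetometer-style readings over a 10 m x 10 m patch.
    raster = grid_survey([(1.2, 3.4, 0.8), (1.3, 3.6, 1.1), (7.0, 2.0, -0.4)],
                         width_m=10, height_m=10)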

More information:

http://heritage-key.com/blogs/ann/stonehenge-hidden-landscapes-project-virtual-excavation-digital-recreation

25 November 2011

Contact Lens Displays Pixels on Eyes

The future of augmented-reality technology is here - as long as you're a rabbit. Bioengineers have placed the first contact lenses containing electronic displays into the eyes of rabbits as a first step towards proving they are safe for humans. The bunnies suffered no ill effects, the researchers say. The first version may have only one pixel, but higher-resolution lens displays - like those seen in Terminator - could one day be used as satnav enhancers showing directional arrows, for example, or to flash up texts and emails - perhaps even video. In the shorter term, the breakthrough also means people with conditions such as diabetes and glaucoma may gain a novel way to monitor them. The test lens was powered remotely using a 5-millimetre-long antenna printed on the lens to receive gigahertz-range radio-frequency energy from a transmitter placed ten centimetres from the rabbit's eye.


To focus the light on the rabbit's retina, the contact lens itself was fabricated as a Fresnel lens, in which a series of concentric annular sections is used to generate the ultrashort focal length needed. The researchers found that the lens's LED glowed brightly up to a metre away from the radio source in free space, but needed to be within 2 centimetres when the lens was placed in a rabbit's eye, because wireless reception was affected by body fluids. All the 40-minute tests on live rabbits were performed under general anaesthetic and showed that the display worked well, and fluorescence tests showed no damage or abrasions to the rabbits' eyes after the lenses were removed. While a higher-resolution display is next on the agenda, there are uses for this small one: a display with a single controllable pixel could be used in gaming, training, or giving warnings to the hearing impaired.
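
The article gives the working distances but not the transmitter power, exact frequency, or antenna gains, so the sketch below only illustrates how free-space received power falls with distance (the Friis equation). All the numeric values are assumptions, and the simple far-field model says nothing about the body-fluid losses mentioned above.

    # Hedged sketch: free-space (Friis) estimate of received RF power versus
    # distance. Transmit power, frequency, and antenna gains are assumed;
    # the article only says "gigahertz-range" energy from 10 cm away.
    import math

    C = 3.0e8  # speed of light, m/s

    def friis_received_power(p_tx_w, freq_hz, distance_m, g_tx=1.0, g_rx=0.05):
        """Received power (W) over an ideal free-space link."""
        wavelength = C / freq_hz
        return p_tx_w * g_tx * g_rx * (wavelength / (4 * math.pi * distance_m)) ** 2

    for d in (0.10, 0.50, 1.00):  # 10 cm, 50 cm, 1 m
        p = friis_received_power(p_tx_w=1.0, freq_hz=2.4e9, distance_m=d)
        print(f"{d:4.2f} m : {p * 1e6:8.1f} microwatts")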

More information:

http://www.newscientist.com/blogs/onepercent/2011/11/electronic-contact-lens-displa.html

22 November 2011

Mimicking the Brain in Silicon

For decades, scientists have dreamed of building computer systems that could replicate the human brain’s talent for learning new tasks. MIT researchers have now taken a major step toward that goal by designing a computer chip that mimics how the brain’s neurons adapt in response to new information. This phenomenon, known as plasticity, is believed to underlie many brain functions, including learning and memory. With about 400 transistors, the silicon chip can simulate the activity of a single brain synapse — a connection between two neurons that allows information to flow from one to the other. The researchers anticipate this chip will help neuroscientists learn much more about how the brain works, and could also be used in neural prosthetic devices such as artificial retinas, says Chi-Sang Poon, a principal research scientist in the Harvard-MIT Division of Health Sciences and Technology. There are about 100 billion neurons in the brain, each of which forms synapses with many other neurons. A synapse is the gap between two neurons (known as the presynaptic and postsynaptic neurons). The presynaptic neuron releases neurotransmitters, such as glutamate and GABA, which bind to receptors on the postsynaptic cell membrane, activating ion channels.


Opening and closing those channels changes the cell’s electrical potential. If the potential changes dramatically enough, the cell fires an electrical impulse called an action potential. All of this synaptic activity depends on the ion channels, which control the flow of charged atoms such as sodium, potassium and calcium. Those channels are also key to two processes known as long-term potentiation (LTP) and long-term depression (LTD), which strengthen and weaken synapses, respectively. The MIT researchers designed their computer chip so that the transistors could mimic the activity of different ion channels. While most chips operate in a binary, on/off mode, current flows through the transistors on the new brain chip in analog, not digital, fashion. A gradient of electrical potential drives current to flow through the transistors just as ions flow through ion channels in a cell. The MIT researchers plan to use their chip to build systems to model specific neural functions, such as the visual processing system. Such systems could be much faster than digital computers. Even on high-capacity computer systems, it takes hours or days to simulate a simple brain circuit. With the analog chip system, the simulation is even faster than the biological system itself. Another potential application is building chips that can interface with biological systems. This could be useful in enabling communication between neural prosthetic devices such as artificial retinas and the brain.
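
The LTP/LTD behaviour described above is often modelled in software with a spike-timing-dependent plasticity (STDP) rule: a synapse is strengthened when the postsynaptic neuron fires just after the presynaptic one, and weakened when the order is reversed. The sketch below is that standard textbook rule, not the MIT chip's actual analog circuit, and its constants are illustrative assumptions.

    # Hedged sketch: a textbook STDP rule reproducing the LTP/LTD behaviour
    # discussed above. This is NOT the MIT chip's circuit; the constants are
    # illustrative assumptions.
    import math

    A_PLUS, A_MINUS = 0.05, 0.055      # maximum weight change per spike pair
    TAU_PLUS, TAU_MINUS = 20.0, 20.0   # decay time constants, milliseconds

    def stdp_delta_w(t_pre_ms, t_post_ms):
        """Weight change for one pre/post spike pair.

        Post fires after pre  -> potentiation (LTP, positive change).
        Post fires before pre -> depression  (LTD, negative change).
        """
        dt = t_post_ms - t_pre_ms
        if dt >= 0:
            return A_PLUS * math.exp(-dt / TAU_PLUS)
        return -A_MINUS * math.exp(dt / TAU_MINUS)

    w = 0.5                                    # initial synaptic weight
    for t_pre, t_post in [(10, 15), (40, 38), (70, 72)]:
        w = min(max(w + stdp_delta_w(t_pre, t_post), 0.0), 1.0)
        print(f"pre={t_pre} ms, post={t_post} ms -> w={w:.3f}")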

More information:

http://web.mit.edu/newsoffice/2011/brain-chip-1115.html

19 November 2011

Archeovirtual 2011 Seminar

Yesterday, I gave an invited talk at the ‘V-Must Workshop 2: Virtual Heritage, Games and Movie’ at Archeovirtual 2011, held in Paestum, Italy. The title of my talk was ‘Serious Games’.


My talk focused on the main technologies used for serious games. In addition, I presented two projects currently running at iWARG: crowd modeling and procedural modeling.

More information:

http://www.vhlab.itabc.cnr.it/archeovirtual/workshop.htm

13 November 2011

Tracking Multiple Athletes

EPFL’s Computer Vision Laboratory (CVLab) now has a new tool that makes it possible to follow multiple players at once on a field or court, even when they’re buried under a pile of bodies in a rugby match or crouched behind another player. The athletes are represented on a screen with a superimposed image bearing their jersey color and number, so spectators, referees, and coaches can easily follow individuals without mixing them up. And there’s no need for the players to wear extra gear or RFID chips. The system is made up of eight standard cameras - two on each side of the field or court, two that film from above, and two that zoom - and three algorithms. After a tackle, goal, basket, or pileup, the system re-attributes the jersey number to each player automatically. No more getting lost in the crowd.


Three algorithms make the system work. The first detects individuals at a specific moment in time, independently of where they were the moment before or after. To do this, it slices the playing area into small 25 cm² squares, removes the background in all the images simultaneously, and from this deduces the probability of a player being present in each of the small squares. The other two algorithms connect the results obtained for each moment in order to establish individual trajectories. All three use global optimization methods, resulting in a very robust system capable of tracking people reliably in real time. The researchers are also working on other applications, such as tracking pedestrians to monitor traffic in an area or following the movement of customers in a store for marketing purposes.
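
The first of the three algorithms can be pictured as building an occupancy probability map over the ground plane. The sketch below is only a simplified illustration of that idea under assumed inputs (binary foreground masks and a placeholder cell-projection function); the actual CVLab detector uses a full probabilistic model.

    # Hedged sketch of the first stage described above: fusing per-camera
    # foreground masks into one occupancy probability per ground-plane square.
    # The naive averaging and the project_cell() geometry are assumptions.
    import numpy as np

    GRID_W, GRID_H = 40, 20   # ground plane divided into small squares

    def cell_foreground_ratio(mask, cell_box):
        """Fraction of foreground pixels inside one cell's image footprint."""
        x0, y0, x1, y1 = cell_box
        patch = mask[y0:y1, x0:x1]
        return float(patch.mean()) if patch.size else 0.0

    def occupancy_map(masks, project_cell):
        """masks: list of binary foreground masks, one per camera.
        project_cell(camera_index, gx, gy) -> (x0, y0, x1, y1) image box."""
        occ = np.zeros((GRID_H, GRID_W))
        for gy in range(GRID_H):
            for gx in range(GRID_W):
                ratios = [cell_foreground_ratio(m, project_cell(c, gx, gy))
                          for c, m in enumerate(masks)]
                # A cell is likely occupied when most cameras see foreground
                # in its projected footprint.
                occ[gy, gx] = float(np.mean(ratios))
        return occ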

More information:

http://actu.epfl.ch/news/new-technology-tracks-multiple-athletes-at-once/

10 November 2011

Robots and Avatars Seminar

Yesterday, I gave an invited talk at the ‘Robots and Avatars’ workshop, held at Coventry University, Faculty of Engineering and Computing, Department of Computing. The title of my talk was ‘Human-machine interfaces’.


My talk covered human-machine interfaces for control and communication between humans and machines, with an emphasis on Brain-Computer Interfaces (BCIs). In particular, I focused on the NeuroSky and Emotiv interfaces for robot control.

More information:

http://iwarg.blogspot.com/2011/11/robots-and-avatars.html

06 November 2011

Controlling An Avatar With Your Brain

In the 2011 movie ‘Source Code’, US Army Captain Colter Stevens has to stop a dangerous terrorist from detonating a bomb on a train. But because he is paralyzed in real life, Stevens is sent on the mission through an avatar he guides with his mind. Sounds far-fetched? Too sci-fi? One Israeli professor is taking technology along those lines further than you could ever imagine with his latest project: controlling your very own clone-avatar.


Researchers at the Advanced Virtuality Lab (AVL) at the Interdisciplinary Center (IDC) in Herzliya, Israel, have been studying and experimenting with the next generation of human-computer interfaces and their impact on individuals and society for the last three years, along with an international team of experts. The AVL's main activity is to build the virtual worlds and interfaces that will be used in the future and to investigate human behavior and the human mind in virtual reality settings.

More information:

http://nocamels.com/2011/10/controlling-an-avatar-with-your-brain-israeli-lab-is-trying/

04 November 2011

A Versatile Touch Sensor

We live in an increasingly touchy-feely tech world, with various ways for smart phones and tablet computers to sense our finger taps and gestures. Now a new type of touch technology, developed by researchers at the University of Munich and the Hasso Plattner Institute, could lead to touch sensitivity being added to everyday items such as clothing, headphone wires, coffee tables, and even pieces of paper. The new touch technology relies on something called Time Domain Reflectometry (TDR), which has been used for decades to find damage in underwater cables. TDR is simple in theory: send a short electrical pulse down a cable and wait until a reflection of the pulse comes back. Based on the known speed of the pulse and the time it takes to come back, software can determine the position of the problem—damage in the line or some sort of change in electrical conductance.


The TDR implementation is straightforward. For one demonstration, researchers taped two parallel strips of copper to a piece of paper. Metal clips connect the copper strips to a pulse generator and detector. Picosecond-long electrical pulses are sent out, and if there's any change in capacitance between the two strips of copper—produced by a finger close to or touching the wires, for instance—part of the pulse is reflected back. An oscilloscope shows the changing waveform produced by the reflected pulse, and software on a connected computer analyzes the waveform to determine the position of the touch. Making a surface touch-sensitive requires only two wires (or metal traces of conductive ink), which can be configured in various patterns to get the necessary coverage. In contrast, a capacitive touch screen like the one in the iPhone uses a matrix of wires coming out of two sides of the screen.
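
The position calculation itself is just distance = pulse speed x round-trip time / 2. Below is a minimal sketch of that arithmetic, assuming a typical velocity factor for a copper line; the researchers' actual figures are not given in the article.

    # Hedged sketch of the TDR distance calculation described above.
    # The velocity factor and the example timing are illustrative assumptions.
    C = 3.0e8                # speed of light in vacuum, m/s
    VELOCITY_FACTOR = 0.66   # assumed fraction of c for a pulse in the line

    def touch_position_m(round_trip_s, velocity_factor=VELOCITY_FACTOR):
        """Distance from the pulse generator to the reflection (the touch)."""
        return C * velocity_factor * round_trip_s / 2.0

    # A reflection arriving about 3 nanoseconds after the pulse was sent
    # corresponds to a touch roughly 30 cm down the strip.
    print(f"{touch_position_m(3.0e-9):.2f} m")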

More information:

http://www.technologyreview.com/computing/39036/