30 July 2014

Art by Algorithm

A computer algorithm that modifies images by mimicking the rules of natural selection can work with people to evolve novel works of art. Researchers at Nagoya University in Japan built the software after studying how artistic methods are passed down through generations. Paintings that have survived to the present were created by scaling, rotating and combining motifs that already existed, a process that appeared to echo biological evolution, in which traits are inherited and altered from parent to child.


To use the program, a person first indicates the style of art that they like. Then they select a picture from a few preloaded images to feed into an algorithm. The algorithm mutates the image in different ways: chopping it in half, overlaying it on another image or randomly altering it. The resulting images are either culled or kept depending on how closely they adhere to the user's initial stylistic choices, and the process repeats. The person can stop the process at any time and select an image they like, or let it keep running. Finally, the person adds colour to the image, as the program currently manipulates the images in black and white.
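

In outline, this is a simple interactive evolutionary algorithm: mutate candidate images, score them against the user's stated preferences, cull the worst, and repeat. The Python sketch below illustrates that loop with invented mutation operators and a stand-in scoring function; it is an illustration of the approach described above, not the Nagoya group's code.

    import random
    import numpy as np

    def crop_half(img):
        """Keep the left half of the image and tile it back to full width."""
        half = img[:, : img.shape[1] // 2]
        return np.concatenate([half, half], axis=1)

    def overlay(img, other):
        """Blend two images of equal shape."""
        return (img + other) / 2.0

    def randomly_alter(img, amount=0.1):
        """Perturb the image with small random noise."""
        return np.clip(img + amount * np.random.randn(*img.shape), 0.0, 1.0)

    def mutate(img, population):
        """Apply one randomly chosen mutation, as described above."""
        op = random.choice([crop_half,
                            lambda i: overlay(i, random.choice(population)),
                            randomly_alter])
        return op(img)

    def evolve(seeds, style_score, generations=50, pop_size=20):
        """style_score(img) -> float stands in for the user's stylistic choices."""
        population = list(seeds)
        for _ in range(generations):
            offspring = [mutate(random.choice(population), population)
                         for _ in range(pop_size)]
            # Cull or keep: retain the images that best match the chosen style.
            population = sorted(population + offspring,
                                key=style_score, reverse=True)[:pop_size]
        return population[0]

    # Toy run: evolve 8x8 grayscale images toward high contrast.
    seeds = [np.random.rand(8, 8) for _ in range(5)]
    best = evolve(seeds, style_score=np.std)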

More information:

23 July 2014

Smart Sensors Offer Haptic Feedback

Fraunhofer ISC's Centre for Smart Materials (CeSMa) has demonstrated smart materials that can be used to create intelligent sensors with haptic feedback. Intelligent and adaptive materials possess properties that react to external factors such as magnetic or electrostatic fields: their consistency, flow properties, expansion behaviour, or pressure sensitivity can change under the influence of these factors, allowing them to act as sensors or actuators. CeSMa, part of the Fraunhofer Institute for Silicate Research (ISC) in Würzburg, Germany, uses such materials to develop prototypes for many industrial sectors. Switches and pressure sensors based on highly sensitive piezoelectric layers or dielectric elastomer sensors (DES), which are extremely stretchy, can adapt to a variety of haptic requirements and mechanical sensing functions. While DES are better suited to soft surfaces, piezoelectric sensors are more easily applied to hard materials such as steel. DES represent a new category of mechanical sensors that can measure strain, forces, and pressure, and they can be integrated into structures subject to significant deformation; an example would be seat occupancy sensors that provide additional information on load distribution. CeSMa researchers have developed sensor mats that react very sensitively to pressure: car seats equipped with these intelligent DES mats can sense the position of the passenger and help to reduce the risk of injury during an accident. Other potential applications lie in the field of geriatric care.
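

A back-of-the-envelope illustration of why an extremely stretchy capacitor makes a useful mechanical sensor: stretching a dielectric elastomer enlarges its electrode area and thins its dielectric, so its capacitance rises measurably. The Python sketch below assumes idealised parallel-plate geometry and invented material values rather than CeSMa's actual designs.

    # Sketch: a dielectric elastomer sensor modelled as a parallel-plate
    # capacitor, C = eps0 * eps_r * A / d. Stretching the elastomer grows
    # the electrode area A and thins the dielectric d, so capacitance rises.
    # All values below are illustrative assumptions, not CeSMa specifications.

    EPS0 = 8.854e-12          # vacuum permittivity, F/m
    EPS_R = 3.0               # assumed relative permittivity of the elastomer

    def capacitance(area_m2, thickness_m):
        return EPS0 * EPS_R * area_m2 / thickness_m

    def stretched(area_m2, thickness_m, stretch):
        """Equal-biaxial stretch of an incompressible film: area scales
        with stretch**2 and thickness with 1/stretch**2."""
        return area_m2 * stretch**2, thickness_m / stretch**2

    a0, d0 = 1e-4, 50e-6      # 1 cm^2 electrode on a 50 um film (assumed)
    c0 = capacitance(a0, d0)
    a1, d1 = stretched(a0, d0, stretch=1.1)   # 10% linear stretch
    c1 = capacitance(a1, d1)
    print(f"capacitance rises from {c0 * 1e12:.1f} pF to {c1 * 1e12:.1f} pF")
    # Readout electronics invert this relation to infer strain or pressure
    # from a measured capacitance.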


Thin piezoelectric layers on steel-foil carriers offer great design freedom with respect to size, shape, and curvature. This technology can be used to implement "invisible" switches and sensors in car interiors, for instance on the instrument panel; insensitive to dust and dirt, such surfaces remain functional even in rough environments. In addition, electrostatic fields can be integrated into the foils to serve as proximity sensors, so a control panel can generate a proximity signal and at the same time provide haptic feedback when activated. The combination of proximity and pressure sensing with haptic feedback opens new options in the design of human-machine interfaces (HMIs). The sensor concepts developed by CeSMa also make it possible to monitor safety-relevant components continuously or periodically. Another CeSMa technique is suited to detecting structural damage in glass, carbon-fibre, or steel structures: the Würzburg scientists developed ultrasound transducers based on piezoelectric materials that transform mechanical strain into electric signals, or electric control voltages into movement. The same principle can be applied to carrier materials with high operating temperatures; towards this end, the researchers developed high-temperature signal transducers based on novel monocrystal materials for permanent structural monitoring in hot environments. One application is monitoring pipelines operating at temperatures of up to 600°C in chemical and power plants.

More information:

22 July 2014

RoboCup

MESSI v the Machine was how some commentators touted the World Cup final, inspired by the disciplined way the German team dismantled Brazil in the semi-finals. But despite such caricatures of Teutonic precision, German players are only human. So as the latest edition of RoboCup, a competition for robot soccer players rather than flesh-and-blood ones, kicks off on July 19th in João Pessoa, Brazil’s easternmost city, a question that will be on many minds is: when will real machines conquer the sport? When the first RoboCup was held, in 1997, those who launched it set a target of 2050 for engineers to produce a humanoid robot team that would rival the champions of the older competition. Judged by the plodding clumsiness of some of the RoboCup players, that goal might seem far-fetched. But it is easy to underestimate how quickly robotics is improving. Self-driving cars and delivery drones, which seemed hopelessly futuristic just a decade ago, are now topics of serious business interest.


By comparison with the corporate investments of the likes of Google in electric cars, the teams competing in this year’s RoboCup have shoestring budgets. But the tournament includes features that the organisers hope will accelerate innovation without the incentive of cash. One is a clever combination of competition and co-operation. Leading up to the playoffs, teams prepare new strategies and fine-tune their hardware and software in secret. Immediately after the finals have been played, however, all must publish their methods, thus raising the bar for everyone the following year. Another feature is that there are limits to how far teams can push their hardware, to encourage them to develop smart routes to victory rather than relying on mere brute force. RoboCup’s leagues range from a little league of miniature cylinders on wheels, in which each entire team is controlled by one computer using input from overhead cameras, to a fully limbed humanoid league, whose players are more akin to R2-D2’s faithful companion, C-3PO.
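

To picture the little league's architecture concretely: one off-board computer sees the whole field through the overhead cameras and issues a command to every robot each frame. Below is a toy Python sketch of a single control tick, with an invented chase-the-ball policy standing in for real team strategies; it is not the actual RoboCup software stack.

    import math

    # Toy central controller for a team of wheeled robots: all sensing
    # comes from overhead cameras and one computer commands every player.
    # The policy and coordinates below are invented for illustration.

    def chase_ball(robot_xy, ball_xy, speed=1.0):
        """Velocity command pointing a robot straight at the ball."""
        dx, dy = ball_xy[0] - robot_xy[0], ball_xy[1] - robot_xy[1]
        dist = math.hypot(dx, dy) or 1e-9
        return (speed * dx / dist, speed * dy / dist)

    def control_step(robots, ball):
        """One tick: compute a command for every robot from the shared view."""
        return {rid: chase_ball(xy, ball) for rid, xy in robots.items()}

    # Example tick with made-up field coordinates (metres).
    robots = {1: (0.0, 0.0), 2: (1.0, -0.5)}
    ball = (2.0, 1.0)
    print(control_step(robots, ball))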

More information:

20 July 2014

Mind Controlled Google Glass

The free, open-source ‘MindRDR’ software connects Google Glass with a low-cost brainwave-reading headset that enables users to operate the device by concentrating instead of controlling it with voice commands or by tilting the head.

Though the app can so far only take photos and publish them to the internet, it could point to a future generation of touch-free interfaces for consumer technology that don’t need users to wave their hands around or talk embarrassingly to an inanimate object.
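

The interaction model is straightforward: the headset streams a scalar concentration estimate, and crossing a threshold fires the shutter. Below is a Python sketch of that trigger logic; the 0-100 scale, read_attention and take_photo are simulated stand-ins, not the real MindRDR or headset interfaces.

    import random
    import time

    ATTENTION_THRESHOLD = 80   # assumed 0-100 scale, typical of consumer EEG headsets

    def run_trigger(read_attention, take_photo, poll_hz=10, ticks=100):
        """Fire the camera when concentration crosses the threshold, then
        re-arm only after the user relaxes (simple hysteresis)."""
        armed = True
        for _ in range(ticks):
            level = read_attention()           # scalar concentration estimate
            if armed and level >= ATTENTION_THRESHOLD:
                take_photo()                   # e.g. capture and share the image
                armed = False
            elif level < ATTENTION_THRESHOLD / 2:
                armed = True
            time.sleep(1.0 / poll_hz)

    # Demo with a simulated headset and camera.
    run_trigger(read_attention=lambda: random.randint(0, 100),
                take_photo=lambda: print("click: photo captured"))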

More information:

17 July 2014

Ethical & Autonomous Robots

The engineering of autonomous, morally competent robots might include a perfectly crafted conscience capable of distinguishing right from wrong and acting on it. In the near future, artificial intelligence entities might be better moral creatures than we are, or at least better decision makers when facing certain dilemmas. Since 2002, the ethics of artificial intelligence has been divided into two subfields: machine ethics and roboethics.


Naturally, to be able to create such morally autonomous robots, researchers have to agree on some fundamental pillars: what moral competence is, and what humans would expect from robots working side by side with them, sharing decision making in areas like healthcare and warfare. At the same time, another question arises: what responsibility do humans bear in creating artificial intelligence with moral autonomy? And the leading research question remains: what would we expect of morally competent robots?

More information:

15 July 2014

Brain Speech Synthesizer

Could a person who is paralyzed and unable to speak, like physicist Stephen Hawking, use a brain implant to carry on a conversation? That’s the goal of an expanding research effort at U.S. universities, which over the last five years has proved that recording devices placed under the skull can capture brain activity associated with speaking. Researchers at the University of California, San Francisco, are working towards building a wireless brain-machine interface that could translate brain signals directly into audible speech using a voice synthesizer.


The effort to create a speech prosthetic builds on the success of experiments in which paralyzed volunteers have used brain implants to manipulate robotic limbs with their thoughts. That technology works because scientists can roughly interpret the firing of neurons inside the brain’s motor cortex and map it to arm or leg movements. Researchers are now trying to do the same for speech. It’s a trickier task, in part because complex language is unique to humans and the technology can’t easily be tested in animals.
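

One way to picture that mapping step is as a regression problem: learn a transform from neural firing rates to output parameters, whether limb velocities or acoustic features for a synthesizer. The Python sketch below fits a linear decoder on synthetic data; the dimensions and signals are invented for illustration, not recordings from a real implant.

    import numpy as np

    # Toy decoder: learn a linear map from motor-cortex firing rates to
    # output parameters (limb velocities, or acoustic features for speech).
    # All data here are synthetic; real decoders are trained on recordings
    # from implanted electrode arrays.

    rng = np.random.default_rng(0)
    n_samples, n_neurons, n_outputs = 500, 96, 3   # e.g. a 96-channel array

    true_map = rng.normal(size=(n_neurons, n_outputs))
    rates = rng.poisson(lam=5.0, size=(n_samples, n_neurons)).astype(float)
    targets = rates @ true_map + rng.normal(scale=0.5, size=(n_samples, n_outputs))

    # Fit the decoder by least squares: targets ~ rates @ w_hat.
    w_hat, *_ = np.linalg.lstsq(rates, targets, rcond=None)

    # Decode a new pattern of firing into output parameters.
    new_rates = rng.poisson(lam=5.0, size=(1, n_neurons)).astype(float)
    print(new_rates @ w_hat)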

More information:

11 July 2014

Drone Lighting

Lighting is crucial to the art of photography. But lights are cumbersome and time-consuming to set up, and outside the studio it can be prohibitively difficult to position them where, ideally, they ought to go. Researchers at MIT and Cornell University hope to change that by providing photographers with squadrons of small, light-equipped autonomous robots that automatically assume the positions necessary to produce lighting effects specified through a simple, intuitive, camera-mounted interface. They have taken a first step toward realizing this vision, presenting a prototype system that uses an autonomous miniature helicopter to produce a difficult effect called ‘rim lighting’, in which only the edge of the photographer’s subject is strongly lit.

 
With the new system, the photographer indicates the direction from which the rim light should come, and the miniature helicopter flies to that side of the subject. The photographer then specifies the width of the rim as a percentage of its initial value, repeating that process until the desired effect is achieved. Thereafter, the robot automatically maintains the specified rim width. If somebody is facing you, the rim you would see is on the edge of the shoulder, but if the subject turns sideways, so that he’s looking 90 degrees away from you, then he’s exposing his chest to the light, which means that you’ll see a much thicker rim light. So in order to compensate for the change in the body, the light has to change its position quite dramatically.
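

Maintaining the rim width amounts to a feedback loop: measure the rim in the current frame, compare it with the target, and move the light to shrink the error. Here is a proportional-control sketch in Python with an invented one-dimensional width model standing in for the camera's image analysis; it illustrates the control idea, not MIT's actual system.

    # Toy 1-D model of rim-lighting control: the rim width seen by the
    # camera varies with the light's angle around the subject. The linear
    # width model below is invented and stands in for real image analysis
    # of the camera feed.

    def measure_rim_width(light_angle_deg):
        """Pretend wider angles away from directly-behind give wider rims."""
        return max(0.0, 0.5 * light_angle_deg)   # width in pixels

    def maintain_rim(target_width, angle=30.0, gain=0.2, steps=20):
        """Proportional controller: nudge the light toward the angle that
        holds the rim at its target width."""
        for _ in range(steps):
            width = measure_rim_width(angle)
            error = target_width - width
            angle += gain * error          # command the drone to a new angle
        return angle, measure_rim_width(angle)

    angle, width = maintain_rim(target_width=12.0)
    print(f"settled at {angle:.1f} deg with rim width {width:.1f} px")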

More information:

05 July 2014

HCII 2014 Paper

On the 26th of June, I presented a co-authored paper titled ‘Effects of gender mapping on perception of emotion from upper body movement in virtual characters’. The paper was presented at the 6th International Conference on Virtual, Augmented and Mixed Reality (VAMR 2014), held as part of HCI International 2014, in the session ‘Interactive Technologies for Virtual and Augmented Reality’, in Heraklion, Crete, Greece, 22-27 June 2014.


The paper investigated the effects of gender congruency on the perception of emotion from upper body movements. Six general categories of emotion were mapped onto virtual characters with male and female embodiments. A significant effect of gender mapping was found in the perception ratings of three motions (anger, fear and happiness).

A draft version of the paper can be downloaded from here.