27 February 2019

Gestures for Social Robots

Social robots are designed to communicate naturally with human beings, assisting them with a variety of tasks. The effective use of gestures could greatly enhance human-robot interaction, allowing robots to communicate both verbally and non-verbally. Researchers at Vrije Universiteit Brussel, in Belgium, have recently introduced a generic gesture method and used it to study the influence of different design aspects. The method could overcome the difficulty of transferring gestures to robots of different shapes and configurations: users input a robot's morphological information, and the tool uses this data to calculate the gestures for that robot. To ensure that their method would apply to different types of robots, the researchers drew inspiration from a human base model, which consists of chains and blocks that model the various rotational possibilities of the human body.

To construct the general framework behind their method, the researchers used the human base model as a reference and assigned a reference frame to each joint block. Because different features matter for different kinds of gestures, the method works in two modes: block mode and end-effector mode. Block mode calculates gestures, such as emotional expressions, in which the overall arm placement is crucial. End-effector mode, on the other hand, calculates gestures in which the position of the end effector is important, such as during object manipulation or pointing. In their study, the researchers applied the method to a virtual model of a robot called Probo, using this example to illustrate how the method could help to study the configuration of different joints and joint angle ranges in gestures.
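The distinction between the two modes can be illustrated with a toy planar two-link arm. This is only a minimal sketch of the general idea, not the researchers' actual method or the Probo model: the class name, link lengths, and posture fields below are illustrative assumptions.

```python
import math

class Arm2D:
    """Toy planar arm with two revolute joints (shoulder, elbow)."""

    def __init__(self, l1, l2):
        self.l1, self.l2 = l1, l2  # link lengths in metres

    def forward(self, t1, t2):
        # Forward kinematics: joint angles -> end-effector position.
        x = self.l1 * math.cos(t1) + self.l2 * math.cos(t1 + t2)
        y = self.l1 * math.sin(t1) + self.l2 * math.sin(t1 + t2)
        return x, y

    def block_mode(self, posture):
        # "Block mode": the overall arm placement matters (e.g. emotional
        # poses), so the reference posture's joint angles are taken directly.
        return posture["shoulder"], posture["elbow"]

    def end_effector_mode(self, x, y):
        # "End-effector mode": only the hand position matters (pointing,
        # manipulation); solve standard 2-link inverse kinematics for (x, y).
        c2 = (x * x + y * y - self.l1**2 - self.l2**2) / (2 * self.l1 * self.l2)
        t2 = math.acos(max(-1.0, min(1.0, c2)))  # clamp for numerical safety
        t1 = math.atan2(y, x) - math.atan2(self.l2 * math.sin(t2),
                                           self.l1 + self.l2 * math.cos(t2))
        return t1, t2

arm = Arm2D(0.3, 0.25)
# Block mode: copy a stored posture's angles.
pose = arm.block_mode({"shoulder": 0.5, "elbow": 1.0})
# End-effector mode: compute angles that place the hand at a target point.
t1, t2 = arm.end_effector_mode(0.4, 0.2)
x, y = arm.forward(t1, t2)  # recovers the target point
```

A real implementation would work over the full chain-and-block human base model rather than a single planar arm, but the same split applies: one mode copies joint placements, the other solves for an end-effector target.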

24 February 2019

Understanding Optical Illusions Using Computer Vision

Optical illusions, images that deceive the human eye, are a fascinating research topic, as studying them can provide valuable insight into human cognition and perception. Researchers at Flinders University, in Australia, have recently carried out a study using a computer vision model to predict whether an optical illusion will occur and how strong its effect will be.

In their study, the researchers evaluated a computational filtering model designed to capture the lateral inhibition of retinal ganglion cells and their responses to different geometric illusions. With this approach, the researchers hoped to achieve a better understanding of these illusions and to predict the degree of their effect.
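Lateral inhibition is commonly modeled with a center-surround, difference-of-Gaussians (DoG) filter: a narrow excitatory center minus a wide inhibitory surround. The sketch below shows the general idea on a one-dimensional luminance step; the kernel sizes are illustrative assumptions, not the parameters of this study's model.

```python
import numpy as np

def gaussian(size, sigma):
    # Discrete 1-D Gaussian kernel, normalized to sum to 1.
    x = np.arange(size) - size // 2
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def dog_kernel(size=21, sigma_center=1.0, sigma_surround=3.0):
    # Narrow excitatory center minus wide inhibitory surround
    # (a common stand-in for retinal lateral inhibition).
    return gaussian(size, sigma_center) - gaussian(size, sigma_surround)

# A step edge in luminance: dark region, then bright region.
signal = np.concatenate([np.zeros(50), np.ones(50)])
response = np.convolve(signal, dog_kernel(), mode="same")

# The filtered response overshoots on the bright side of the edge and
# undershoots on the dark side, while flat regions stay near zero --
# the kind of distortion such models use to explain geometric illusions.
```

Extending this to two dimensions and tuning the center and surround widths is essentially how such filtering models are applied to illusion images.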

23 February 2019

Bionic Hand Allows Proprioception

Researchers have developed a next-generation bionic hand that allows amputees to regain their proprioception. The bionic hand, developed by researchers from EPFL, the Sant'Anna School of Advanced Studies in Pisa and the A. Gemelli University Polyclinic in Rome, enables amputees to regain a very subtle, close-to-natural sense of touch. Scientists managed to reproduce the feeling of proprioception, which is our brain's capacity to instantly and accurately sense the position of our limbs during and after movement, even in the dark or with our eyes closed.

This next-generation device allows patients to reach out for an object on a table and to ascertain an item's consistency, shape, position and size without having to look at it. The prosthesis has been successfully tested on several patients and works by stimulating the nerves in the amputee's stump. The nerves can then provide sensory feedback to the patients in real time – almost like they do in a natural hand. Results show that amputees can effectively process tactile and position information received simultaneously via intraneural stimulation.

13 February 2019

Shape-Shifting Robot

This robot can melt and re-form its legs to change how it walks. It can produce different walking styles by melting and then re-solidifying its structure, helping it get around obstacles. This small, four-legged robot has a 3D-printed plastic structure with 'shape-morphing joints' that can be selectively melted and hardened to optimize its legs for different motions.

A wire that heats up when a voltage is applied is wrapped around each joint; it takes about 10 seconds for a joint to soften. This simple system lets the robot switch between a number of different leg positions so it can climb over, or lower itself beneath, obstacles. The approach could improve robots' capabilities without adding cost, weight, or complexity.

12 February 2019

US Army Soldiers Will Wear Microsoft’s HoloLens

Microsoft has won a $480 million deal to supply more than 100,000 augmented-reality HoloLens headsets to the US Army, Bloomberg reports. The Army plans to use the headsets for combat missions as well as training. The technology will be adapted to incorporate night vision and thermal sensing, offer hearing protection, monitor for concussion, and measure vital signs like breathing and readiness.

AR firm Magic Leap also bid for the contract, according to Bloomberg. HoloLens is already used for training by the US and Israeli militaries, but this would be the first time it has been used in live combat. It is another example of how AR is being adopted far more enthusiastically by organizations than by consumers. The deal is more good news for Microsoft, which recently overtook Apple as the world's most valuable company.
