19 June 2020

Robots Learn Locomotion Behaviors from Human Demonstrations

Robots are a major part of our future, and researchers around the world have been working hard to enable smooth locomotion in humanoid and legged robots alike. Now a team of researchers from the University of Edinburgh in Scotland has put together a framework for training humanoid robots to walk just like us humans, using human demonstrations. The framework is built around a reward design that incorporates motion capture data of humans walking into the training process. It combines this with two specialized hierarchical neural architectures: a phase-functioned neural network (PFNN) and a mode-adaptive neural network (MANN).
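The phase-functioned idea can be illustrated with a small sketch: instead of fixed weights, each layer's weights are generated from the current gait phase by blending a few control weight sets. The sketch below is a minimal, illustrative assumption of that recipe (the layer sizes, random initialization, and Catmull-Rom blending are placeholders, not the team's actual implementation).

```python
import numpy as np

def catmull_rom(t, w0, w1, w2, w3):
    # Cubic Catmull-Rom interpolation between four control points, t in [0, 1].
    return (
        w1
        + t * (0.5 * w2 - 0.5 * w0)
        + t**2 * (w0 - 2.5 * w1 + 2.0 * w2 - 0.5 * w3)
        + t**3 * (1.5 * w1 - 0.5 * w0 - 1.5 * w2 + 0.5 * w3)
    )

class PhaseFunctionedLayer:
    """One layer whose weights are a smooth function of the gait phase."""

    def __init__(self, in_dim, out_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Four control weight sets spaced evenly around the phase cycle.
        self.W = rng.standard_normal((4, out_dim, in_dim)) * 0.1
        self.b = np.zeros((4, out_dim))

    def __call__(self, x, phase):
        # phase in [0, 1): pick the four neighboring control points and blend.
        k = 4.0 * phase
        i1 = int(k) % 4
        i0, i2, i3 = (i1 - 1) % 4, (i1 + 1) % 4, (i1 + 2) % 4
        t = k - int(k)
        W = catmull_rom(t, self.W[i0], self.W[i1], self.W[i2], self.W[i3])
        b = catmull_rom(t, self.b[i0], self.b[i1], self.b[i2], self.b[i3])
        return np.maximum(W @ x + b, 0.0)  # ReLU activation

layer = PhaseFunctionedLayer(in_dim=8, out_dim=16)
state = np.ones(8)
out_start = layer(state, phase=0.0)  # weights at the start of the gait cycle
out_mid = layer(state, phase=0.5)    # a different effective weight set mid-cycle
```

The key design point is that the phase varies smoothly through the walking cycle, so the effective weights (and hence the policy's output) vary smoothly as well, which helps produce natural, periodic motion.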


The framework even enabled the humanoid robots to keep their footing on uneven ground and to recover from external pushes. The team's findings suggest that expert demonstrations, such as recordings of humans walking, can substantially improve deep reinforcement learning techniques for training robots in a range of locomotion styles. Ultimately, such robots could move as swiftly and easily as humans while exhibiting more natural, human-like behaviors. So far the research has been carried out entirely in simulation; the next step is to test the framework on real hardware.

More information:

18 June 2020

Olfactory VR

OVR Technology has introduced the Architecture of Scent (AOS) to bring olfaction to VR through hardware, software, and scentware components. After two and a half years of development, the company's primary use cases are in healthcare, training, and education, with an eye towards gaming and immersive experiences for the consumer audience.


Capturing scents often means going to the place of origin, sampling the air and ingredients, teasing out their different components, and then replicating the scents in the lab; a single geographical area can require dozens to hundreds of tests. OVR Technology's system is also designed to recreate complex sensory environments, rather than just one scent at a time.

More information:

14 June 2020

Underwater WiFi

With current technology, divers communicate using hand signals, radio, or acoustic or visible-light signals. While these allow effective communication, each has its limitations. Acoustic signals support long distances but only very limited data rates. Visible light can travel far and carry lots of data, but the narrow beams require a clear line of sight between transmitter and receiver. Radio, meanwhile, can carry data only over short distances underwater. At the moment, streaming video from under the sea simply isn't practical. Researchers from King Abdullah University of Science and Technology in Saudi Arabia have built an underwater wireless system they've dubbed Aqua-Fi. Aqua-Fi supports internet services, such as sending multimedia messages, via either LEDs or lasers.


The LEDs provide a low-energy, short-distance communication option, while lasers need more power but can carry data further. The researchers built a prototype using green LEDs and a 520-nanometer laser. Both were used to send data from a small computer to a light detector connected to another computer. The first computer converted photos and videos into a series of 1s and 0s, which were then transmitted via a light beam switching on and off at very high speed. The light detector senses the variations in light intensity and translates them back into 1s and 0s, which the receiving computer converts back into the original footage or other multimedia. During their tests, the team recorded a maximum data transfer speed of 2.11 megabytes per second and an average round-trip delay of 1.00 millisecond.
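The encoding scheme described above, where a light source switches on and off to represent 1s and 0s and a detector thresholds the received intensity, can be sketched in a few lines. This is a simplified on-off keying model under illustrative assumptions (the sample counts, light levels, and threshold are made up, and a real link would add framing and error correction).

```python
def to_bits(data: bytes):
    # Flatten the message into a stream of 1s and 0s, most significant bit first.
    return [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]

def transmit(bits, on_level=1.0, off_level=0.05, samples_per_bit=4):
    # The LED or laser switches on and off: each bit becomes a burst of
    # light-intensity samples at either the "on" or "off" level.
    signal = []
    for bit in bits:
        signal.extend([on_level if bit else off_level] * samples_per_bit)
    return signal

def receive(signal, threshold=0.5, samples_per_bit=4):
    # The photodetector averages each burst and thresholds the intensity
    # to decide whether a 1 or a 0 was sent.
    bits = []
    for i in range(0, len(signal), samples_per_bit):
        chunk = signal[i:i + samples_per_bit]
        bits.append(1 if sum(chunk) / len(chunk) > threshold else 0)
    return bits

def to_bytes(bits):
    # Reassemble the recovered bit stream into bytes.
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[i:i + 8]))
        for i in range(0, len(bits), 8)
    )

message = b"Aqua-Fi"
recovered = to_bytes(receive(transmit(to_bits(message))))
print(recovered)  # prints b'Aqua-Fi'
```

Sampling each bit several times and averaging, as the sketch does, is one simple way a receiver can tolerate noise on the optical channel.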

More information: