29 March 2018

Driving a Real Car Using VR

At Nvidia’s GTC conference, the company unveiled a wild technology demo straight out of Black Panther: a driver using virtual reality remotely controlled a car in real life. The driver sat on the stage of the convention center, wearing an HTC Vive and seated in a cockpit-like car with a steering wheel. A car was loaded in Nvidia’s Holodeck software, and a video feed then appeared showing a Ford Fusion behind the convention center.


The demo at the show was basic but it worked. The driver in VR had seemingly complete control over the vehicle and managed to drive it, live but slowly, around a private lot. He navigated around a van, drove a few hundred feet and parked the car, which was empty the whole time. Nvidia didn’t detail the platforms running the systems, nor did it announce availability; the demo was just a proof of concept.
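
Nvidia didn’t disclose its stack, but the general teleoperation pattern is simple: the VR cockpit streams steering and pedal commands to the drive-by-wire car while a live video feed comes back. The sketch below is purely illustrative and is not Nvidia’s Holodeck API; names such as read_cockpit_inputs() and VEHICLE_ADDR are hypothetical placeholders.

```python
# Hypothetical sketch of a VR teleoperation command loop (not Nvidia's stack).
import json
import socket
import time

VEHICLE_ADDR = ("192.0.2.10", 9000)   # placeholder address of the drive-by-wire car

def read_cockpit_inputs():
    """Placeholder for polling the steering wheel and pedals in the VR rig."""
    return {"steering": 0.0, "throttle": 0.1, "brake": 0.0}

def main():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        cmd = read_cockpit_inputs()
        cmd["timestamp"] = time.time()   # lets the car drop stale commands
        sock.sendto(json.dumps(cmd).encode(), VEHICLE_ADDR)
        time.sleep(0.02)                 # ~50 Hz command rate

if __name__ == "__main__":
    main()
```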

More information:

25 March 2018

Personalizing Wearable Devices

Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering have developed an efficient machine learning algorithm that can quickly tailor personalized control strategies for soft, wearable exosuits. The researchers used so-called human-in-the-loop optimization, which uses real-time measurements of human physiological signals, such as breathing rate, to adjust the control parameters of the device.
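
As a rough illustration of the human-in-the-loop idea, the sketch below iteratively perturbs a pair of control parameters and keeps whichever setting lowers an estimated metabolic cost. The measurement function is a placeholder for the real-time physiological estimate, and simple random local search stands in for the study's actual optimizer.

```python
# Minimal sketch of human-in-the-loop optimization of exosuit control parameters.
import random

def measure_metabolic_cost(params):
    """Placeholder: apply `params` to the exosuit, let the wearer walk,
    and estimate metabolic cost from breathing-rate / respirometry data."""
    peak_time, peak_force = params
    return (peak_time - 0.3) ** 2 + (peak_force - 0.5) ** 2   # toy surrogate

def optimize(n_iters=20, step=0.05):
    best = [random.random(), random.random()]      # initial control parameters
    best_cost = measure_metabolic_cost(best)
    for _ in range(n_iters):
        candidate = [p + random.uniform(-step, step) for p in best]
        cost = measure_metabolic_cost(candidate)
        if cost < best_cost:                        # keep parameters that lower cost
            best, best_cost = candidate, cost
    return best, best_cost

print(optimize())
```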

As the algorithm homed in on the best parameters, it directed the exosuit on when and where to deliver its assistive force to improve hip extension. The combination of the algorithm and suit reduced metabolic cost by 17.4 percent compared to walking without the device, a more than 60 percent improvement over the team's previous work. Next, the team aims to apply the optimization to a more complex device that assists multiple joints, such as the hip and ankle, at the same time.

More information:

24 March 2018

Robots Think and Plan Abstractly

Researchers from Brown University and MIT have developed a method for helping robots plan for multi-step tasks by constructing abstract representations of the world around them. Their study is a step toward building robots that can think and act more like people. For the study, the researchers introduced a robot named Anathema Device (or Ana, for short) to a room containing a cupboard, a cooler, a switch that controls a light inside the cupboard, and a bottle that could be left in either the cooler or the cupboard. They gave Ana a set of high-level motor skills for manipulating the objects in the room -- opening and closing both the cooler and the cupboard, flipping the switch and picking up a bottle. Then they turned Ana loose to try out her motor skills in the room, recording the sensory data from her cameras and actuators before and after each skill execution. Those data were fed into the machine-learning algorithm developed by the team. 
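
A rough sketch of that data-collection step, under assumed interfaces, is shown below: the robot tries each motor skill and logs the sensor readings before and after execution, and those records are what the learning algorithm consumes. robot.read_sensors() and skill.execute() are hypothetical stand-ins, not the authors' code.

```python
# Illustrative data collection: record sensory state before and after each skill.
import random

def collect_transitions(robot, skills, n_trials=100):
    transitions = []
    for _ in range(n_trials):
        skill = random.choice(skills)
        before = robot.read_sensors()     # camera pixels, actuator states, ...
        succeeded = skill.execute(robot)  # try the motor skill in the room
        after = robot.read_sensors()
        transitions.append((before, skill.name, after, succeeded))
    return transitions
```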


The researchers showed that Ana was able to learn a very abstract description of the environment that contained only what was necessary for her to be able to perform a particular skill. For example, she learned that in order to open the cooler, she needed to be standing in front of it and not holding anything (because she needed both hands to open the lid). She also learned the proper configuration of pixels in her visual field associated with the cooler lid being closed, which is the only configuration in which it's possible to open it. She learned similar abstractions associated with her other skills. She learned, for example, that the light inside the cupboard was so bright that it whited out her sensors. So in order to manipulate the bottle inside the cupboard, the light had to be off. She also learned that in order to turn the light off, the cupboard door needed to be closed, because the open door blocked her access to the switch.
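
One illustrative way to encode the kind of abstraction Ana ends up with (not the authors' actual representation) is as a set of symbolic preconditions and effects per skill, as in the cooler example:

```python
# Hypothetical encoding of a learned abstract skill as preconditions and effects.
from dataclasses import dataclass, field

@dataclass
class AbstractSkill:
    name: str
    preconditions: set = field(default_factory=set)
    add_effects: set = field(default_factory=set)
    del_effects: set = field(default_factory=set)

open_cooler = AbstractSkill(
    name="open_cooler",
    preconditions={"at(cooler)", "hands_free", "cooler_closed"},
    add_effects={"cooler_open"},
    del_effects={"cooler_closed"},
)

def applicable(skill, state):
    """A skill can run only when all of its learned preconditions hold."""
    return skill.preconditions <= state

print(applicable(open_cooler, {"at(cooler)", "hands_free", "cooler_closed"}))  # True
```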

More information:

18 March 2018

Visit to the Ship & Marine Hydrodynamics Lab

On the 7th of March 2018, I visited the Laboratory for Ship and Marine Hydrodynamics at the National Technical University of Athens, which belongs to the School of Naval Architecture and Marine Engineering. The Ship Model Towing Tank of the Laboratory, measuring 100 m x 5 m x 3.5 m, is unique in Greece and has been operational since 1979 at the Zographos campus. The carriage of the tank weighs 5 metric tons, is computer controlled and reaches a maximum speed of 5.5 m/s.


The activities of the Laboratory pertain to university education and research in the area of ship and marine hydrodynamics. At the same time (and in a second shift if necessary), the Laboratory covers the needs of the Greek shipbuilding and shipping industry, as well as similar needs of the public sector. The Laboratory is also very active in conducting sponsored research in the framework of Greek and European research programs.

More information:

17 March 2018

Self-Aware Virtual Predator

Scientists built an artificially intelligent ocean predator that behaves a lot like the original flesh-and-blood organism on which it was modeled. The virtual creature, Cyberslug, reacts to food and responds to members of its own kind much like the actual animal, the sea slug Pleurobranchaea californica, does.


Unlike most other AI entities, Cyberslug has a simple self-awareness. It relates its motivation and memories to its perception of the external world, and it reacts to information on the basis of how that information makes it feel. The model uses sophisticated algorithms to simulate Cyberslug's competing goals and decision-making.
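
As a toy illustration only (not the published model), the decision rule the article describes can be reduced to weighing an external stimulus against internal state (hunger) and learned memories of that stimulus:

```python
# Toy approach/avoid rule loosely inspired by the description above; all
# weights and names are illustrative assumptions, not the actual Cyberslug model.
def decide(stimulus_strength, learned_value, hunger):
    """Return 'approach' or 'avoid' from a simple weighted appetence score."""
    appetence = hunger * stimulus_strength + learned_value
    return "approach" if appetence > 0 else "avoid"

# A hungry slug approaches even a stimulus it mildly dislikes...
print(decide(stimulus_strength=1.0, learned_value=-0.3, hunger=0.8))  # approach
# ...but avoids it when satiated.
print(decide(stimulus_strength=1.0, learned_value=-0.3, hunger=0.1))  # avoid
```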

More information: