30 January 2020

Meena Digital Assistant

With an AI project called Meena, Google is attempting to build the first digital assistant that can truly hold a conversation. Digital assistants like Alexa and Siri are programmed to pick up keywords and provide scripted responses. Google has previously demonstrated its work towards more natural conversation with its Duplex project, but Meena should offer another leap forward.


Meena is a neural network with 2.6 billion parameters, and Google claims it can handle multiple turns in a conversation. It builds on the Transformer, a neural network architecture released by Google in 2017 that is widely acknowledged to be among the best-performing architectures for language modelling; a variation of the Transformer, trained on some 40 billion English words, was used to create Meena.
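As a rough illustration of what a Transformer-based language model looks like in code, here is a minimal PyTorch sketch. It is a toy decoder-style model and does not reflect Meena's actual architecture or its 2.6 billion parameters; every size below is an assumed, illustrative value.

```python
# Minimal sketch of a Transformer-based language model (illustration only).
# All sizes are toy values; this is not Meena's architecture.
import torch
import torch.nn as nn

class TinyTransformerLM(nn.Module):
    def __init__(self, vocab_size=32000, d_model=256, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        # Causal mask so each position only attends to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        hidden = self.encoder(self.embed(tokens), mask=mask)
        return self.lm_head(hidden)   # next-token logits

model = TinyTransformerLM()
logits = model(torch.randint(0, 32000, (1, 16)))   # (batch, sequence, vocab)
print(logits.shape)
```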

More information:

28 January 2020

Wi-Fi Collaborative SLAM

Researchers have developed new methods for simultaneous localization and mapping (SLAM) that can be used to construct or update maps of a given environment in real time while simultaneously tracking an artificial agent or robot's location within those maps. Most existing SLAM approaches rely heavily on range-based or vision-based sensors to sense both the environment and the robot's movements. These sensors can be very expensive and typically require significant computational power to operate properly. Researchers at the Singapore University of Technology and Design, Southwest University of Science and Technology, the University of Moratuwa and Nanyang Technological University have recently developed a new technique for collaborative SLAM that does not rely on range-based or vision-based sensors, which could enable more effective robot navigation within unknown indoor environments at a lower cost than most previously proposed methods. The researchers developed an approach for collaborative simultaneous localization and radio fingerprint mapping called C-SLAM-RF. Their technique works by crowdsensing Wi-Fi measurements in large indoor environments and then using these measurements to generate maps or locate artificial agents.


The system developed by the researchers receives information about the strength of the signal coming from pre-existing Wi-Fi access points spread around a given environment, as well as from pedestrian dead reckoning (PDR) processes (i.e., calculations of someone's current position) derived from a smartphone. It then uses these signals to build a map of the environment without requiring prior knowledge of the environment or of the distribution of the access points within it. The C-SLAM-RF tool devised by the researchers can also determine whether the robot has returned to a previously visited location, known as ‘loop closure’, by assessing the similarity between different signals' radio fingerprints. The researchers tested their technique in an indoor environment with an area of 130 x 70 meters. Their results were highly promising, as their system's performance exceeded that of several other existing SLAM techniques, often by a considerable margin. In the future, the approach for collaborative SLAM devised by this team could help to enhance robot navigation in unknown environments. In addition, the fact that it does not require expensive sensors and relies on existing Wi-Fi hotspots makes it a more feasible solution for large-scale implementations.
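As a rough illustration of the loop-closure idea, the sketch below compares Wi-Fi received-signal-strength (RSS) fingerprints and flags trajectory positions whose fingerprints look alike. It is not the authors' C-SLAM-RF implementation; the similarity measure, its scale, the threshold and the toy data are all assumptions made for illustration.

```python
# Loop-closure detection from Wi-Fi RSS fingerprints (illustrative sketch only).
import math

def fingerprint_similarity(fp_a, fp_b, missing_rssi=-100.0, scale_db=10.0):
    """Similarity in (0, 1] between two fingerprints (dicts: AP id -> RSS in dBm)."""
    aps = set(fp_a) | set(fp_b)
    if not aps:
        return 0.0
    # Average per-AP signal-strength difference, with a floor value for access
    # points heard in only one of the two fingerprints.
    mean_abs_diff = sum(
        abs(fp_a.get(ap, missing_rssi) - fp_b.get(ap, missing_rssi)) for ap in aps
    ) / len(aps)
    return math.exp(-mean_abs_diff / scale_db)   # 1.0 for identical fingerprints

def detect_loop_closures(trajectory, threshold=0.8, min_gap=10):
    """Return index pairs whose fingerprints suggest the agent revisited a place."""
    closures = []
    for i in range(len(trajectory)):
        for j in range(i + min_gap, len(trajectory)):
            if fingerprint_similarity(trajectory[i], trajectory[j]) >= threshold:
                closures.append((i, j))
    return closures

# Toy trajectory: the first and last fingerprints were taken at almost the same spot.
start   = {"ap1": -40.0, "ap2": -70.0}
revisit = {"ap1": -42.0, "ap2": -69.0}
walk    = [{"ap1": -90.0 + k, "ap2": -55.0 - k} for k in range(20)]
print(detect_loop_closures([start] + walk + [revisit]))   # -> [(0, 21)]
```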

More information:

24 January 2020

Robot Grips Without Touching

ETH researchers are developing a robotic gripper that can manipulate small and fragile objects without touching them. The technology is based on sound waves. Conventional robotic grippers are prone to damaging fragile objects. To counter this, soft, rubber-like grippers can be used. Although these cause no damage, they are easily contaminated, like a well-used rubber eraser. Additionally, these soft robotic grippers only offer limited positioning accuracy.


Gripping without touching is the principle behind this research project (No-Touch Robotics). The technology is based on an effect that has been exploited for more than 80 years and was first used in space exploration. Ultrasound waves generate a pressure field that humans cannot see or hear. Pressure points are created as the acoustic waves overlay each other, and small objects can be trapped within these points. As a result, they seem to float freely in the air, in an acoustic trap.
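A quick back-of-the-envelope calculation shows where such an acoustic trap holds objects: the low-pressure nodes of a standing wave sit half a wavelength apart. The sketch below assumes a typical 40 kHz levitation transducer; the frequency and the speed of sound are assumed values, not figures from the ETH project.

```python
# Where an ultrasonic standing wave traps small objects (rough illustration).
SPEED_OF_SOUND = 343.0    # m/s in air at about 20 degC
FREQUENCY = 40_000.0      # Hz, a typical levitation transducer frequency (assumed)

wavelength = SPEED_OF_SOUND / FREQUENCY    # ~8.6 mm
node_spacing = wavelength / 2.0            # pressure nodes every half wavelength

print(f"wavelength  : {wavelength * 1000:.2f} mm")
print(f"trap spacing: {node_spacing * 1000:.2f} mm")
# Objects much smaller than the wavelength can settle in the low-pressure nodes
# between two opposing emitters, where they appear to float freely in the air.
```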

More information:

19 January 2020

Robots Built Using Frog Cells

A team of scientists has repurposed living cells (scraped from frog embryos) and assembled them into entirely new life-forms. These millimeter-wide ‘xenobots’ can move toward a target, perhaps pick up a payload, and heal themselves after being cut. The new creatures were designed on a supercomputer at UVM and then assembled and tested by biologists at Tufts University. The research designs completely biological machines from the ground up. Using months of processing time on the Deep Green supercomputer cluster at UVM's Vermont Advanced Computing Core, the team ran an evolutionary algorithm to create thousands of candidate designs for the new life-forms. Attempting to achieve a task assigned by the scientists, such as locomotion in one direction, the computer would, over and over, reassemble a few hundred simulated cells into myriad forms and body shapes. As the programs ran, the more successful simulated organisms were kept and refined, while failed designs were tossed out. After a hundred independent runs of the algorithm, the most promising designs were selected for testing.
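The evolutionary loop itself is conceptually simple. The sketch below shows a minimal version: random candidate designs are scored, the more successful ones are kept and mutated, and the failures are discarded. The bit-string "designs" and the placeholder fitness function are stand-ins; the UVM team evolved simulated cell arrangements scored on locomotion inside a physics simulation, which this sketch does not attempt to reproduce.

```python
# Minimal evolutionary algorithm loop (illustrative sketch, toy problem).
import random

DESIGN_LENGTH = 32    # toy genome size (assumed)
POPULATION = 50
GENERATIONS = 100

def random_design():
    return [random.randint(0, 1) for _ in range(DESIGN_LENGTH)]

def fitness(design):
    # Placeholder objective: in the real work this would be the distance a
    # simulated organism travels, evaluated in a physics simulation.
    return sum(design)

def mutate(design, rate=0.05):
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in design]

population = [random_design() for _ in range(POPULATION)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POPULATION // 2]            # keep the more successful designs
    offspring = [mutate(random.choice(survivors)) for _ in survivors]
    population = survivors + offspring                   # failed designs are tossed out

print("best fitness:", fitness(max(population, key=fitness)))
```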


Next, the researchers transferred the in silico designs into life. First, they gathered stem cells harvested from the embryos of African frogs, the species Xenopus laevis. These were separated into single cells and left to incubate. Then, using tiny forceps and an even tinier electrode, the cells were cut and joined under a microscope into a close approximation of the designs specified by the computer. Assembled into body forms never seen in nature, the cells began to work together. The skin cells formed a more passive architecture, while the once-random contractions of heart muscle cells were put to work creating ordered forward motion, guided by the computer's design and aided by spontaneous self-organizing patterns that allowed the robots to move on their own. These reconfigurable organisms were shown to be able to move in a coherent fashion and explore their watery environment for days or weeks, powered by embryonic energy stores. When turned over, however, they failed, and later tests showed that groups of xenobots would move around in circles.

More information:

17 January 2020

Contact Lens AR

Mojo Vision recently announced that it has packed microdisplays with 14,000 pixels per inch, wireless radios, image sensors, and motion sensors into contact lenses that fit comfortably in the eye. The first generation of Mojo Lenses is powered wirelessly, though future generations will have batteries on board. A small external pack, besides providing power, handles sensor data and sends information to the display. The company calls the technology Invisible Computing, and company representatives say it will get people’s eyes off their phones and back onto the world around them.


The first application will likely be for people with low vision, providing real-time edge detection and overlaying crisp lines around objects. The Mojo Lens from Mojo Vision uses a microdisplay, image sensor, and other electronics built into contact lenses to highlight the edges of nearby objects and to display text and other images to the wearer. Mojo Vision has yet to implement its planned eye-tracking technology with the lenses, but says that’s coming soon and will allow the wearer to control apps without relying on external devices.
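As a rough illustration of what such real-time edge highlighting involves, the sketch below runs OpenCV's Canny detector on a single grayscale frame and paints the detected edges back over the image as bright outlines. It is a generic example, not Mojo Vision's implementation; the file names and thresholds are arbitrary placeholders.

```python
# Generic single-frame edge-detection overlay (illustrative sketch only).
import cv2

frame = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder input image
blurred = cv2.GaussianBlur(frame, (5, 5), 0)            # suppress sensor noise
edges = cv2.Canny(blurred, 50, 150)                     # lower/upper hysteresis thresholds

# Draw the detected edges on the original frame as white outlines.
overlay = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
overlay[edges > 0] = (255, 255, 255)
cv2.imwrite("scene_edges.jpg", overlay)
```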

More information: