29 June 2019

Brain Cells for 3D Vision

Scientists at Newcastle University have discovered neurons in insect brains that compute 3D distance and direction. Understanding these neurons could help improve machine vision in robots. Praying mantises use 3D perception, scientifically known as stereopsis, for hunting. In a specially designed insect cinema, the mantises were fitted with 3D glasses and shown 3D movies of simulated bugs while their brain activity was monitored. When the image of a bug came into striking range for a predatory attack, the scientists were able to record the activity of individual neurons.


By using the disparity between the images on their two retinas, mantises are able to compute distance and trigger a strike of their forelegs when prey is within reach. The recorded neurons were stained, revealing their shape, which allowed the team to identify four classes of neuron likely to be involved in mantis stereopsis. The images, captured using a powerful microscope, show the dendritic tree of a nerve cell – where the nerve cell receives inputs from the rest of the brain – believed to enable this behaviour. The researchers hope mantises can help us develop simpler algorithms for machine vision.
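The article does not describe the underlying computation, but the geometric idea behind stereopsis is standard triangulation: distance follows from the disparity between the two retinal images. The sketch below is only illustrative; the baseline, focal length, and strike-range threshold are hypothetical values, not mantis measurements.

```python
# Minimal sketch of depth-from-disparity triangulation, the geometric idea
# behind stereopsis. All numbers below are illustrative assumptions.

def depth_from_disparity(baseline_m: float, focal_px: float, disparity_px: float) -> float:
    """Estimate distance to a target from the offset (disparity) between
    the positions of its image in the left and right eyes/cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return baseline_m * focal_px / disparity_px

# A small baseline and a large disparity imply a nearby target, which in a
# mantis could fall inside striking range (threshold below is hypothetical).
STRIKE_RANGE_M = 0.025
distance = depth_from_disparity(baseline_m=0.007, focal_px=120.0, disparity_px=40.0)
print(f"estimated distance: {distance:.3f} m, strike: {distance <= STRIKE_RANGE_M}")
```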

More information:

28 June 2019

AR Spider-Man by Sony

To promote the upcoming release of Spider-Man: Far From Home on July 4, Sony Pictures has published a mobile app for iOS and Android that brings Spider-Man into fans' homes via augmented reality. Leveraging the surface-detection capabilities of ARKit and ARCore, the AR Suit Explorer section of the app presents Spider-Man in his new red and black suit, which debuts in the film. When Spider-Man appears, he performs a few tricks, such as webbing the user's camera, taking a selfie with his smartphone, and dangling from the ceiling. The app also includes Spider-Man's new "ally" (and traditional comic book nemesis) Mysterio, who blasts green sci-fi energy from his hands, as well as two other suit variants: the red and blue suit and the homemade suit from the first feature film of the Tom Holland Spider-Man era.


With each AR model, users can scale and rotate the figure, as well as move the 3D content around their real-world space. The red and blue suit experience also includes hotspots that let users view additional details about the suit. Users can also take snapshots and video of the AR experience to share with others. In addition, the app offers other digital goodies, such as the movie's trailer, photos and videos from Peter Parker's phone, a camera with 2D filters, and a gallery of GIFs and stickers for messaging. Users will also find options in the app to purchase tickets to the movie and the Blu-ray or DVD of the film's predecessor, Spider-Man: Homecoming. The agency behind the app also developed AR experiences for the Holo app and for the 8th Wall web-based AR platform to promote Spider-Man movies.

More information:

26 June 2019

Teaching Robots What Humans Want

Researchers combined two different ways of setting goals for robots into a single process, which performed better than either of its parts alone in both simulations and real-world experiments. The team's new system for providing instruction to robots (known as reward functions) combines demonstrations, in which humans show the robot what to do, and user preference surveys, in which people answer questions about how they want the robot to behave. The researchers developed a way of producing multiple questions at once, which could be answered in quick succession by one person or distributed among several people. This update sped up the process by a factor of 15 to 50 compared with producing questions one by one.


The new combined system begins with a person demonstrating a behavior to the robot. Demonstrations can give autonomous robots a lot of information, but a robot often struggles to determine which parts of a demonstration are important. People also don't always want a robot to behave exactly like the human that trained it. For this study, the group used the slower single-question method, but they plan to integrate multiple-question surveys in later work. In tests, the team found that combining demonstrations and surveys was faster than specifying preferences alone and, compared with demonstrations alone, about 80 percent of people preferred how the robot behaved when trained with the combined system.
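The article does not spell out the learning algorithm, but a common way to combine demonstrations with preference surveys is to warm-start a reward estimate from the demonstration and then refine it with a logistic (Bradley-Terry) fit to pairwise survey answers. The sketch below is a hedged illustration of that idea with made-up features and data; it is not the team's actual method.

```python
# Hedged sketch of preference-based reward learning: a linear reward w·features
# is fit from pairwise survey answers ("trajectory A preferred over B"),
# warm-started from a demonstration. Features and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def reward(w, features):
    return features @ w

def fit_from_preferences(feature_pairs, answers, w_init, lr=0.1, steps=500):
    """Logistic (Bradley-Terry) fit: P(A preferred) = sigmoid(r(A) - r(B))."""
    w = w_init.copy()
    for _ in range(steps):
        grad = np.zeros_like(w)
        for (phi_a, phi_b), a_preferred in zip(feature_pairs, answers):
            diff = phi_a - phi_b
            p_a = 1.0 / (1.0 + np.exp(-reward(w, diff)))
            grad += (float(a_preferred) - p_a) * diff  # gradient of log-likelihood
        w += lr * grad / len(answers)
    return w

# Illustrative data: 3-dimensional trajectory features, hidden "true" preferences.
w_true = np.array([1.0, -0.5, 0.2])
pairs = [(rng.normal(size=3), rng.normal(size=3)) for _ in range(40)]
answers = [reward(w_true, a) > reward(w_true, b) for a, b in pairs]

w_demo = np.array([0.5, 0.0, 0.0])   # crude estimate taken from a demonstration
w_hat = fit_from_preferences(pairs, answers, w_demo)
print("recovered preference weights:", np.round(w_hat, 2))
```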

More information:

23 June 2019

Mind-Controlled Robot Arm Works Without a Brain Implant

A team from Carnegie Mellon University (CMU) created the first non-invasive mind-controlled robot arm that exhibits the kind of smooth, continuous motion previously reserved only for systems involving brain implants. Researchers used a combination of sensing and machine learning techniques to create a brain-computer interface (BCI) that could reach signals deep within the brains of participants wearing EEG headcaps.


To test their system, they asked the participants to use it to direct a robotic arm to point at a cursor as it moved around a computer screen. The robotic arm was able to continuously track the cursor in real time with no jerky movements – an exciting first for a non-invasive BCI system. While much of the focus on mind-controlled robots centers on people with movement disorders or paralysis, researchers envision a future in which the tech is ubiquitous, benefiting the population as a whole.
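The article does not detail the decoding pipeline, but continuous cursor tracking from EEG is often framed as a regression problem from multi-channel features to 2D coordinates. The sketch below illustrates that general idea with a ridge-regression decoder on synthetic data; it is not the CMU system.

```python
# Hedged sketch of continuous decoding in the spirit of a non-invasive BCI:
# a ridge-regression decoder maps EEG-like features to 2D cursor coordinates.
# The data are synthetic; the actual CMU sensing/learning pipeline is not shown.
import numpy as np

rng = np.random.default_rng(1)

n_samples, n_channels = 2000, 64
X = rng.normal(size=(n_samples, n_channels))               # stand-in EEG features
W_true = rng.normal(size=(n_channels, 2)) * 0.1            # unknown mapping to (x, y)
Y = X @ W_true + 0.05 * rng.normal(size=(n_samples, 2))    # noisy cursor positions

# Closed-form ridge regression: W = (X^T X + lambda I)^-1 X^T Y
lam = 1.0
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ Y)

# Decode new activity into a smooth, continuous cursor estimate.
x_new = rng.normal(size=(1, n_channels))
print("decoded cursor position:", (x_new @ W_hat).ravel())
```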

More information:

15 June 2019

Facebook Virtual Spaces to Improve AI and AR

Will virtual assistants be able to tell the difference between your living room and your kitchen? Or even help you find a missing book or set of keys? With embodied AI – which relies on data from physical surroundings – they soon might. Facebook has unveiled an open-source simulation platform and dataset it hopes will help researchers create more realistic AR and VR, and eventually virtual assistants that can learn about your physical surroundings. Facebook created a new open platform for embodied AI research called AI Habitat, while Facebook Reality Labs (which until last year was Oculus Research) released a dataset of photorealistic sample spaces called Replica. Both Habitat and Replica are now available for researchers to download on GitHub. With these tools, researchers can train AI bots to act, see, talk, reason and plan simultaneously.


The Replica dataset consists of 18 different sample spaces, including a living room, a conference room and a two-story house. By training an AI bot to respond to a command like 'bring my keys' in a Replica 3D simulation of a living room, researchers hope it can someday do the same with a physical robot in a real-life living room. A Replica simulation of a living room is meant to capture all the subtle details one might find in a real living room, from the velour throw on the sofa to the reflective decorative mirror on the wall. The 3D simulations are photorealistic; even surfaces and textures are captured in sharp detail, something Facebook says is essential to training bots in these virtual spaces. Some researchers have already taken Replica and AI Habitat for a test drive. Facebook AI recently hosted an autonomous navigation challenge on the platform.
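Habitat and Replica are available on GitHub; the sketch below only illustrates the kind of embodied-AI training loop such platforms support – an agent acting in a simulated scene and being rewarded for reaching a goal. The toy GridScene environment is a stand-in of my own and does not use the actual Habitat API.

```python
# Hedged sketch of an embodied-AI navigation loop. GridScene is a toy stand-in
# for a simulated indoor scene, not the AI Habitat API: the agent moves on a
# 5x5 grid and is rewarded for reaching a goal cell.
import random

class GridScene:
    SIZE = 5
    ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # up, down, right, left

    def reset(self):
        self.pos, self.goal = (0, 0), (4, 4)
        return self.pos

    def step(self, action_idx):
        dx, dy = self.ACTIONS[action_idx]
        x = min(max(self.pos[0] + dx, 0), self.SIZE - 1)
        y = min(max(self.pos[1] + dy, 0), self.SIZE - 1)
        self.pos = (x, y)
        done = self.pos == self.goal
        return self.pos, (1.0 if done else -0.01), done

# Random policy baseline: a learned navigation policy would replace the
# random action choice below.
env = GridScene()
obs, done, steps = env.reset(), False, 0
while not done and steps < 200:
    obs, reward, done = env.step(random.randrange(4))
    steps += 1
print(f"reached goal: {done} after {steps} steps")
```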

More information: