16 May 2022

Improved AR Experiences Through Google's ARCore Geospatial API

The new Geospatial API gives app developers access to global localization for their AR applications, games, and experiences. Essentially, it takes Google's vast Street View imagery database, which comprises tens of billions of images, and uses machine learning to match it against what your camera sees and where your GPS says you are, allowing developers to anchor and overlay content at specific coordinates without having to scan the physical space first. In other words, it extends Google's AR capabilities across the world using geolocation and Street View imagery.
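For a sense of the coordinate math behind this, here is a rough, self-contained Python sketch. It is not the ARCore API; the function names are hypothetical, and it only illustrates one standard way (WGS84 geodetic coordinates converted to a local East-North-Up frame) that a latitude/longitude anchor can become a local offset relative to the device's geolocated pose:

import math

# WGS84 ellipsoid constants
A = 6378137.0                # semi-major axis (m)
E2 = 6.69437999014e-3        # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    """Convert latitude/longitude/altitude to Earth-centered coordinates."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + alt_m) * math.sin(lat)
    return x, y, z

def enu_offset(device, anchor):
    """East-North-Up offset (meters) of an anchor relative to the device.

    Both arguments are (lat_deg, lon_deg, alt_m) tuples; the result is the
    local translation at which anchored AR content would be rendered.
    """
    dx, dy, dz = (a - d for a, d in zip(geodetic_to_ecef(*anchor),
                                        geodetic_to_ecef(*device)))
    lat, lon = math.radians(device[0]), math.radians(device[1])
    east = -math.sin(lon) * dx + math.cos(lon) * dy
    north = (-math.sin(lat) * math.cos(lon) * dx
             - math.sin(lat) * math.sin(lon) * dy + math.cos(lat) * dz)
    up = (math.cos(lat) * math.cos(lon) * dx
          + math.cos(lat) * math.sin(lon) * dy + math.sin(lat) * dz)
    return east, north, up

# Content anchored ~100 m north of the device lands at north ≈ +100.
print(enu_offset((37.4220, -122.0841, 0.0), (37.4229, -122.0841, 0.0)))

In practice the Geospatial API hides all of this: the developer supplies coordinates, and the Street View matching corrects the raw GPS estimate so the anchor lands where intended.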

According to Google, a few apps are already using the Geospatial API. Bird and Lime are using it for their e-scooters and e-bikes, and Telstra and Accenture are using it to help users navigate stadiums without getting lost. Finally, DOCOMO is leveraging it for a new game involving virtual dragons and robots. If you're a developer and want to give it a shot in your app, you can get started and read more on the ARCore developer website. Google has also made available a couple of open-source demos you can play around with.

More information:

https://www.androidpolice.com/google-arcore-geospatial-api-street-view-ar-experience/

5 May 2022

MicroLED Startup Raxium Acquired by Google

Google announced it has acquired microLED (µLED) designer Raxium. The acquisition was first reported by The Information in March, but Google has now confirmed in a blog post that it has indeed acquired Raxium, a five-year-old startup building microdisplays for use in AR and VR headsets. Raxium is expected to help Google create lighter, cheaper displays for its upcoming AR devices.

While the conventional Super AMOLED displays found in smartphones measure around 50 µm per pixel, Raxium says it has shrunk its microdisplays to feature µLEDs measuring 3.5 µm per pixel. The company claims its technology has achieved an efficiency 5X greater than the previously published world record. Google is undoubtedly gearing up to release XR headsets of some type in the future, which may compete with devices from Apple, Meta, Microsoft, and Snap.
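To put those pixel pitches in perspective, a quick back-of-the-envelope calculation (illustrative only, derived from nothing but the figures quoted above):

# Back-of-the-envelope comparison of the quoted pixel pitches.
OLED_PITCH_UM = 50.0    # conventional Super AMOLED pitch (quoted above)
ULED_PITCH_UM = 3.5     # Raxium's claimed microLED pitch

# Pixels per inch: 25,400 µm per inch divided by the pixel pitch.
oled_ppi = 25_400 / OLED_PITCH_UM   # ~508 PPI
uled_ppi = 25_400 / ULED_PITCH_UM   # ~7,257 PPI

# Areal pixel density scales with the square of the linear shrink factor.
density_ratio = (OLED_PITCH_UM / ULED_PITCH_UM) ** 2  # ~204x more pixels/area

print(f"{oled_ppi:.0f} PPI vs {uled_ppi:.0f} PPI ({density_ratio:.0f}x denser)")

That roughly 200-fold jump in areal density is what makes µLED attractive for near-eye displays, where the screen sits millimeters from a magnifying lens.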

More information:

https://www.roadtovr.com/google-microled-ar-vr-xr-raxium/

27 April 2022

VR Police Training

Axon has announced the acquisition of VR studio Foundry 45, which it says will bolster its VR training offerings. Axon is the company behind the well-known Taser stun guns, which are employed by police and military forces around the world. More recently, the company has also focused on body cams and software for the administration and management of public safety organizations.

Axon has a vested interest in making sure the users of its Taser products are well trained, not just for the safety of users and targets, but also for liability reasons and the company's image. The promise of VR training is not only that it can feel more real, but also that it can be cheaper and easier for public safety organizations to deploy, allowing more training time across a broader range of scenarios with less overhead.

More information:

https://www.roadtovr.com/axon-taser-foundry-45-acquisition-vr-training/

26 April 2022

Deep Learning Tracks Animals

The ability to capture the behavior of animals is critical for neuroscience, ecology, and many other fields. Cameras are ideal for capturing fine-grained behavior, but developing computer vision techniques to extract the animal's behavior is challenging, even though this seems effortless for our own visual system. One of the key aspects of quantifying animal behavior is pose estimation. In a lab setting, it is possible to assist pose estimation by placing markers on the animal's body, as in the motion-capture techniques used in movies. But as one can imagine, getting animals to wear specialized equipment is not the easiest task, and it is downright impossible, and unethical, in the wild.

For this reason, researchers at EPFL have been pioneering markerless tracking for animals. Their software relies on deep learning to teach computers to perform pose estimation without the need for physical or virtual markers. The teams have been developing DeepLabCut, an open-source, deep-learning "animal pose estimation package" that can perform markerless motion capture of animals. Since its release in 2018, the software has gained significant traction in the life sciences, with over 350,000 downloads and nearly 1,400 citations. In 2020, the Mathis teams released DeepLabCut-Live!, which allows researchers to give real-time feedback to the animals they are studying.
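For the curious, the typical DeepLabCut workflow looks roughly like the following Python sketch (project name, paths, and video files are placeholders; consult the project's documentation for the authoritative API):

import deeplabcut

# Create a project: defines the tracked body parts and collects the videos.
config = deeplabcut.create_new_project(
    "mouse-openfield", "researcher", ["videos/session1.mp4"],
    copy_videos=True,
)

# Extract a small set of frames and hand-label them (opens an interactive
# labeling GUI) to serve as training data.
deeplabcut.extract_frames(config)
deeplabcut.label_frames(config)

# Train and evaluate the pose-estimation network on the labeled frames.
deeplabcut.create_training_dataset(config)
deeplabcut.train_network(config)
deeplabcut.evaluate_network(config)

# Run markerless pose estimation on new videos and visualize the result.
deeplabcut.analyze_videos(config, ["videos/session2.mp4"])
deeplabcut.create_labeled_video(config, ["videos/session2.mp4"])

The appeal is that only a few dozen hand-labeled frames are needed before the network can track the same body parts across hours of new footage, with no markers on the animal at all.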

More information:

https://actu.epfl.ch/news/time-to-get-social-tracking-animals-with-deep-lear/