Showing posts with label Visualisation. Show all posts

14 August 2024

Spatial Computing to the Workplace

Looking Glass, an XR display manufacturer, recently revealed that new spatial display products will be available to enterprise audiences this month. The rollout expands the firm’s portfolio of 3D monitors, which give users a spatial computing desktop experience with additional integrated technologies, and will include 16-inch and 32-inch XR displays. Looking Glass’ new displays are a good example of how XR and spatial computing technologies are present in a variety of end devices, not just AR/VR/MR headsets.

The firm is also rolling out the Looking Glass Go, a simpler consumer display device. The new Looking Glass displays let users work on 3D-based workflows with real-time models that can be manipulated hands-on as an AR visualisation via tracking cameras. The displays suit use cases such as product design, and the 16-inch and 32-inch models allow professionals to collaborate around a single spatial display. Meanwhile, the Looking Glass Go lets consumers turn 2D images into 3D spatial visualisations.

More information:

https://www.xrtoday.com/augmented-reality/new-3d-monitors-brings-spatial-computing-to-the-workplace-this-month/

21 October 2023

3D Holographic Displays Based on Deep Learning

A team of researchers from Chiba University has proposed a novel deep-learning approach that further streamlines hologram generation by producing 3D images directly from regular 2D colour images captured with ordinary cameras. The approach employs three deep neural networks (DNNs) to transform a regular 2D colour image into data that can be used to display a 3D scene or object as a hologram. The first DNN takes a colour image captured with a regular camera as input and predicts the associated depth map, providing information about the image's 3D structure. The second DNN then uses both the original RGB image and the depth map created by the first DNN to generate a hologram.

The third DNN refines the hologram generated by the second DNN, making it suitable for display on different devices. The researchers found that the proposed approach processed data and generated holograms faster than a state-of-the-art graphics processing unit. In the near future, the approach could find applications in head-up and head-mounted displays for generating high-fidelity 3D imagery. Likewise, it could transform in-vehicle holographic head-up displays, which may be able to present the necessary information about people, roads, and signs to passengers in 3D. The proposed approach is thus expected to pave the way for ubiquitous holographic technology.
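The three-stage pipeline described above can be sketched in code. This is a minimal illustration only: the function bodies are simple numpy placeholders standing in for the researchers' trained networks, and all names and operations here are assumptions for illustration, not the actual Chiba University models.

```python
import numpy as np

def predict_depth(rgb):
    """DNN 1 (placeholder): estimate a depth map from an RGB image.
    Stand-in: use per-pixel luminance as a fake depth cue."""
    return rgb.mean(axis=-1)

def generate_hologram(rgb, depth):
    """DNN 2 (placeholder): produce a complex hologram from RGB + depth.
    Stand-in: encode depth as a per-pixel phase delay on the amplitude."""
    phase = 2 * np.pi * depth
    amplitude = rgb.mean(axis=-1)
    return amplitude * np.exp(1j * phase)

def refine_hologram(holo):
    """DNN 3 (placeholder): adapt the hologram for a target display.
    Stand-in: normalise the amplitude into the display's range."""
    peak = np.abs(holo).max()
    return holo / peak if peak > 0 else holo

rgb = np.random.rand(64, 64, 3)        # ordinary 2D colour image
depth = predict_depth(rgb)             # stage 1: RGB -> depth map
holo = generate_hologram(rgb, depth)   # stage 2: RGB + depth -> hologram
display_holo = refine_hologram(holo)   # stage 3: refine for the display
```

The point of the structure is that each stage consumes only the previous stage's output plus the original image, so the whole chain runs from a single ordinary photograph with no special depth camera.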

More information:

https://www.cn.chiba-u.jp/en/news/press-release_e231018/

03 July 2023

3D Reconstruction of Old Maps

Researchers at Ohio State University (OSU), the Mid-Ohio Regional Planning Commission, and Chicago-based marketing solutions provider Epsilon have converted old Sanborn Fire Insurance maps into three-dimensional digital models of historic neighbourhoods with a new machine learning (ML) technique. The researchers tested the technique on two adjacent neighbourhoods on the near east side of Columbus, Ohio, that were largely destroyed in the 1960s to make way for the construction of I-70. One of the neighbourhoods, Hanford Village, was developed in 1946 to house returning Black veterans of World War II. The other, Driving Park, also housed a thriving Black community until I-70 split it in two. The researchers used 13 Sanborn maps for the two neighbourhoods produced in 1961, just before I-70 was built. The ML technique was able to extract the data from the maps and create digital models.

Comparing data from the Sanborn maps to the present day showed that a total of 380 buildings were demolished in the two neighbourhoods for the highway, including 286 houses, 86 garages, five apartments and three stores. Analysis of the results showed that the machine learning model was very accurate in recreating the information contained in the maps: about 90% accurate for building footprints and construction materials. Using the techniques developed for this study, researchers could build similar 3D models for nearly any of the 12,000 cities and towns that have Sanborn maps. This would allow researchers to re-create neighbourhoods lost to natural disasters such as floods, as well as to urban renewal, depopulation, and other types of change. Because the Sanborn maps include information on the businesses that occupied specific buildings, researchers could re-create digital neighbourhoods to determine the economic impact of losing them to urban renewal or other factors.
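To make the footprint-extraction idea concrete, here is a toy sketch. The study's actual model is a trained ML system; below, a simple intensity threshold stands in for footprint extraction from a scanned map patch, and intersection-over-union (an assumed metric, since the article only reports "about 90% accurate") stands in for the footprint-accuracy measure. All names and values are illustrative.

```python
import numpy as np

def extract_footprints(map_patch, threshold=0.5):
    """Placeholder extractor: dark ink on a scanned Sanborn map -> building mask."""
    return map_patch < threshold

def footprint_iou(predicted, ground_truth):
    """Overlap between predicted and true footprint masks (intersection / union)."""
    intersection = np.logical_and(predicted, ground_truth).sum()
    union = np.logical_or(predicted, ground_truth).sum()
    return intersection / union if union else 1.0

# Synthetic 10x10 map patch: a 4x4 "building" drawn in dark ink (value 0.2)
# on white paper (value 1.0), with a matching ground-truth mask.
patch = np.ones((10, 10))
patch[3:7, 3:7] = 0.2
truth = np.zeros((10, 10), dtype=bool)
truth[3:7, 3:7] = True

pred = extract_footprints(patch)
print(footprint_iou(pred, truth))  # 1.0 on this clean synthetic patch
```

On real scans the extractor would face faded ink, annotations, and overlapping symbols, which is why the study needed a trained model rather than a fixed threshold; the overlap metric, however, is a standard way to quantify footprint accuracy.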

More information:

https://news.osu.edu/turning-old-maps-into-3d-digital-models-of-lost-neighborhoods/