06 December 2025

3D Map Covering 2.75 Billion Buildings

Scientists at Technical University of Munich (TUM) have unveiled GlobalBuildingAtlas, the first global, high-resolution 3D map of Earth’s man-made environment. The atlas covers about 2.75 billion buildings around the world, using satellite imagery from 2019 and offering a resolution roughly 30 times finer than previous global building maps. 

Each structure is represented at a fine resolution of about 3 × 3 meters, enough to estimate building height, volume, and density. Around 97% of the buildings are modeled as simplified LoD1 (Level of Detail 1) geometries, essentially footprints extruded to a single height, not highly detailed but sufficient for large-scale computational modeling.
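Because an LoD1 building is just a footprint polygon extruded to one height, quantities like volume follow directly from the geometry. The sketch below illustrates that computation in Python using the shoelace formula; the data layout is hypothetical and not the atlas's actual schema.

```python
# Estimate building volume from LoD1 geometry: each building is a
# footprint polygon extruded to a single height.
# The data layout here is hypothetical, not the GlobalBuildingAtlas schema.

def footprint_area(vertices):
    """Shoelace formula for the area of a simple polygon (m^2)."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def lod1_volume(vertices, height_m):
    """LoD1 volume: footprint area times extrusion height (m^3)."""
    return footprint_area(vertices) * height_m

# Two toy buildings on a 3 m grid, matching the ~3 x 3 m resolution.
buildings = [
    ([(0, 0), (9, 0), (9, 6), (0, 6)], 12.0),    # 9 x 6 m, 12 m tall
    ([(12, 0), (18, 0), (18, 6), (12, 6)], 6.0), # 6 x 6 m, 6 m tall
]

total_volume = sum(lod1_volume(v, h) for v, h in buildings)
print(total_volume)  # 54*12 + 36*6 = 864.0 m^3
```

Summing such volumes over a city grid is what makes density estimates feasible at global scale.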

More information:

https://interestingengineering.com/innovation/first-high-resolution-3d-map

05 December 2025

AI Unlocks Medieval Jewish Manuscript Treasure Trove

Researchers working on the MiDRASH transcription project are using AI to unlock the vast holdings of the Cairo Geniza, a global archive of medieval Jewish manuscripts numbering over 400,000. Although the full collection has been digitized, only about a tenth of the documents had been transcribed before. Many items remained uncatalogued or existed only as fragmented images in Hebrew, Arabic, Aramaic, or Yiddish. The AI tool is now being trained to read and transcribe those ancient scripts, and to piece together disordered fragments into coherent documents.

The potential impact is enormous: with AI-enabled transcription and reconstruction, scholars can much more easily search, cross-reference and analyze these manuscripts. Already, for example, the project recovered a 16th-century Yiddish letter from a widow in Jerusalem to her son in Egypt, describing life during a plague, something that might have remained hidden without these tools. Ultimately, researchers hope this will allow a reconstruction of social, economic, religious, and intellectual life in medieval Jewish communities.

More information:

https://www.reuters.com/business/media-telecom/vast-trove-medieval-jewish-records-opened-up-by-ai-2025-11-26/

25 November 2025

Direct Access to Our Brains

Recent advances in neurotechnology, including wearable brain-computer interfaces (BCIs), Neuralink implants, and AI-driven neural decoding, are making it possible to translate brain activity into actions, speech, images and emotions, blurring the line between human cognition and digital systems. Devices ranging from MIT’s EEG-equipped glasses to Neuralink’s implanted chips demonstrate both the medical potential of BCIs and the growing commercial interest in them. These systems raise profound concerns: they can decode sensitive traits, track attention and emotion, and potentially manipulate mental states, opening possibilities for misuse by companies, governments or political actors.

As the neurotech industry rapidly expands, the risks of consumer devices collecting neural data with little regulation are becoming increasingly urgent. This growing capability has triggered global debates about neural privacy, cognitive liberty, and whether new neurorights are needed. Countries such as Chile and Spain, several U.S. states, and international bodies have begun exploring legal protections for identity, agency and mental privacy. Advocates argue that traditional human rights are insufficient for technologies that can read or alter neural processes, while others warn that proliferating new rights may cause legal confusion.

More information:

https://www.nytimes.com/2025/11/14/magazine/neurotech-neuralink-rights-regulations.html

24 November 2025

PropType AR Interface

Researchers developed PropType, a novel AR interface that allows users to turn everyday objects (e.g., water bottles, mugs, books or soda cans) into usable typing surfaces. Instead of relying on floating virtual keyboards or external hardware, PropType overlays a virtual keyboard layout onto a physical object being held or manipulated, leveraging the object’s real tactile feedback and adapting the layout to the object’s shape and how the user grips it.
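Adapting a layout to an object's shape amounts to mapping key positions onto its surface. The sketch below wraps one keyboard row around a cylindrical prop such as a can; it is purely illustrative geometry, not PropType's actual layout algorithm, and the arc and grip parameters are invented for the example.

```python
import math

# Illustrative sketch: wrap one keyboard row around a cylindrical prop
# (e.g. a soda can). Not PropType's actual layout algorithm.

def wrap_row_on_cylinder(keys, radius_m, row_height_m,
                         arc_deg=120.0, center_deg=0.0):
    """Place keys on a cylinder surface, spread evenly over an arc
    centred on the user's reach. Returns (key, (x, y, z)) in metres."""
    n = len(keys)
    positions = []
    for i, key in enumerate(keys):
        t = i / (n - 1) if n > 1 else 0.5       # 0..1 along the row
        ang = math.radians(center_deg - arc_deg / 2 + t * arc_deg)
        x = radius_m * math.cos(ang)            # cylinder cross-section
        y = radius_m * math.sin(ang)
        positions.append((key, (x, y, row_height_m)))
    return positions

# A 33 mm radius can, row of five keys 60 mm up the side.
layout = wrap_row_on_cylinder(list("QWERT"), radius_m=0.033,
                              row_height_m=0.06)
for key, (x, y, z) in layout:
    print(f"{key}: ({x:.3f}, {y:.3f}, {z:.3f})")
```

A real system would additionally shift the arc based on sensed grip, which is what the participant study informed.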

To create this system, the team conducted a study with 16 participants to understand how people hold different props and type using them; they then developed custom keyboard layouts and a configuration/editing tool so users can tailor their typing surface and visual feedback. Because people are already interacting with a tangible object, the approach promises better comfort (avoiding "gorilla arm" fatigue) and more intuitive text input in mobile or device-free AR scenarios.

More information:

https://interestingengineering.com/innovation/proptype-ar-interface-keyboard

21 November 2025

Simulation of How the Brain Works

Researchers from the Allen Institute in Seattle, together with collaborators in Japan, have created a highly detailed supercomputer simulation of the mouse cortex. They modeled nearly 10 million neurons with 26 billion synapses on Japan’s Fugaku supercomputer. Their simulation captures not just the broad structure, but also sub-cellular details: each neuron is represented as a tree of multiple interacting compartments. The program, called Neulite, was able to simulate one second of real-time brain activity in about 32 seconds of computing time, only about 32x slower than a living mouse, which is remarkable for a model of this scale and complexity.

Although this achievement is a major technical milestone, the scientists emphasize that it’s still a long way from modeling a full and biologically realistic brain. Their current simulation lacks important features like plasticity (how neurons rewire themselves) and neuromodulators (molecules that change how neurons behave). It also doesn’t yet capture detailed sensory inputs. The long-term ambition, however, is to simulate an entire brain and not just the cortex. For reference, while the simulated cortex has about 10 million neurons, a full mouse brain would have around 70 million, and a human cortex alone contains around 21 billion neurons.

More information:

https://www.geekwire.com/2025/simulation-mouse-brain/

16 November 2025

Mobile AI Audio Guide for Navigation by Blind People

An AI-powered navigation app is transforming daily mobility for people who are visually impaired. By providing real-time audio descriptions of nearby shops, obstacles, traffic lights, vehicles, and pedestrians, the app offers a level of environmental awareness that traditional tools such as white canes, tactile paving, and audible signals cannot fully guarantee, especially as quiet hybrid cars and reduced nighttime sound signals make navigation more challenging. Users report feeling safer, more independent, and more confident, even when traveling unfamiliar routes or returning home late at night.

The app was developed by a Japanese technology company that created an AI model trained to recognize key road features and guide users through voice instructions. Released in 2023 and downloaded tens of thousands of times, the app offers free core features such as route guidance and obstacle detection. Challenges remain, such as difficulty detecting downward steps and GPS errors in dense urban environments, but the developers plan to continue improving accuracy and expanding functionality to support greater mobility and quality of life for visually impaired users.

More information:

https://www.asahi.com/ajw/articles/16120141

15 November 2025

ISMAR 2025 Article

Recently, a paper I co-authored with colleagues from CYENS was presented at the 2025 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) in Daejeon, Korea. The paper is entitled "VR as a 'Drop-In' Well-Being Tool for Knowledge Workers" and explores how VR can meet the diverse physical and mental needs of knowledge workers. We developed Tranquil Loom, a VR app offering stretching, guided meditation, and open exploration across four environments. The app includes an AI assistant that suggests activities based on users’ emotional states.

We conducted a two-phase mixed-methods study: (1) interviews with 10 knowledge workers to guide the app's design, and (2) deployment with 35 participants gathering usage data, well-being measures, and interviews. Results showed increases in mindfulness and reductions in anxiety. Participants enjoyed both structured and open-ended activities, often using the app playfully. While AI suggestions were used infrequently, they prompted ideas for future personalization. Overall, participants viewed VR as a flexible, 'drop-in' tool, highlighting its value for situational rather than prescriptive well-being support.

More information:

https://www.computer.org/csdl/proceedings-article/ismar/2025/876100b213/2byA7RS10ze

14 November 2025

Portable Observatory Monitors Eruptions

Researchers with Istituto Nazionale di Geofisica e Vulcanologia (INGV) deployed a suitcase-sized portable observatory named Setup for the Kinematic Acquisition of Explosive Eruptions (SKATE) on the volcano Stromboli in Italy. The device is equipped with high-speed cameras, thermal sensors, acoustic sensors, and data-acquisition hardware, designed to autonomously monitor explosive eruptions.

SKATE records synchronized thermal, visual and acoustic data, greatly reducing the time scientists must spend in hazardous zones and enabling analysis of more than a thousand explosion events between 2019 and 2024. The detailed high-frame-rate and multiparametric data help volcanologists better understand eruption dynamics and may feed into training libraries for automated warning systems.

More information:

https://spectrum.ieee.org/volcano-monitoring-stromboli-skate