25 November 2025

Direct Access to Our Brains

Recent advances in neurotechnology, including wearable brain-computer interfaces (BCIs), Neuralink implants, and AI-driven neural decoding, are making it possible to translate brain activity into actions, speech, images and emotions, blurring the line between human cognition and digital systems. Devices ranging from MIT’s EEG-equipped glasses to Neuralink’s implanted chips demonstrate both the medical potential of BCIs and the growing commercial interest in them. These systems raise profound concerns: they can decode sensitive traits, track attention and emotion, and potentially manipulate mental states, opening possibilities for misuse by companies, governments or political actors.

As the neurotech industry rapidly expands, the risks of consumer devices collecting neural data with little regulation are becoming increasingly urgent. This growing capability has triggered global debates about neural privacy, cognitive liberty, and whether new neurorights are needed. Countries such as Chile and Spain, several U.S. states, and international bodies have begun exploring legal protections for identity, agency and mental privacy. Advocates argue that traditional human rights are insufficient for technologies that can read or alter neural processes, while others warn that proliferating new rights may cause legal confusion.

More information:

https://www.nytimes.com/2025/11/14/magazine/neurotech-neuralink-rights-regulations.html

24 November 2025

PropType AR Interface

Researchers developed PropType, a novel AR interface that allows users to turn everyday objects (e.g., water bottles, mugs, books or soda cans) into usable typing surfaces. Instead of relying on floating virtual keyboards or external hardware, PropType overlays a virtual keyboard layout onto a physical object being held or manipulated, leveraging the object’s real tactile feedback and adapting the layout to the object’s shape and how the user grips it.

To create this system, the team conducted a study with 16 participants to understand how people hold different props and type on them; they then developed custom keyboard layouts and a configuration/editing tool so users can tailor their typing surface and visual feedback. Because people are already interacting with a tangible object, the approach promises better comfort (avoiding the ‘gorilla arm’ fatigue of prolonged mid-air typing) and more intuitive text input in mobile or device-free AR scenarios.
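
To make the core idea concrete, here is a minimal, hypothetical sketch of how a touch on a held cylindrical prop (say, a soda can) could be mapped to a key: the touch point is unwrapped onto a flat layout plane and matched to the nearest key. The key positions, can dimensions and function names are illustrative assumptions, not PropType’s actual implementation.

    # Hypothetical sketch: mapping a touch on a cylindrical prop to a key.
    import math

    # Keys on the "unwrapped" cylinder surface: (arc-length x, height y) in cm.
    KEY_LAYOUT = {
        "Q": (0.0, 9.0), "W": (1.9, 9.0), "E": (3.8, 9.0),
        "A": (0.0, 7.0), "S": (1.9, 7.0), "D": (3.8, 7.0),
    }

    CAN_RADIUS_CM = 3.3  # assumed prop geometry

    def unwrap(theta_rad, z_cm):
        """Flatten a point on the cylinder (angle, height) onto the layout plane."""
        return (theta_rad * CAN_RADIUS_CM, z_cm)

    def key_at(theta_rad, z_cm):
        """Return the key whose layout position is nearest to the touch point."""
        p = unwrap(theta_rad, z_cm)
        return min(KEY_LAYOUT, key=lambda k: math.dist(KEY_LAYOUT[k], p))

    print(key_at(theta_rad=0.55, z_cm=7.2))  # -> "S"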

More information:

https://interestingengineering.com/innovation/proptype-ar-interface-keyboard

21 November 2025

Simulation of How the Brain Works

Researchers from the Allen Institute in Seattle, together with collaborators in Japan, have created a highly detailed supercomputer simulation of the mouse cortex. They modeled nearly 10 million neurons with 26 billion synapses on Japan’s Fugaku supercomputer. Their simulation captures not just the broad structure, but also sub-cellular details: each neuron is represented as a tree of multiple interacting compartments. The program, called Neulite, simulated one second of brain activity in about 32 seconds of computing time, only about 32x slower than a living mouse, which is remarkable for a model of this scale and complexity.

Although this achievement is a major technical milestone, the scientists emphasize that it’s still a long way from modeling a full and biologically realistic brain. Their current simulation lacks important features like plasticity (how neurons rewire themselves) and neuromodulators (molecules that change how neurons behave). It also doesn’t yet capture detailed sensory inputs. The long-term ambition, however, is to simulate an entire brain and not just the cortex. For reference, while the simulated cortex has about 10 million neurons, a full mouse brain would have around 70 million, and a human cortex alone contains around 21 billion neurons.
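
As a rough illustration of what a “tree of multiple interacting compartments” means, the toy Python sketch below integrates a single passive multi-compartment neuron: current leaks across each compartment’s membrane and flows axially between neighboring compartments in the tree. All parameters are made up for illustration; Neulite itself models active ion channels and billions of synapses with far more careful numerics.

    # Toy passive multi-compartment neuron (explicit Euler integration).
    import numpy as np

    N = 5                      # compartments: soma (0) plus a dendrite chain
    parent = [-1, 0, 1, 2, 3]  # tree structure: compartment i attaches to parent[i]

    v = np.full(N, -65.0)      # membrane potential (mV)
    g_axial, g_leak, c_m = 0.5, 0.1, 1.0  # illustrative conductances/capacitance
    e_leak, dt = -65.0, 0.025             # leak reversal (mV), time step (ms)

    def step(v, i_inject):
        """Advance every compartment by one Euler step."""
        dv = g_leak * (e_leak - v) + i_inject
        for i in range(1, N):             # axial current between child and parent
            flow = g_axial * (v[parent[i]] - v[i])
            dv[i] += flow
            dv[parent[i]] -= flow
        return v + dt * dv / c_m

    i_inj = np.zeros(N)
    i_inj[0] = 2.0                        # inject current at the soma
    for _ in range(400):                  # 10 ms of simulated time
        v = step(v, i_inj)
    print(v.round(2))                     # depolarization spreads down the tree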

More information:

https://www.geekwire.com/2025/simulation-mouse-brain/

16 November 2025

Mobile AI Audio Guide Helps Blind People Navigate

An AI-powered navigation app is transforming daily mobility for people who are visually impaired. By providing real-time audio descriptions of nearby shops, obstacles, traffic lights, vehicles, and pedestrians, the app offers a level of environmental awareness that traditional tools such as white canes, tactile paving, and audible signals cannot fully guarantee, especially as quiet hybrid cars and sound signals that are muted at night make navigation more challenging. Users report feeling safer, more independent, and more confident, even when traveling unfamiliar routes or returning home late at night.

The app was developed by a Japanese technology company that created an AI model trained to recognize key road features and guide users through voice instructions. Released in 2023 and downloaded tens of thousands of times, the app offers free core features such as route guidance and obstacle detection. Challenges remain, such as difficulty detecting downward steps and GPS errors in dense urban environments, but the developers plan to continue improving accuracy and expanding functionality to support greater mobility and quality of life for visually impaired users.
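
As a purely illustrative sketch (not the company’s code), the snippet below shows the general shape of such a pipeline: per-frame object detections are ranked by an assumed urgency ordering and turned into short spoken alerts, with repeats suppressed. The class names, priorities and phrasing are all assumptions.

    # Hypothetical detection-to-speech loop for an audio navigation aid.
    from dataclasses import dataclass

    PRIORITY = {"vehicle": 0, "downward_step": 1, "pedestrian": 2,
                "traffic_light": 3, "shop": 4}

    @dataclass
    class Detection:
        label: str
        distance_m: float
        bearing: str  # e.g., "ahead", "left", "right"

    def announce(detections, already_spoken):
        """Return utterances for new detections, most urgent and closest first."""
        utterances = []
        ranked = sorted(detections,
                        key=lambda d: (PRIORITY.get(d.label, 9), d.distance_m))
        for d in ranked:
            key = (d.label, d.bearing)
            if key not in already_spoken:
                already_spoken.add(key)
                utterances.append(
                    f"{d.label.replace('_', ' ')} {d.distance_m:.0f} meters {d.bearing}")
        return utterances

    spoken = set()
    frame = [Detection("shop", 12, "right"), Detection("vehicle", 6, "ahead")]
    print(announce(frame, spoken))  # the vehicle is announced before the shop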

More information:

https://www.asahi.com/ajw/articles/16120141

15 November 2025

ISMAR 2025 Article

Recently, a paper I co-authored with colleagues from CYENS was presented at the 2025 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) in Daejeon, South Korea. The paper is titled "VR as a 'Drop-In' Well-Being Tool for Knowledge Workers" and explores how VR can meet the diverse physical and mental needs of knowledge workers. We developed Tranquil Loom, a VR app offering stretching, guided meditation, and open exploration across four environments. The app includes an AI assistant that suggests activities based on users’ emotional states.

We conducted a two-phase mixed-methods study: (1) interviews with 10 knowledge workers to guide the app’s design, and (2) a deployment with 35 participants, gathering usage data, well-being measures, and interviews. Results showed increases in mindfulness and reductions in anxiety. Participants enjoyed both structured and open-ended activities, often using the app playfully. While AI suggestions were used infrequently, they prompted ideas for future personalization. Overall, participants viewed VR as a flexible, 'drop-in' tool, highlighting its value for situational rather than prescriptive well-being support.
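
For readers curious what state-to-activity suggestions could look like, here is a deliberately minimal rule-based sketch; the states, mappings and environment names are illustrative stand-ins, not the assistant logic described in the paper.

    # Minimal rule-based activity suggestion, keyed on a self-reported state.
    SUGGESTIONS = {
        "anxious":  ("guided meditation", "forest"),
        "tense":    ("stretching", "beach"),
        "restless": ("open exploration", "mountain lake"),
    }

    def suggest(state):
        activity, environment = SUGGESTIONS.get(state, ("open exploration", "forest"))
        return f"Try a short {activity} session in the {environment} environment."

    print(suggest("anxious"))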

More information:

https://www.computer.org/csdl/proceedings-article/ismar/2025/876100b213/2byA7RS10ze

14 November 2025

Portable Observatory Monitors Eruptions

Researchers with the Istituto Nazionale di Geofisica e Vulcanologia (INGV) deployed a suitcase-sized portable observatory named the Setup for the Kinematic Acquisition of Explosive Eruptions (SKATE) on the volcano Stromboli in Italy. The device is equipped with high-speed cameras, thermal sensors, acoustic sensors and data-acquisition hardware, and is designed to autonomously monitor explosive eruptions.

SKATE records synchronized thermal, visual and acoustic data, greatly reducing the time scientists must spend in hazardous zones and enabling analysis of more than a thousand explosion events between 2019 and 2024. The detailed high-frame-rate and multiparametric data help volcanologists better understand eruption dynamics and may feed into training libraries for automated warning systems.
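
A simplified sketch of the synchronization idea: every thermal, visual or acoustic sample is stamped against one shared clock, so that all samples falling in a window after a detected explosion onset can be grouped into a single multi-sensor event record. Field names and grouping logic are assumptions for illustration, not SKATE’s actual data format.

    # Hypothetical multi-sensor capture with a shared monotonic clock.
    import time
    from collections import defaultdict

    def capture(sensor, payload, t0):
        """Stamp a sample with the time elapsed since acquisition start t0."""
        return {"sensor": sensor, "t": time.monotonic() - t0, "data": payload}

    def group_event(samples, onset_s, window_s=2.0):
        """Collect all samples within a window after a detected explosion onset."""
        event = defaultdict(list)
        for s in samples:
            if onset_s <= s["t"] < onset_s + window_s:
                event[s["sensor"]].append(s)
        return dict(event)

    t0 = time.monotonic()
    stream = [capture("thermal", b"frame-0", t0), capture("acoustic", b"chunk-0", t0)]
    print(group_event(stream, onset_s=0.0))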

More information:

https://spectrum.ieee.org/volcano-monitoring-stromboli-skate

09 November 2025

Knitting Machine Functions like a 3D Printer

A new prototype of a knitting machine creates solid, knitted shapes, adding stitches in any direction – forward, backward and diagonally – so users can construct a wide variety of shapes and add stiffness to different parts of the object.

Unlike traditional knitting, which yields a 2D sheet of stitches, this proof-of-concept machine – developed by researchers at Cornell and Carnegie Mellon University – functions more like a 3D printer, building up solid shapes with horizontal layers of stitches.
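
One way to picture the machine’s input is as a layered stitch plan: each horizontal layer is a list of stitches, and each stitch carries the direction it is knitted in. The sketch below is my own illustration of such a data structure, not the researchers’ toolchain; it plans a simple rectangular solid with alternating row directions.

    # Illustrative layered stitch plan for a solid knitted block.
    from dataclasses import dataclass

    @dataclass
    class Stitch:
        x: int
        y: int
        direction: str  # "forward", "backward", or "diagonal"

    def solid_block(width, depth, layers):
        """Plan a rectangular solid, alternating knit direction per row."""
        plan = []
        for _ in range(layers):
            layer = []
            for y in range(depth):
                d = "forward" if y % 2 == 0 else "backward"
                xs = range(width) if d == "forward" else range(width - 1, -1, -1)
                layer.extend(Stitch(x, y, d) for x in xs)
            plan.append(layer)
        return plan

    print(len(solid_block(width=4, depth=3, layers=2)[0]))  # 12 stitches per layer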

More information:

https://news.cornell.edu/stories/2025/11/knitting-machine-makes-solid-3d-objects

06 November 2025

AI Creates Fast Detailed 3D Maps

MIT researchers have built a new AI system that allows robots to create detailed 3D maps of complex environments within seconds. The technology could transform how search-and-rescue robots navigate collapsed mines or disaster sites.

The system combines recent advances in machine learning with classical computer vision principles. It can process an unlimited number of images from a robot’s onboard cameras, generating accurate 3D reconstructions while estimating the robot’s position in real time.
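
One classical building block such a system relies on is rigid alignment: registering a newly reconstructed batch of 3D points against the existing map, which simultaneously yields the robot’s relative pose. The self-contained sketch below applies the standard Kabsch algorithm to synthetic points; it illustrates the geometric principle and is not MIT’s code.

    # Kabsch algorithm: best-fit rotation R and translation t mapping src onto dst.
    import numpy as np

    def kabsch(src, dst):
        cs, cd = src.mean(0), dst.mean(0)
        H = (src - cs).T @ (dst - cd)            # cross-covariance of centered points
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, cd - R @ cs

    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(100, 3))            # submap built from new images
    theta = 0.3                                  # ground-truth yaw of the robot
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0, 0.0, 1.0]])
    observed = cloud @ R_true.T + np.array([0.5, -0.2, 0.1])
    R, t = kabsch(cloud, observed)
    print(np.allclose(R, R_true), np.round(t, 2))  # recovered relative pose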

More information:

https://interestingengineering.com/innovation/ai-mapping-system-for-rescue-robots-mit