31 December 2025

XR4ED Invited Talk at UnitedXR Europe 2025

On the 8th of December 2025, I presented an overview and results of the XR4ED EU project at UnitedXR Europe in Brussels. XR4ED focuses on fostering innovation in education through extended reality (XR) technologies. The project created a sustainable, centralised platform where educators, developers, and learners can access XR tools, applications, and resources tailored for learning and training. By uniting the EdTech and XR communities across multiple EU member states, XR4ED addresses fragmentation in the digital education technology ecosystem and supports the development and market readiness of immersive educational solutions that go beyond traditional teaching methods.

The presentation highlighted project results, including efforts to create an open marketplace for XR content, support for start-ups and SMEs via open calls and grants, and links with related initiatives to strengthen Europe’s leadership in XR for education. The XR4ED platform is designed to enable personalised, innovative, and inclusive learning experiences, facilitating hands-on engagement and skills-based teaching. XR4ED also considers ethical, privacy, and inclusivity standards as part of its ecosystem, while encouraging adoption of immersive tools across schools, universities, industry training, and research communities.

More information:

https://youtu.be/tAi76BlXWis

30 December 2025

Virtual Reality Brings Connection and Joy to Senior Living

Virtual reality is being used in retirement communities to help older adults combat social isolation and enrich their daily lives. Residents at retirement communities in California use VR headsets to virtually explore new places, revisit meaningful memories, and take part in shared activities like underwater swims or concerts. These immersive experiences often spark conversation, improve cognitive engagement, and strengthen connections among peers who may otherwise struggle with loneliness.

Researchers and caregivers see VR as especially accessible for seniors compared with other technologies, and early evidence suggests it can support emotional well-being and social interaction without replacing traditional activities. They emphasize that while VR should complement rather than replace real-world engagement, it has shown potential benefits for those with memory challenges, including positive responses to virtual hikes and other simulations.

More information:

https://apnews.com/article/virtual-reality-senior-living-social-isolation-b20dc156f4aa0735d7f0cc7558de9bfc

27 December 2025

Bridging Photos and Floor Plans with Computer Vision

Cornell University researchers have developed a new computer-vision method, called C3Po, that enables machines to match real-world images with simplified building layouts such as floor plans with much greater accuracy. To train and evaluate the approach, the team compiled a large dataset called C3, containing about 90,000 paired photos and floor plans across nearly 600 scenes, with detailed annotations of pixel matches and camera poses.

By reconstructing scenes in 3D from large internet photo collections and aligning them to publicly available architectural drawings, the dataset teaches models how real images relate to abstract representations. In tests, C3Po reduced matching errors by about 34% compared with earlier methods, suggesting that this multi-modal training could help future vision systems generalize across varied inputs and advance 3D computer vision research.

More information:

https://news.cornell.edu/stories/2025/12/computer-vision-connects-real-world-images-building-layouts

23 December 2025

Sharpa’s Dexterous Robotic Hand Enters Mass Production

Sharpa Robotics has announced that its flagship SharpaWave dexterous robotic hand has entered mass production, a major milestone for scaling human-level robot manipulation technology. The Singapore-based company has transitioned to a rolling production process with automated testing systems to ensure the reliability of the thousands of microscale gears, motors, and sensors inside each unit. Initial shipments began in October, and the rollout is timed ahead of SharpaWave’s showcase as a CES 2026 Innovation Awards honoree. Designed to match the size, strength, and precision of the human hand, the device has already attracted orders from global tech firms as part of efforts to make general-purpose robots practical and deployable outside of labs. 

SharpaWave features 22 active degrees of freedom and integrates proprietary Dynamic Tactile Array technology that combines visual and tactile sensing to detect forces as small as 0.005 newtons, enabling adaptive grip control and slip prevention. The hand is supported by an open, developer-friendly ecosystem, including the SharpaPilot software that works with popular simulation platforms like Isaac Gym, PyBullet, and MuJoCo, along with reinforcement-learning tools to speed up experimentation and integration. Certified for durability through one million uninterrupted grip cycles and built with safety-enhancing, backdrivable joints, the platform aims to bridge research and real-world robotic applications from delicate object handling to more robust manipulation tasks.
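The slip-prevention idea described above can be sketched in a few lines. This is a hypothetical illustration, not Sharpa's actual control code or API; only the 0.005 N sensitivity figure comes from the article, and the `adjust_grip` function, its gain, and the readings are invented for the example.

```python
# Hypothetical sketch (not Sharpa's API): a grip controller that tightens
# when the tactile array reports shear above the sensor floor, using the
# 0.005 N sensitivity figure cited in the article.

SENSOR_RESOLUTION_N = 0.005  # smallest detectable force, per the article

def adjust_grip(current_force_n, shear_readings_n, gain=1.5):
    """Return an updated grip force (newtons) given tactile shear readings."""
    # Treat any shear reading above the sensor floor as possible slip.
    slip_signal = max(shear_readings_n) - SENSOR_RESOLUTION_N
    if slip_signal > 0:
        # Tighten proportionally to the detected slip signal.
        return current_force_n + gain * slip_signal
    return current_force_n  # grip is stable; hold force steady

stable = adjust_grip(1.0, [0.002, 0.004])   # below sensor floor: unchanged
tightened = adjust_grip(1.0, [0.105])       # slip detected: force increased
```

The design choice here mirrors the article's claim: sensing forces near the sensor floor lets the controller react to incipient slip before an object actually moves.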

More information:

https://interestingengineering.com/ai-robotics/sharpas-advanced-robotic-hand-enters-mass-production

16 December 2025

AI Co-Pilot for More Natural Prosthetic Hands

Researchers at the University of Utah have developed an AI co-pilot system for prosthetic bionic hands that uses advanced sensors and machine learning to make gripping and manipulation more intuitive and natural for users. By equipping commercial prosthetic hands with pressure and proximity sensors and training an AI model to interpret that data, the system can autonomously adjust finger positions and grip force in real time, significantly improving success rates in tasks like picking up fragile objects.

The shared-control approach balances human intention with AI assistance, reducing cognitive burden and addressing a major reason many amputees abandon their prosthetics. Early studies show greater dexterity and precision compared with traditional myoelectric control, and the team is exploring future enhancements like tighter neural integration to further blur the line between artificial and natural limb control as the technology moves toward real-world use.
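A minimal sketch of the shared-control idea, assuming a simple linear blend; the `shared_control` function, the `alpha` weight, and the example values are illustrative assumptions, not the Utah team's actual method.

```python
# Illustrative sketch (not the published system): shared control blends the
# user's command with an AI correction derived from pressure/proximity
# sensing. alpha weights human intent against machine assistance.

def shared_control(human_cmd, ai_cmd, alpha=0.7):
    """Blend human and AI grip commands; alpha=1.0 means full human control."""
    return alpha * human_cmd + (1 - alpha) * ai_cmd

# Example: the user commands a strong close (0.9) on a fragile object, while
# the AI, sensing high contact pressure, suggests easing off (0.3).
blended = shared_control(0.9, 0.3)  # a gentler grip than the raw command
```

The appeal of such a blend is exactly what the article describes: the user keeps the feeling of agency while the AI quietly absorbs the fine adjustments that otherwise impose a cognitive burden.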

More information:

https://arstechnica.com/ai/2025/12/scientists-built-an-ai-co-pilot-for-prosthetic-bionic-hands/

15 December 2025

Vine-Inspired Soft Robots That Lift Without Harm

MIT and Stanford engineers have created a soft, vine-like robotic gripper that uses inflatable tendrils to grow around, wrap, and gently lift objects from fragile items like glass vases to heavy loads like watermelons.

This bio-inspired design offers a gentler, more adaptable alternative to traditional rigid grippers and could be used in applications ranging from eldercare and patient transfers to agriculture, logistics, and industrial handling.

More information:

https://interestingengineering.com/ai-robotics/mit-stanford-robotic-vines-soft-gripper

06 December 2025

3D Map Covering 2.75 Billion Buildings

Scientists at Technical University of Munich (TUM) have unveiled GlobalBuildingAtlas, the first global, high-resolution 3D map of Earth’s man-made environment. The atlas covers about 2.75 billion buildings around the world, using satellite imagery from 2019 and offering a resolution roughly 30 times finer than previous global building maps. 

Each structure is represented at a fine resolution of about 3 × 3 meters, enough to estimate building height, volume, and density. Around 97% of the buildings are modelled as simplified 3D LoD1 geometries, not highly detailed, but sufficient for large-scale computational modelling.
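The LoD1 representation mentioned above reduces each building to a footprint extruded to a single height, which is what makes volume and density estimates tractable at this scale. A small sketch, using the atlas's stated ~3 × 3 m resolution; the function and example figures are illustrative, not the TUM pipeline.

```python
# Sketch of the LoD1 idea: a building is its footprint extruded to one
# height, so volume is footprint area times height. The 3 x 3 m cell size
# mirrors the resolution stated in the article.

PIXEL_AREA_M2 = 3 * 3  # one raster cell at ~3 x 3 m resolution

def lod1_volume(footprint_pixels, height_m):
    """Estimate building volume (m^3) from footprint pixel count and height."""
    return footprint_pixels * PIXEL_AREA_M2 * height_m

# A building covering 40 cells (360 m^2 footprint) and 12 m tall:
volume = lod1_volume(40, 12)  # 4320 m^3
```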

More information:

https://interestingengineering.com/innovation/first-high-resolution-3d-map

05 December 2025

AI Unlocks Medieval Jewish Manuscript Treasure Trove

Researchers working on the MiDRASH transcription project are using AI to unlock the vast holdings of the Cairo Geniza, a global archive of more than 400,000 medieval Jewish manuscripts. Although the full collection has been digitized, only about a tenth of the documents had previously been transcribed. Many items remained uncatalogued or existed only as fragmented images in Hebrew, Arabic, Aramaic, or Yiddish. The AI tool is now being trained to read and transcribe those ancient scripts and to piece together disordered fragments into coherent documents.

The potential impact is enormous: with AI-enabled transcription and reconstruction, scholars can much more easily search, cross-reference and analyze these manuscripts. Already, for example, the project recovered a 16th-century Yiddish letter from a widow in Jerusalem to her son in Egypt, describing life during a plague, something that might have remained hidden without these tools. Ultimately, researchers hope this will allow a reconstruction of social, economic, religious, and intellectual life in medieval Jewish communities.

More information:

https://www.reuters.com/business/media-telecom/vast-trove-medieval-jewish-records-opened-up-by-ai-2025-11-26/

25 November 2025

Direct Access to Our Brains

Recent advances in neurotechnology, including wearable brain-computer interfaces (BCIs), Neuralink implants, and AI-driven neural decoding, are making it possible to translate brain activity into actions, speech, images, and emotions, blurring the line between human cognition and digital systems. Devices ranging from MIT’s EEG-equipped glasses to Neuralink’s implanted chips demonstrate both the medical potential of BCIs and their growing commercial interest. These systems raise profound concerns: they can decode sensitive traits, track attention and emotion, and potentially manipulate mental states, opening possibilities for misuse by companies, governments, or political actors.

As the neurotech industry rapidly expands, the risks of consumer devices collecting neural data with little regulation are becoming increasingly urgent. This growing capability has triggered global debates about neural privacy, cognitive liberty, and whether new neurorights are needed. Countries such as Chile and Spain, several U.S. states, and international bodies have begun exploring legal protections for identity, agency and mental privacy. Advocates argue that traditional human rights are insufficient for technologies that can read or alter neural processes, while others warn that proliferating new rights may cause legal confusion.

More information:

https://www.nytimes.com/2025/11/14/magazine/neurotech-neuralink-rights-regulations.html

24 November 2025

PropType AR Interface

Researchers developed PropType, a novel AR interface that allows users to turn everyday objects (e.g., water bottles, mugs, books, or soda cans) into usable typing surfaces. Instead of relying on floating virtual keyboards or external hardware, PropType overlays a virtual keyboard layout onto a physical object being held or manipulated, leveraging the object’s real tactile feedback and adapting the layout to the object’s shape and how the user grips it.

To create this system, the team conducted a study with 16 participants to understand how people hold different props and type using them; they then developed custom keyboard layouts and a configuration/editing tool so users can tailor their typing surface and visual feedback. Because people are already interacting with a tangible object, the approach promises better comfort (avoiding gorilla arm fatigue) and more intuitive text input in mobile or device-free AR scenarios.

More information:

https://interestingengineering.com/innovation/proptype-ar-interface-keyboard

21 November 2025

Simulation of How the Brain Works

Researchers from the Allen Institute in Seattle, together with collaborators in Japan, have created a highly detailed supercomputer simulation of the mouse cortex. They modeled nearly 10 million neurons with 26 billion synapses on Japan’s Fugaku supercomputer. Their simulation captures not just the broad structure, but also sub-cellular details: each neuron is represented as a tree of multiple interacting compartments. The program, called Neulite, was able to simulate one second of real-time brain activity in about 32 seconds of computing time, only about 32x slower than a living mouse, which is remarkable for a model of this scale and complexity.

Although this achievement is a major technical milestone, the scientists emphasize that it’s still a long way from modeling a full and biologically realistic brain. Their current simulation lacks important features like plasticity (how neurons rewire themselves) and neuromodulators (molecules that change how neurons behave). It also doesn’t yet capture detailed sensory inputs. The long-term ambition, however, is to simulate an entire brain and not just the cortex. For reference, while the simulated cortex has about 10 million neurons, a full mouse brain would have around 70 million, and a human cortex alone contains around 21 billion neurons.
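The scale figures in the two paragraphs above are easy to sanity-check with a little arithmetic; the script below only restates numbers given in the article.

```python
# Back-of-the-envelope check of the article's figures: 10 million neurons
# with 26 billion synapses, simulated at 32 s of compute per 1 s of brain time.

neurons = 10_000_000
synapses = 26_000_000_000
slowdown = 32 / 1  # compute seconds per simulated second

synapses_per_neuron = synapses // neurons  # average connectivity: 2600
# The simulated cortex is ~1/7 of a full mouse brain by neuron count:
mouse_brain_neurons = 70_000_000
fraction = neurons / mouse_brain_neurons   # = 1/7
```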

More information:

https://www.geekwire.com/2025/simulation-mouse-brain/

16 November 2025

Mobile AI Audio Guide for Visually Impaired Navigation

An AI-powered navigation app is transforming daily mobility for people who are visually impaired. By providing real-time audio descriptions of nearby shops, obstacles, traffic lights, vehicles, and pedestrians, the app offers a level of environmental awareness that traditional tools such as white canes, tactile paving, and audible signals cannot fully guarantee, especially as quiet hybrid cars and reduced nighttime sound signals make navigation more challenging. Users report feeling safer, more independent, and more confident, even when traveling unfamiliar routes or returning home late at night.

The app was developed by a Japanese technology company that created an AI model trained to recognize key road features and guide users through voice instructions. Released in 2023 and downloaded tens of thousands of times, the app offers free core features such as route guidance and obstacle detection. Challenges remain, such as difficulty detecting downward steps and GPS errors in dense urban environments, but the developers plan to continue improving accuracy and expanding functionality to support greater mobility and quality of life for visually impaired users.

More information:

https://www.asahi.com/ajw/articles/16120141

15 November 2025

ISMAR 2025 Article

Recently, a paper I co-authored with colleagues from CYENS was presented at the 2025 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) in Daejeon, Korea. The paper is entitled "VR as a 'Drop-In' Well-Being Tool for Knowledge Workers" and explores how VR can meet the diverse physical and mental needs of knowledge workers. We developed Tranquil Loom, a VR app offering stretching, guided meditation, and open exploration across four environments. The app includes an AI assistant that suggests activities based on users’ emotional states.

We conducted a two-phase mixed-methods study: (1) interviews with 10 knowledge workers to guide the app's design, and (2) deployment with 35 participants gathering usage data, well-being measures, and interviews. Results showed increases in mindfulness and reductions in anxiety. Participants enjoyed both structured and open-ended activities, often using the app playfully. While AI suggestions were used infrequently, they prompted ideas for future personalization. Overall, participants viewed VR as a flexible, 'drop-in' tool, highlighting its value for situational rather than prescriptive well-being support.

More information:

https://www.computer.org/csdl/proceedings-article/ismar/2025/876100b213/2byA7RS10ze

14 November 2025

Portable Observatory Monitors Eruptions

Researchers with Istituto Nazionale di Geofisica e Vulcanologia (INGV) deployed a suitcase-sized portable observatory named Setup for the Kinematic Acquisition of Explosive Eruptions (SKATE) on the volcano Stromboli in Italy. The device is equipped with high-speed cameras, thermal sensors, acoustic sensors, and data-acquisition hardware, designed to autonomously monitor explosive eruptions.

SKATE records synchronized thermal, visual and acoustic data, greatly reducing the time scientists must spend in hazardous zones and enabling analysis of more than a thousand explosion events between 2019 and 2024. The detailed high-frame-rate and multiparametric data help volcanologists better understand eruption dynamics and may feed into training libraries for automated warning systems.

More information:

https://spectrum.ieee.org/volcano-monitoring-stromboli-skate

09 November 2025

Knitting Machine Functions like a 3D Printer

A new prototype of a knitting machine creates solid, knitted shapes, adding stitches in any direction – forward, backward and diagonal – so users can construct a wide variety of shapes and add stiffness to different parts of the object.

Unlike traditional knitting, which yields a 2D sheet of stitches, this proof-of-concept machine – developed by researchers at Cornell and Carnegie Mellon University – functions more like a 3D printer, building up solid shapes with horizontal layers of stitches.

More information:

https://news.cornell.edu/stories/2025/11/knitting-machine-makes-solid-3d-objects

06 November 2025

AI Creates Fast Detailed 3D Maps

MIT researchers have built a new AI system that allows robots to create detailed 3D maps of complex environments within seconds. The technology could transform how search-and-rescue robots navigate collapsed mines or disaster sites.

The system combines recent advances in machine learning with classical computer vision principles. It can process an unlimited number of images from a robot’s onboard cameras, generating accurate 3D reconstructions while estimating the robot’s position in real time.

More information:

https://interestingengineering.com/innovation/ai-mapping-system-for-rescue-robots-mit

29 October 2025

Amazon’s Delivery Glasses

Amazon is developing wearable smart glasses for its Delivery Service Partner (DSP) drivers to enable a truly hands-free delivery workflow. The glasses combine AI-powered computer vision with a heads-up display so that a driver can scan packages, follow turn-by-turn walking navigation, and receive delivery instructions directly in their line of sight, eliminating frequent phone glances and helping maintain attention on the road or the delivery environment.

The company emphasises driver input in the design: hundreds of Delivery Associates tested early versions of the glasses, and their feedback influenced ergonomics, display clarity, battery design, and safety features. Looking ahead, Amazon plans to evolve the platform with features like real-time defect detection, ambient hazard alerts (e.g., pets, lighting), and seamless integration across the full delivery journey, from station to vehicle to doorstep.

More information:

https://www.aboutamazon.com/news/transportation/smart-glasses-amazon-delivery-drivers

27 October 2025

Man With Brain Implant Controls Another Person’s Hand

Researchers at the Feinstein Institutes for Medical Research and the Donald and Barbara Zucker School of Medicine at Hofstra/Northwell have demonstrated a cutting-edge BCI that allows a person with paralysis to control another person’s hand and even feel what she feels. In the experiment, a man with a spinal cord injury used implanted sensors in his motor cortex, along with AI decoding and flexible electrode patches, to send signals to another volunteer’s arm, enabling her to perform tasks such as pouring water. 

This inter-human neural bypass opens new possibilities for cooperative rehabilitation, where users with different degrees of mobility can work together: the paralysed man helped a woman with partial paralysis improve her hand strength, while experiencing control and touch again himself. The trial suggests that by closing both movement and sensation loops, the technology could restore more natural sensorimotor function and perhaps motivate the body to repair itself. However, the study is still limited to a small group and long-term outcomes remain to be seen.

More information:

https://singularityhub.com/2025/10/23/one-mind-two-bodies-man-with-brain-implant-controls-another-persons-hand-and-feels-what-she-feels/