13 January 2026

OriRing VR Ring

Researchers at Sungkyunkwan University, in collaboration with EPFL, have developed a novel wearable haptic device called OriRing. This ring-shaped interface uses a 3-axis force sensor to provide users with realistic sensations of the weight and stiffness of virtual objects when interacting in VR. Unlike traditional haptic systems that rely on simple vibrations or bulky mechanisms, OriRing is ultra-lightweight (about 18 g) and capable of sensing multi-directional forces through micro-structured polymer surfaces, allowing precise tactile feedback directly at the fingertip.

Testing showed that users can not only perceive object properties like size and hardness but also adjust virtual object characteristics in real time using just finger movements. Because of its high force-to-weight performance and compact wearable form, OriRing offers advantages over glove-type devices and has potential applications beyond VR and gaming, such as rehabilitation, medical use, and remote robotic control. The research was published in Nature Electronics and marks a step forward in creating more immersive and physically grounded human-computer interaction technologies.
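As a rough illustration of how fingertip force sensing can drive this kind of weight and stiffness feedback, the sketch below maps a virtual object's stiffness and mass to a target fingertip force with a simple spring model. It is a minimal toy example, not OriRing's actual control code, and the stiffness, penetration depth, and mass values are invented.

    # Toy haptic-rendering sketch: map a virtual object's stiffness and weight
    # to a target fingertip force, the quantity a ring-type actuator would try
    # to reproduce. All parameters below are illustrative, not OriRing values.

    GRAVITY = 9.81  # m/s^2

    def target_force(stiffness_n_per_m, penetration_m, object_mass_kg):
        """Spring model: pressing force grows with penetration depth (Hooke's law),
        while the object's weight adds a constant component when it is held."""
        press_force = stiffness_n_per_m * penetration_m   # N, stiffness cue
        weight_force = object_mass_kg * GRAVITY           # N, weight cue
        return press_force + weight_force

    # Example: a fairly stiff virtual ball (500 N/m) pressed 4 mm, weighing 200 g.
    force = target_force(stiffness_n_per_m=500.0, penetration_m=0.004, object_mass_kg=0.2)
    print(f"Commanded fingertip force: {force:.2f} N")  # about 3.96 N with these numbers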

More information:

https://www.dongascience.com/en/news/75940

11 January 2026

China Proposes Rules to Regulate Human-Like AI Interactions

China’s cyberspace regulator in December released draft rules for public comment aimed at tightening oversight of artificial intelligence systems that mimic human personality traits and interact emotionally with users. The proposed regulations would apply to consumer-facing AI products and services in China that simulate human-like thinking, communication, and emotional engagement through text, images, audio, or video, signaling Beijing’s intent to shape the rapid rollout of such AI with stronger safety and ethical standards.

Under the draft framework, AI providers would be responsible for ensuring safety throughout the product lifecycle, including algorithm review, data security, and personal information protection. Companies would have to warn users against excessive use, monitor emotional states and signs of addiction, and intervene when necessary. The rules also set “red lines” banning AI content that could threaten national security, spread rumors, or promote violence or obscenity, and are now open for public comment before finalization.

More information:

https://www.reuters.com/world/asia-pacific/china-issues-drafts-rules-regulate-ai-with-human-like-interaction-2025-12-27/

10 January 2026

Real-Time Speech-to-Text App for Deaf Users

A technology company in Nagoya has developed a new smartphone app designed to help people who are deaf or hard of hearing by converting spoken language into text in real time. The app uses advanced speech-recognition algorithms to transcribe what others are saying into readable captions instantly, filling a gap left by existing tools that often struggle with accuracy or lag.

This innovation is aimed at improving everyday communication for users, making conversations more accessible without needing a human interpreter or manual input. By leveraging recent advances in artificial intelligence and machine learning, the app can handle nuanced speech patterns and offer translations as well, supporting smoother interaction in various contexts such as social situations, work, or public services.
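The article does not name the app's recognition engine, so the sketch below uses the open-source SpeechRecognition package with Google's free web recognizer as a stand-in for real-time captioning; the Japanese language code and the callback wiring are assumptions for illustration, not details of the Nagoya app.

    # Minimal live-captioning sketch using the open-source SpeechRecognition
    # package (pip install SpeechRecognition pyaudio). This is a generic
    # illustration, not the Nagoya app's actual engine or API.
    import time
    import speech_recognition as sr

    recognizer = sr.Recognizer()
    microphone = sr.Microphone()

    def on_audio(recog, audio):
        """Called in the background each time a phrase is captured."""
        try:
            # Google's free web recognizer; "ja-JP" assumes Japanese speech.
            text = recog.recognize_google(audio, language="ja-JP")
            print(f"[caption] {text}")
        except sr.UnknownValueError:
            pass  # speech was unintelligible; skip this chunk
        except sr.RequestError as err:
            print(f"[error] recognition service unavailable: {err}")

    with microphone as source:
        recognizer.adjust_for_ambient_noise(source)  # calibrate to room noise

    # Non-blocking: listens on a background thread and prints captions as they arrive.
    stop_listening = recognizer.listen_in_background(microphone, on_audio)
    time.sleep(30)    # caption for 30 seconds in this demo
    stop_listening()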

More information:

https://www.asahi.com/ajw/articles/16206171

08 January 2026

Humanoid Robots Learn to Work Like Humans

Boston Dynamics is increasingly using artificial intelligence to train its humanoid robot, Atlas, to perform real-world work tasks previously done by humans. In a recent 60 Minutes segment, the company showed how Atlas is being tested at Hyundai’s new Georgia factory, practicing duties like sorting roof racks on an assembly line. The modern Atlas blends machine learning with advanced hardware, using techniques like motion capture, simulation training, and direct human demonstration to learn movement and tasks that were once difficult to program manually. 
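Learning from direct human demonstration is commonly framed as behaviour cloning, where a policy network is fit to recorded state-action pairs. The sketch below is a generic PyTorch version of that idea with placeholder data and made-up dimensions; it is not Boston Dynamics' actual training pipeline.

    # Generic behaviour-cloning sketch: fit a policy network to (state, action)
    # pairs recorded from demonstrations. Dimensions and data are placeholders,
    # not Atlas's real observation or actuation spaces.
    import torch
    import torch.nn as nn

    STATE_DIM, ACTION_DIM = 64, 28  # made-up sizes for illustration

    policy = nn.Sequential(
        nn.Linear(STATE_DIM, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, ACTION_DIM),
    )
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Stand-in demonstration data; in practice this would come from motion
    # capture or teleoperated demonstrations.
    demo_states = torch.randn(1024, STATE_DIM)
    demo_actions = torch.randn(1024, ACTION_DIM)

    for epoch in range(50):
        pred = policy(demo_states)          # predict the demonstrated action
        loss = loss_fn(pred, demo_actions)  # penalise deviation from the demo
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    print(f"final imitation loss: {loss.item():.4f}")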

Boston Dynamics’ CEO and researchers acknowledge that while humanoids aren’t yet replacing large numbers of workers, they are poised to change the nature of labor by taking on repetitive or hazardous jobs, potentially relieving humans from backbreaking work and enabling operations in environments unsafe for people. They stress that robots will still require human oversight, maintenance, and training, and dismiss dystopian fears of autonomous machines running amok, even as the robotics industry races against global competitors and eyes a multi-billion-dollar future market.

More information:

https://www.cbsnews.com/amp/news/boston-dynamics-training-ai-humanoids-to-perform-human-jobs-60-minutes/

31 December 2025

XR4ED Invited Talk at UnitedXR Europe 2025

On 8 December 2025, I presented an overview and results of the XR4ED EU Project at UnitedXR Europe in Brussels. XR4ED focuses on fostering innovation in education through extended reality (XR) technologies. The project created a sustainable, centralised platform where educators, developers, and learners can access XR tools, applications, and resources tailored for learning and training purposes. By uniting the EdTech and XR communities across multiple EU member states, XR4ED overcomes fragmentation in the digital education technology ecosystem and supports the development and market readiness of immersive educational solutions that go beyond traditional teaching methods.

The presentation highlighted project results, including efforts to create an open marketplace for XR content, support for start-ups and SMEs via open calls and grants, and links built with related initiatives to strengthen Europe’s leadership in XR for education. The XR4ED platform is designed to enable personalised, innovative, and inclusive learning experiences, facilitating hands-on engagement and skills-based teaching. XR4ED also considers ethical, privacy, and inclusivity standards as part of its ecosystem, while encouraging adoption of immersive tools across schools, universities, industry training, and research communities.

More information:

https://youtu.be/tAi76BlXWis

30 December 2025

Virtual Reality Brings Connection and Joy to Senior Living

Virtual reality is being used in retirement communities to help older adults combat social isolation and enrich their daily lives. Residents of senior living communities in California use VR headsets to virtually explore new places, revisit meaningful memories, and take part in shared activities like underwater swims or concerts. These immersive experiences often spark conversation, improve cognitive engagement, and strengthen connections among peers who may otherwise struggle with loneliness.

Researchers and caregivers see VR as especially accessible for seniors compared with other technologies, and early evidence suggests it can support emotional well-being and social interaction without replacing traditional activities. They emphasize that while VR should complement rather than replace real-world engagement, it has shown potential benefits for those with memory challenges, including positive responses to virtual hikes and other simulations.

More information:

https://apnews.com/article/virtual-reality-senior-living-social-isolation-b20dc156f4aa0735d7f0cc7558de9bfc

27 December 2025

Bridging Photos and Floor Plans with Computer Vision

Cornell University researchers have developed a new computer-vision method, called C3Po, that enables machines to match real-world images with simplified building layouts like floor plans with much greater accuracy. To train and evaluate their approach, the team compiled a large dataset called C3, containing about 90,000 paired photos and floor plans across nearly 600 scenes, with detailed annotations of pixel matches and camera poses.

By reconstructing scenes in 3D from large internet photo collections and aligning them to publicly available architectural drawings, the dataset teaches models how real images relate to abstract representations. In tests, C3Po reduced matching errors by about 34% compared with earlier methods, suggesting that this multi-modal training could help future vision systems generalize across varied inputs and advance 3D computer vision research.
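Matching error in this setting is typically measured as the pixel distance between predicted and ground-truth correspondences. The short NumPy sketch below shows how a relative reduction like the reported 34% would be computed; the coordinates are invented toy values, not C3 annotations.

    # Toy computation of a correspondence (matching) error and its relative
    # reduction. Coordinates are invented examples, not the C3 annotations.
    import numpy as np

    def mean_pixel_error(predicted, ground_truth):
        """Mean Euclidean distance (in pixels) between matched 2-D points."""
        return float(np.mean(np.linalg.norm(predicted - ground_truth, axis=1)))

    gt = np.array([[120.0, 45.0], [300.0, 210.0], [87.0, 160.0]])
    baseline_pred = gt + np.array([[9.0, -7.0], [12.0, 5.0], [-8.0, 10.0]])
    new_pred = gt + np.array([[5.0, -4.0], [7.0, 3.0], [-5.0, 6.0]])

    baseline_err = mean_pixel_error(baseline_pred, gt)
    new_err = mean_pixel_error(new_pred, gt)
    reduction = 100.0 * (baseline_err - new_err) / baseline_err
    print(f"baseline {baseline_err:.1f}px  new {new_err:.1f}px  reduction {reduction:.0f}%")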

More information:

https://news.cornell.edu/stories/2025/12/computer-vision-connects-real-world-images-building-layouts

23 December 2025

Sharpa’s Dexterous Robotic Hand Enters Mass Production

Sharpa Robotics has announced that its flagship SharpaWave dexterous robotic hand has entered mass production, a major milestone for scaling human-level robot manipulation technology. The Singapore-based company has transitioned to a rolling production process with automated testing systems to ensure the reliability of the thousands of microscale gears, motors, and sensors inside each unit. Initial shipments began in October, and the rollout is timed ahead of SharpaWave’s showcase as a CES 2026 Innovation Awards honoree. Designed to match the size, strength, and precision of the human hand, the device has already attracted orders from global tech firms as part of efforts to make general-purpose robots practical and deployable outside of labs. 

SharpaWave features 22 active degrees of freedom and integrates proprietary Dynamic Tactile Array technology that combines visual and tactile sensing to detect forces as small as 0.005 newtons, enabling adaptive grip control and slip prevention. The hand is supported by an open, developer-friendly ecosystem, including the SharpaPilot software that works with popular simulation platforms like Isaac Gym, PyBullet, and MuJoCo, along with reinforcement-learning tools to speed up experimentation and integration. Certified for durability through one million uninterrupted grip cycles and built with safety-enhancing, backdrivable joints, the platform aims to bridge research and real-world robotic applications from delicate object handling to more robust manipulation tasks.
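As a toy illustration of the adaptive grip control and slip prevention described above (not SharpaPilot code, whose API the article does not show), the loop below tightens a simulated grip whenever a quantised tactile shear reading crosses a slip threshold. Only the 0.005 N resolution is taken from the announcement; every other number is invented.

    # Toy slip-prevention loop: tighten the grip whenever tactile shear suggests
    # the object is starting to slide. The 0.005 N resolution comes from the
    # announcement; the simulated sensor, gains, and limits are invented.
    import random

    SENSOR_RESOLUTION_N = 0.005   # smallest detectable force change (quoted spec)
    SLIP_THRESHOLD_N = 0.05       # shear level treated as incipient slip (assumed)
    GRIP_STEP_N = 0.2             # how much to tighten per control tick (assumed)
    MAX_GRIP_N = 15.0             # safety cap on grip force (assumed)

    def read_shear_force(grip_force_n):
        """Simulated tactile shear: the object slips less as the grip gets firmer."""
        raw = max(0.0, random.gauss(0.4 - 0.05 * grip_force_n, 0.02))
        return round(raw / SENSOR_RESOLUTION_N) * SENSOR_RESOLUTION_N  # quantise

    grip_force = 2.0  # N, initial squeeze (assumed)
    for tick in range(50):
        shear = read_shear_force(grip_force)
        if shear > SLIP_THRESHOLD_N and grip_force < MAX_GRIP_N:
            grip_force = min(MAX_GRIP_N, grip_force + GRIP_STEP_N)  # tighten the grip
        print(f"t={tick:02d}  shear={shear:.3f} N  grip={grip_force:.1f} N")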

More information:

https://interestingengineering.com/ai-robotics/sharpas-advanced-robotic-hand-enters-mass-production