27 December 2025

Bridging Photos and Floor Plans with Computer Vision

Cornell University researchers have developed a new computer-vision method, called C3Po, that enables machines to match real-world images with simplified building layouts such as floor plans far more accurately. To train and evaluate the approach, the team compiled a large dataset called C3, containing about 90,000 paired photos and floor plans across nearly 600 scenes, with detailed annotations of pixel matches and camera poses.

The dataset was built by reconstructing scenes in 3D from large internet photo collections and aligning them with publicly available architectural drawings, teaching models how real images relate to abstract representations. In tests, C3Po reduced matching errors by about 34% compared with earlier methods, suggesting that this multi-modal training could help future vision systems generalize across varied inputs and advance 3D computer vision research.

More information:

https://news.cornell.edu/stories/2025/12/computer-vision-connects-real-world-images-building-layouts

23 December 2025

Sharpa’s Dexterous Robotic Hand Enters Mass Production

Sharpa Robotics has announced that its flagship SharpaWave dexterous robotic hand has entered mass production, a major milestone for scaling human-level robot manipulation technology. The Singapore-based company has transitioned to a rolling production process with automated testing systems to ensure the reliability of the thousands of microscale gears, motors, and sensors inside each unit. Initial shipments began in October, and the rollout is timed ahead of SharpaWave’s showcase as a CES 2026 Innovation Awards honoree. Designed to match the size, strength, and precision of the human hand, the device has already attracted orders from global tech firms as part of efforts to make general-purpose robots practical and deployable outside of labs. 

SharpaWave features 22 active degrees of freedom and integrates proprietary Dynamic Tactile Array technology that combines visual and tactile sensing to detect forces as small as 0.005 newtons, enabling adaptive grip control and slip prevention. The hand is supported by an open, developer-friendly ecosystem, including the SharpaPilot software that works with popular simulation platforms like Isaac Gym, PyBullet, and MuJoCo, along with reinforcement-learning tools to speed up experimentation and integration. Certified for durability through one million uninterrupted grip cycles and built with safety-enhancing, backdrivable joints, the platform aims to bridge research and real-world robotic applications from delicate object handling to more robust manipulation tasks.
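The slip-prevention behavior described above can be illustrated with a toy control loop. This is a hypothetical sketch, not Sharpa's actual API: the function names, thresholds, and the shear-to-normal-force slip heuristic are all assumptions for illustration; only the 0.005-newton sensitivity figure comes from the article.

```python
# Hypothetical slip-prevention grip loop (not Sharpa's actual API).
# Assumes a tactile array reporting normal and shear force per reading,
# with a 0.005 N sensitivity floor (the figure quoted for SharpaWave).

FORCE_FLOOR_N = 0.005        # smallest force the tactile array can resolve
SLIP_SHEAR_RATIO = 0.8       # shear/normal ratio above which slip is likely
GRIP_STEP_N = 0.05           # how much to tighten per control tick
MAX_GRIP_N = 5.0             # safety ceiling on commanded grip force

def update_grip(normal_n: float, shear_n: float, grip_cmd_n: float) -> float:
    """Return the next commanded grip force from one tactile reading."""
    if normal_n < FORCE_FLOOR_N:
        # No meaningful contact yet: close gently until contact registers.
        return min(grip_cmd_n + GRIP_STEP_N, MAX_GRIP_N)
    if shear_n / normal_n > SLIP_SHEAR_RATIO:
        # Shear dominates normal force: object is starting to slip, tighten.
        return min(grip_cmd_n + GRIP_STEP_N, MAX_GRIP_N)
    return grip_cmd_n        # stable contact: hold current force

# Example tick: light contact with high shear triggers a small tightening.
print(update_grip(normal_n=0.1, shear_n=0.09, grip_cmd_n=0.5))
```

A real controller would run such a loop at high frequency per finger, which is the kind of adaptive grip control and slip prevention the tactile array is said to enable.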

More information:

https://interestingengineering.com/ai-robotics/sharpas-advanced-robotic-hand-enters-mass-production

16 December 2025

AI Co-Pilot for More Natural Prosthetic Hands

Researchers at the University of Utah have developed an AI co-pilot system for prosthetic bionic hands that uses advanced sensors and machine learning to make gripping and manipulation more intuitive and natural for users. By equipping commercial prosthetic hands with pressure and proximity sensors and training an AI model to interpret that data, the system can autonomously adjust finger positions and grip force in real time, significantly improving success rates in tasks like picking up fragile objects.

The shared-control approach balances human intention with AI assistance, reducing cognitive burden and addressing a major reason many amputees abandon their prosthetics. Early studies show greater dexterity and precision compared with traditional myoelectric control, and the team is exploring future enhancements like tighter neural integration to further blur the line between artificial and natural limb control as the technology moves toward real-world use.
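The shared-control idea can be sketched as a simple blend of the user's command with an AI correction. This is an illustrative toy, not the Utah team's implementation: the blending weight, the proximity threshold, and the fragile-object clamp are invented for the example.

```python
# Toy shared-control sketch (not the Utah team's implementation).
# An autonomy weight trades off the user's myoelectric grip command
# against an AI correction derived from pressure/proximity sensing.

def blend_grip(user_cmd: float, ai_cmd: float, autonomy: float) -> float:
    """Blend user intent with AI assistance; autonomy in [0, 1]."""
    autonomy = max(0.0, min(1.0, autonomy))
    return (1.0 - autonomy) * user_cmd + autonomy * ai_cmd

def ai_correction(user_cmd: float, proximity_m: float, fragile: bool) -> float:
    """Toy policy: cap grip force on fragile objects once they are close."""
    if fragile and proximity_m < 0.02:   # within 2 cm of a fragile object
        return min(user_cmd, 0.3)        # clamp to a gentle grip
    return user_cmd

user = 0.8                               # strong close signal from the user
ai = ai_correction(user, proximity_m=0.01, fragile=True)
print(blend_grip(user, ai, autonomy=0.7))   # gentler than the raw command
```

The point of the design is that the user still drives the motion; the AI only nudges the outcome, which is what reduces cognitive burden without taking control away.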

More information:

https://arstechnica.com/ai/2025/12/scientists-built-an-ai-co-pilot-for-prosthetic-bionic-hands/

15 December 2025

Vine-Inspired Soft Robots That Lift Without Harm

MIT and Stanford engineers have created a soft, vine-like robotic gripper that uses inflatable tendrils to grow around, wrap, and gently lift objects from fragile items like glass vases to heavy loads like watermelons.

This bio-inspired design offers a gentler, more adaptable alternative to traditional rigid grippers and could be used in applications ranging from eldercare and patient transfers to agriculture, logistics, and industrial handling.

More information:

https://interestingengineering.com/ai-robotics/mit-stanford-robotic-vines-soft-gripper

06 December 2025

3D Map Covering 2.75 Billion Buildings

Scientists at Technical University of Munich (TUM) have unveiled GlobalBuildingAtlas, the first global, high-resolution 3D map of Earth’s man-made environment. The atlas covers about 2.75 billion buildings around the world, using satellite imagery from 2019 and offering a resolution roughly 30 times finer than previous global building maps. 

Each structure is represented at a fine resolution of about 3 × 3 meters, enough to estimate building height, volume, and density. Around 97% of the buildings are modelled as simplified 3D LoD1 geometries, not highly detailed, but sufficient for large-scale computational modelling.
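What an LoD1 representation implies can be shown in a few lines: each building is an extruded footprint (a prism), so its volume is footprint area times height. The polygon and height below are made-up examples, not GlobalBuildingAtlas data.

```python
# Sketch of an LoD1 block model: an extruded footprint polygon.
# Example geometry is invented; it is not GlobalBuildingAtlas data.

def footprint_area(xy):
    """Shoelace formula for a simple polygon given as (x, y) vertices in meters."""
    n = len(xy)
    s = 0.0
    for i in range(n):
        x1, y1 = xy[i]
        x2, y2 = xy[(i + 1) % n]   # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def lod1_volume(xy, height_m):
    """Volume of an LoD1 block model: footprint area x building height."""
    return footprint_area(xy) * height_m

# A 12 m x 9 m rectangular footprint, 6 m tall:
box = [(0, 0), (12, 0), (12, 9), (0, 9)]
print(lod1_volume(box, 6.0))   # -> 648.0
```

Aggregating such prisms over billions of footprints is what makes city- and country-scale estimates of building volume and density computationally tractable.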

More information:

https://interestingengineering.com/innovation/first-high-resolution-3d-map

05 December 2025

AI Unlocks Medieval Jewish Manuscript Treasure Trove

Researchers working on the MiDRASH transcription project are using AI to unlock the vast holdings of the Cairo Geniza, an archive of more than 400,000 medieval Jewish manuscripts. Although the full collection has been digitized, only about a tenth of the documents had previously been transcribed. Many items remained uncatalogued or existed only as fragmented images in Hebrew, Arabic, Aramaic, or Yiddish. The AI tool is now being trained to read and transcribe these historic scripts and to piece disordered fragments together into coherent documents.

The potential impact is enormous: with AI-enabled transcription and reconstruction, scholars can much more easily search, cross-reference and analyze these manuscripts. Already, for example, the project recovered a 16th-century Yiddish letter from a widow in Jerusalem to her son in Egypt, describing life during a plague, something that might have remained hidden without these tools. Ultimately, researchers hope this will allow a reconstruction of social, economic, religious, and intellectual life in medieval Jewish communities.

More information:

https://www.reuters.com/business/media-telecom/vast-trove-medieval-jewish-records-opened-up-by-ai-2025-11-26/

25 November 2025

Direct Access to Our Brains

Recent advances in neurotechnology, including wearable brain-computer interfaces (BCIs), Neuralink implants, and AI-driven neural decoding, are making it possible to translate brain activity into actions, speech, images, and emotions, blurring the line between human cognition and digital systems. Devices ranging from MIT's EEG-equipped glasses to Neuralink's implanted chips demonstrate both the medical potential of BCIs and their growing commercial interest. These systems raise profound concerns: they can decode sensitive traits, track attention and emotion, and potentially manipulate mental states, opening possibilities for misuse by companies, governments, or political actors.

As the neurotech industry rapidly expands, the risks of consumer devices collecting neural data with little regulation are becoming increasingly urgent. This growing capability has triggered global debates about neural privacy, cognitive liberty, and whether new neurorights are needed. Countries such as Chile and Spain, several U.S. states, and international bodies have begun exploring legal protections for identity, agency and mental privacy. Advocates argue that traditional human rights are insufficient for technologies that can read or alter neural processes, while others warn that proliferating new rights may cause legal confusion.

More information:

https://www.nytimes.com/2025/11/14/magazine/neurotech-neuralink-rights-regulations.html

24 November 2025

PropType AR Interface

Researchers developed PropType, a novel AR interface that allows users to turn everyday objects (e.g., water bottles, mugs, books, or soda cans) into usable typing surfaces. Instead of relying on floating virtual keyboards or external hardware, PropType overlays a virtual keyboard layout onto a physical object being held or manipulated, leveraging the object's real tactile feedback and adapting the layout to the object's shape and how the user grips it.

To create this system, the team conducted a study with 16 participants to understand how people hold different props and type using them; they then developed custom keyboard layouts and a configuration/editing tool so users can tailor their typing surface and visual feedback. Because people are already interacting with a tangible object, the approach promises better comfort (avoiding gorilla arm fatigue) and more intuitive text input in mobile or device-free AR scenarios.

More information:

https://interestingengineering.com/innovation/proptype-ar-interface-keyboard