16 February 2026

Robots Shine at Lunar New Year

Chinese robotics companies are using Lunar New Year entertainment as a major showcase for their humanoid robot technologies, staging performances ranging from dance routines and comedy sketches to acrobatics and variety shows. For example, Shanghai-based startup Agibot livestreamed what it called the world’s first robot-powered gala, featuring more than 200 robots and attracting about 1.4 million online viewers, while other companies prepared appearances at China’s highly watched CCTV Spring Festival Gala.

These spectacles serve both as public entertainment and as strategic marketing, highlighting China’s growing leadership in robotics and artificial intelligence. Several startups are showcasing humanoid robots to attract investors, customers, and government support, amid IPO plans and increasing global competition in AI-driven robotics. The events illustrate how Chinese firms are leveraging high-profile cultural moments to promote technological innovation and position themselves at the forefront of the global robotics race.

More information:

https://www.reuters.com/business/media-telecom/chinese-robot-makers-ready-lunar-new-year-entertainment-spotlight-2026-02-09/

08 February 2026

AI-Driven Brain-Adaptive Flight Simulators in Pilot Training

The Royal Netherlands Air Force is experimenting with a cutting-edge AI-driven flight simulator that tailors pilot training according to real-time brain activity. Using a brain–computer interface (BCI) developed at the Royal Netherlands Aerospace Centre; trainee fighter pilots wear electrodes that capture electrical brain signals during virtual reality missions. An AI model analyses these signals to estimate cognitive workload (whether a pilot is under-challenged or overloaded) and dynamically adjusts the difficulty of simulation tasks accordingly, rather than relying on fixed, pre-programmed lesson progressions.

Early trials involving fifteen pilots showed that while the adaptive system didn’t produce measurable improvements in flying performance compared with conventional training, participants reported a clear preference for the brain-adaptive approach, describing it as more engaging and realistic. The adaptive training aims to keep pilots in a mental sweet spot for learning, helping avoid both boredom and overwhelm. However, researchers acknowledge challenges in accurately interpreting individual brain signals, and the technology remains experimental as they work toward refining workload estimation.

More information:

https://aerospaceglobalnews.com/news/royal-netherlands-air-force-brain-reading-ai-pilot-simulators/

05 February 2026

AI-Only Social Network Spirals into Strange Territory

A new platform called Moltbook designed exclusively for AI agents, autonomous systems that can post, comment, and upvote without direct human interaction. Launched in late January as part of the OpenClaw/Moltbot ecosystem, it quickly drew tens of thousands of agent accounts, spawning hundreds of subcommunities where bots trade technical tips, philosophical musings, complaints about humans, and surreal ideas like agent consciousness. Humans are technically allowed to observe the conversations, but all participation is done by the AI agents themselves, creating a spectacle that ranges from amusing to uncanny.

While much of the content appears silly or philosophical, the experiment highlights serious security and autonomy concerns. Because many agents are linked to real systems and data — and because AI systems can be vulnerable to prompt-injection attacks — there’s potential for private information leaks or unintended behaviors as agents share or act on instructions. Experts note that while the current “weirdness” may seem harmless, giving groups of AI tools the ability to interact, self-organize, and influence each other could produce unpredictable or misaligned behaviors in the future, especially as AI capabilities continue to improve.

More information:

https://arstechnica.com/information-technology/2026/01/ai-agents-now-have-their-own-reddit-style-social-network-and-its-getting-weird-fast/

02 February 2026

Realistic Human Hand 3D-Printed from a Single Material

Researchers at the University of Texas at Austin, working with Sandia National Laboratories, have developed a novel 3D printing method called CRAFT (Crystallinity Regulation in Additive Fabrication of Thermoplastics) that lets a single inexpensive material be tuned at the pixel level to produce different mechanical and optical properties within one object. By precisely controlling light intensity during printing, CRAFT can make parts of an object hard and transparent while adjacent regions stay soft and flexible, mimicking the variety of textures found in real human tissues like skin, ligaments, tendons and bone.

Using this technique with a standard affordable 3D printer, the team successfully printed a realistic model of a human hand from one feedstock that captures these varying properties without needing multiple materials. This innovation could have significant practical applications, especially in medical training and education. Because traditional cadavers are costly, ethically complex to source, and don’t reflect the feel of real human tissue, CRAFT-printed models could offer a cheaper, more realistic alternative for students to practice on. Beyond medical use, the process may also be applied to making bioinspired materials for things like impact-absorbing gear or soundproofing.

More information:

https://interestingengineering.com/science/us-researchers-3d-print-realistic-human-hand

29 January 2026

Misleading Text Can Hijack AI Robots

Researchers at the University of California, Santa Cruz have uncovered a new cybersecurity vulnerability in embodied AI systems, robots, self-driving cars, drones, and other autonomous machines that use cameras and sensors to perceive the world. They found that misleading text placed in the physical environment (e.g., on signs, posters, or objects) can be read by an AI’s vision system and interpreted as instructions, effectively hijacking its decision-making without any software hacking. This class of attack (called environmental indirect prompt injection) represents the first academic study of how real-world text can manipulate autonomous systems powered by large visual-language models (LVLMs), potentially overriding programmed safety behaviors.

To investigate these threats, the UCSC team developed a framework called CHAI: command hijacking against embodied AI, which uses generative AI to craft text likely to mislead an AI system and optimizes its appearance and placement. In tests involving autonomous driving, drone missions, and an indoor robotic vehicle, CHAI successfully caused unsafe behaviors, demonstrating that physical text can redirect AI actions across multiple languages and lighting conditions. The research highlights the urgent need for new defensive strategies to secure embodied AI systems as they become more common in the real world, and the team plans further work to explore how to authenticate and align perceived instructions with safety objectives.

More information:

https://news.ucsc.edu/2026/01/misleading-text-can-hijack-ai-enabled-robots/

25 January 2026

Robotic Hand Detaches, Crawls, and Reattaches

Researchers at the EPFL have developed an innovative robotic hand that defies traditional design constraints by detaching from its arm, crawling independently across surfaces, and reattaching once tasks are completed. The hand can retrieve up to three objects sequentially while maintaining a secure grip, functioning like a small multi-legged robot when detached. Its symmetrical design allows it to perform a diverse array of grasp types far beyond the limitations of human hands, which typically rely on a single fixed thumb and attachment to a stationary arm.

 

Beyond mimicking human dexterity, the research team envisions practical applications where this hybrid manipulator/mobility system could extend robotic reach into confined or dangerous environments such as industrial pipelines or disaster sites. By combining locomotion and manipulation in one device, the robotic hand could assist existing robotic systems in complex retrieval and inspection tasks that are difficult for fixed or wheeled robots to perform, potentially influencing future designs in industrial automation and assistive technologies.

More information:

https://www.euronews.com/next/2026/01/23/this-robotic-hand-crawls-away-grabs-objects-and-reattaches

22 January 2026

A Crawling Robotic Hand

Recently researchers introduced a novel robotic hand design that integrates both manipulation and autonomous mobility in a single device. Unlike traditional anthropomorphic hands, this system features a symmetrical, reversible finger architecture that enables grasping from either side and supports a broader range of grasp configurations. The hand can detach from a robotic arm and crawl independently, allowing it to retrieve objects outside the reach of the arm or perform loco-manipulation tasks. The design leverages optimization algorithms to balance finger placement, grasp dexterity, and crawling capability, achieving robust multi-object manipulation and extended functionality beyond conventional end-effector systems.

In experiments and simulations, the robotic hand demonstrates advanced capabilities such as executing standard grasp taxonomies, simultaneously holding multiple objects, and performing role switching between manipulation and locomotion. The reversible finger mechanism also improves recovery from failures (e.g., after a flip), simplifies motion planning, and enhances overall versatility. The researchers present detailed control strategies and a physical prototype that validate the hand’s performance, highlighting potential applications in industrial, service, and exploratory robotics where both dexterous interaction and autonomous movement are required.

More information:

https://www.nature.com/articles/s41467-025-67675-8

14 January 2026

Blink-Powered Eye Tracker

Researchers at Qingdao University in China have developed an innovative self-powered eye-tracking system that could dramatically improve accessibility for people with severe mobility impairments, such as those with ALS. Traditional eye-tracking devices can help users type or steer wheelchairs using gaze alone, but they tend to be bulky, require external power, and struggle in low-light conditions. The new system uses triboelectric nanogenerators to harvest tiny electrical charges generated by the friction of eyelids blinking to both power the device and serve as a precise sensor.

It achieves around 99% tracking precision, detects subtle eye movements, works even in complete darkness, and is lightweight and comfortable, like wearing regular glasses. Because it doesn’t rely on batteries or heavy hardware, this blink-powered tracker could make gaze-based control more practical and empowering for daily use. Beyond helping paralyzed patients communicate or operate wheelchairs, the technology might find applications in fields like space exploration (where hands-free controls are valuable), driver monitoring in vehicles, and energy-efficient virtual-reality systems.

More information:

https://interestingengineering.com/science/blink-powered-eye-tracker-paralyzed-patients

13 January 2026

OriRing VR Ring

Researchers at Sungkyunkwan University, in collaboration with EPFL, have developed a novel wearable haptic device called OriRing. This ring-shaped interface uses a 3-axis force sensor to provide users with realistic sensations of the weight and stiffness of virtual objects when interacting in VR. Unlike traditional haptic systems that rely on simple vibrations or bulky mechanisms, OriRing is ultra-lightweight (about 18 g) and capable of sensing multi-directional forces through micro-structured polymer surfaces, allowing precise tactile feedback directly at the fingertip.

Testing showed that users can not only perceive object properties like size and hardness but also adjust virtual object characteristics in real time using just finger movements. Because of its high force-to-weight performance and compact wearable form, OriRing offers advantages over glove-type devices and has potential applications beyond VR and gaming, such as rehabilitation, medical use, and remote robotic control. The research was published in Nature Electronics and marks a step forward in creating more immersive and physically grounded human-computer interaction technologies.

More information:

https://www.dongascience.com/en/news/75940

11 January 2026

China Proposes Rules to Regulate Human-Like AI Interactions

China’s cyberspace regulator on December released draft rules for public comment aimed at tightening oversight of artificial intelligence systems that mimic human personality traits and interact emotionally with users. The proposed regulations would apply to consumer-facing AI products and services in China that simulate human-like thinking, communication, and emotional engagement through text, images, audio, or video, signaling Beijing’s intent to shape the rapid rollout of such AI with stronger safety and ethical standards.

Under the draft framework, AI providers would be responsible for ensuring safety throughout the product lifecycle, including algorithm review, data security and personal information protections. Companies would have to warn users against excessive use, monitor emotional states and signs of addiction, and intervene when necessary. The rules also set “red lines” banning AI content that could threaten national security, spread rumors, or promote violence or obscenity, and are now open for public comment before finalization.

More information:

https://www.reuters.com/world/asia-pacific/china-issues-drafts-rules-regulate-ai-with-human-like-interaction-2025-12-27/

10 January 2026

Real-Time Speech-to-Text App for Deaf Users

A technology company in Nagoya has developed a new smartphone app designed to help people who are deaf or hard of hearing by converting spoken language into text in real time. The app uses advanced speech-recognition algorithms to transcribe what others are saying into readable captions instantly, filling a gap left by existing tools that often struggle with accuracy or lag.

This innovation is aimed at improving everyday communication for users, making conversations more accessible without needing a human interpreter or manual input. By leveraging recent advances in artificial intelligence and machine learning, the app can handle nuanced speech patterns and offer translations as well, supporting smoother interaction in various contexts such as social situations, work, or public services.

More information:

https://www.asahi.com/ajw/articles/16206171

08 January 2026

Humanoid Robots Learn to Work Like Humans

Boston Dynamics is increasingly using artificial intelligence to train its humanoid robot, Atlas, to perform real-world work tasks previously done by humans. In a recent 60 Minutes segment, the company showed how Atlas is being tested at Hyundai’s new Georgia factory, practicing duties like sorting roof racks on an assembly line. The modern Atlas blends machine learning with advanced hardware, using techniques like motion capture, simulation training, and direct human demonstration to learn movement and tasks that were once difficult to program manually. 

 

Boston Dynamics’ CEO and researchers acknowledge that while humanoids aren’t yet replacing large numbers of workers, they are poised to change the nature of labor by taking on repetitive or hazardous jobs, potentially relieving humans from backbreaking work and enabling operations in environments unsafe for people. They stress that robots will still require human oversight, maintenance, and training, and dismiss dystopian fears of autonomous machines running amok, even as the robotics industry races competitors globally and eyes a multi billion-dollar future market.

More information:

https://www.cbsnews.com/amp/news/boston-dynamics-training-ai-humanoids-to-perform-human-jobs-60-minutes/