30 January 2023

Complex Motions By Soft Robots

Scientists at Cornell University, the University of Delaware, and Israel's Technion-Israel Institute of Technology have enabled a soft robot to achieve complex motions via fluid-impelled actuators. The six-legged robot includes two syringe pumps and a linked series of elastomer bellows with slender tubes running in two parallel columns to facilitate antagonistic push-pull motions. The slender tubes impose viscous resistance on the moving fluid, which distributes pressure unevenly along the chain and bends the actuator into different contortions and motion patterns.

Researchers connected a series of elastomer bellows with slender tubes, running in a pair of parallel columns, all in a closed system. They developed a full descriptive model that could predict the actuator's possible motions and anticipate how different input pressures, geometries, and tube and bellows configurations achieve them – all with a single fluid input. The result is an actuator that can achieve far more complex motions, but without the multiple inputs and complex feedback control that previous methods required.
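The uneven pressure distribution can be illustrated with a toy calculation. This is not the researchers' model – just a minimal sketch assuming each slender tube segment obeys the standard Hagen–Poiseuille law, so that while fluid flows, viscous losses make each successive bellows in the chain see a lower pressure than the one before it:

```python
import math

def segment_resistance(mu, length, radius):
    """Hagen-Poiseuille hydraulic resistance of one slender tube segment:
    R = 8 * mu * L / (pi * r**4)."""
    return 8.0 * mu * length / (math.pi * radius ** 4)

def bellows_pressures(p_in, flow_rate, mu, lengths, radii):
    """Pressure seen by each bellows in a serial chain while fluid flows.

    Viscous losses across each tube segment make the pressure drop
    unevenly along the chain, so bellows closer to the pump inflate
    more than distant ones -- the effect the actuator exploits.
    All parameter values below are illustrative, not from the paper.
    """
    pressures = []
    p = p_in
    for length, radius in zip(lengths, radii):
        p -= flow_rate * segment_resistance(mu, length, radius)
        pressures.append(p)
    return pressures

# Example: water-like fluid, four identical 5 cm segments, 0.5 mm radius
ps = bellows_pressures(p_in=10_000.0, flow_rate=1e-7,
                       mu=1e-3, lengths=[0.05] * 4, radii=[5e-4] * 4)
```

Running this yields a strictly decreasing pressure profile along the chain; varying the segment radii or lengths reshapes that profile, which is the sense in which geometry selects the motion pattern.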

More information:

https://news.cornell.edu/stories/2023/01/soft-robots-harness-viscous-fluids-complex-motions

26 January 2023

BCI Speller Achieves 62 Words Per Minute

Eight years ago, a patient lost her power of speech because of ALS, or Lou Gehrig’s disease, which causes progressive paralysis. She can still make sounds, but her words have become unintelligible, leaving her reliant on a writing board or iPad to communicate. Now, after volunteering to receive a brain implant, the woman has been able to rapidly communicate phrases like “I don’t own my home” and “It’s just tough” at a rate approaching normal speech. People without speech deficits typically talk at a rate of about 160 words a minute. Even in an era of keyboards, thumb-typing, emojis, and internet abbreviations, speech remains the fastest form of human-to-human communication. The new research was carried out at Stanford University.

The BCI the researchers work with uses a small pad of sharp electrodes embedded in a person's motor cortex, the brain region most involved in movement. This allows researchers to record activity from a few dozen neurons at once and find patterns that reflect what motions someone is thinking of, even if the person is paralyzed. Researchers wanted to know if neurons in the motor cortex contained useful information about speech movements, too. That is, could they detect how subject T12 was trying to move her mouth, tongue, and vocal cords as she attempted to talk? These are small, subtle movements, and just a few neurons contained enough information to let a computer program predict, with good accuracy, what words the patient was trying to say.
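The core idea – mapping firing-rate patterns from a few dozen neurons to attempted words – can be sketched as a nearest-template classifier. The Stanford system uses a far more sophisticated decoder; this toy version, with entirely invented firing rates and word labels, only shows the shape of the problem:

```python
import math
import random

rng = random.Random(0)

# Hypothetical average firing rates (spikes/s) of 40 motor-cortex
# neurons while a subject attempts three words; all values invented.
N_NEURONS = 40
word_templates = {
    word: [rng.uniform(5, 50) for _ in range(N_NEURONS)]
    for word in ("hello", "tough", "home")
}

def simulate_trial(word, noise=3.0):
    """One noisy single-trial recording of an attempted word."""
    return [r + rng.gauss(0, noise) for r in word_templates[word]]

def distance(a, b):
    """Euclidean distance between two firing-rate vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def decode(rates):
    """Nearest-template decoding: pick the word whose average firing
    pattern is closest to the observed rates."""
    return min(word_templates, key=lambda w: distance(rates, word_templates[w]))
```

Even this crude decoder recovers the attempted word when the trial-to-trial noise is small relative to the differences between word patterns, which is the regime the implanted electrodes make possible.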

More information:

https://www.technologyreview.com/2023/01/24/1067226/an-als-patient-set-a-record-for-communicating-via-a-brain-implant-62-words-per-minute/

24 January 2023

Brainwaves Identify Music Being Listened To

Researchers at the University of Essex hope the project could lead to helping people with severe communication disabilities, such as locked-in syndrome or stroke, by decoding language signals within their brains through non-invasive techniques. Essex scientists wanted to find a less invasive way of decoding acoustic information from signals in the brain to identify and reconstruct a piece of music someone was listening to. Whilst there have been successful previous studies monitoring and reconstructing acoustic information from brain waves, many have used more invasive methods such as electrocorticography (ECoG) - which involves placing electrodes inside the skull to monitor the actual surface of the brain.

Researchers used a combination of two non-invasive methods - fMRI, which measures blood flow through the entire brain, and electroencephalogram (EEG), which measures what is happening in the brain in real time - to monitor a person's brain activity whilst they listened to a piece of music. Using a deep learning neural network model, the data were translated to reconstruct and identify the piece of music. Music is a complex acoustic signal, sharing many similarities with natural language, so the model could potentially be adapted to translate speech. The eventual goal of this strand of research would be to translate thought, which could offer an important aid in the future for people who struggle to communicate, such as those with locked-in syndrome.
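The identification step – matching brain-derived features against a set of candidate pieces – can be sketched with simple cosine-similarity matching. The Essex study used a deep learning model to produce the features; here both the feature vectors and the piece names are invented placeholders:

```python
import math
import random

rng = random.Random(1)

# Invented acoustic-feature templates for three candidate pieces; in
# the real study such features would come from a deep network trained
# on EEG/fMRI recordings, not from random vectors.
N_FEATURES = 64
pieces = {
    name: [rng.uniform(-1, 1) for _ in range(N_FEATURES)]
    for name in ("piece_a", "piece_b", "piece_c")
}

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def brain_features(piece_name, noise=0.3):
    """Stand-in for features decoded from EEG/fMRI while listening."""
    return [f + rng.gauss(0, noise) for f in pieces[piece_name]]

def identify(features):
    """Pick the candidate piece whose template best matches the
    decoded features."""
    return max(pieces, key=lambda name: cosine(features, pieces[name]))
```

The point of the sketch is that identification only requires the decoded features to resemble the right template more than the wrong ones, which is a weaker demand than full reconstruction of the audio.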

More information:

https://www.essex.ac.uk/news/2023/01/19/decoding-brainwaves-to-identify-music-listened-to