29 September 2019

Frontiers in Human Neuroscience Article 2019

A few days ago, HCI Lab researchers published a paper in Frontiers in Human Neuroscience entitled ‘Progressive Training for Motor Imagery Brain-Computer Interfaces Using Gamification and Virtual Reality Embodiment’. The paper presents gamified motor imagery brain-computer interface (MI-BCI) training in immersive virtual reality. The aim of the proposed training method is to increase engagement, attention, and motivation in co-adaptive event-driven MI-BCI training.


This was achieved through gamification, a progressive increase in training pace, and a virtual reality design that reinforces body ownership transfer (embodiment) into the avatar. Of the 20 healthy participants who performed six runs of two-class MI-BCI training (left/right hand), 19 reached a basic level of MI-BCI operation, with an average peak session accuracy of 75.84%. This indicates that the proposed training method succeeded in improving MI-BCI skills.
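The paper's own pipeline is not reproduced here, but as a rough illustration of the underlying two-class classification task, the sketch below trains a standard linear classifier on log band-power features. All of it is invented for the example: the EEG epochs are simulated, and the sampling rate, channel count, and mu-band choice are typical assumptions rather than the study's parameters.

```python
# Minimal sketch of a two-class motor-imagery classifier (NOT the paper's code).
# Simulated EEG epochs stand in for real recordings; features are log band-power
# in the mu band (8-12 Hz), a common choice for left/right-hand MI.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs, n_channels, n_epochs = 250, 8, 120
epoch_len = 2 * fs  # 2-second epochs

# Simulate epochs: class 0 ("left hand") gets extra mu power on channel 0,
# class 1 ("right hand") on channel 1 (a crude stand-in for lateralization).
t = np.arange(epoch_len) / fs
X_raw = rng.standard_normal((n_epochs, n_channels, epoch_len))
y = rng.integers(0, 2, n_epochs)
for i, label in enumerate(y):
    X_raw[i, label] += 1.5 * np.sin(2 * np.pi * 10 * t)  # 10 Hz mu rhythm

# Band-pass to the mu band, then take log-variance per channel as features.
b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
X_filt = filtfilt(b, a, X_raw, axis=-1)
features = np.log(X_filt.var(axis=-1))  # shape: (n_epochs, n_channels)

# Linear discriminant analysis, a standard simple MI-BCI classifier.
scores = cross_val_score(LinearDiscriminantAnalysis(), features, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2%}")
```

On real recordings, the same log band-power plus LDA recipe is a common baseline against which reported session accuracies like the 75.84% above can be understood.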

More information:

27 September 2019

Sonogenetics Controls the Behavior of Brain Cells

Neuroscientists are always looking for ways to influence neurons in living brains so that we can analyze the outcomes and understand both how the brain works and how to better treat brain disorders. For the last two decades, the go-to tool for researchers in the field has been optogenetics, a technique in which engineered brain cells in animals are controlled with light. This process involves inserting an optic fiber deep within the animal’s brain to deliver light to the target region. Ultrasound offers an alternative way to control cells: since sound is a form of mechanical energy, brain cells that can be made mechanically sensitive can then be controlled with ultrasound.


This research led to the discovery of the first naturally occurring mechanical-detector protein that makes brain cells sensitive to ultrasound. The technology works in two stages. First, new genetic material is introduced into malfunctioning brain cells using a virus as a delivery device. This provides the instructions for those cells to make the ultrasound-responsive proteins. The next step is emitting ultrasound pulses from a device outside the animal’s body, targeting the cells carrying the sound-sensitive proteins. The ultrasound pulse remotely activates the cells. Researchers discovered that neurons with the TRP-4 protein are sensitive to ultrasonic frequencies.

More information:

25 September 2019

Boston Dynamics’ Robot Dog On Sale

Boston Dynamics has started selling its four-legged Spot robot, but you probably won’t be able to get your hands on one just yet. The company is only going to sell the robot to companies that can put it to practical use and develop custom modules that can be attached to its back to help perform specific tasks. It’s the reverse of the traditional sales process: firms need to send pitches to Boston Dynamics, which will then assess them for suitability.


Boston Dynamics only has 20 of the robots available right now, but it’s hoping to manufacture about 1,000 for use out in the field, so it has to be very choosy about who gets one. The company hasn’t disclosed how much the robots will cost. Spot could check for gas leaks using methane sensors, map the interior of a building with a lidar module, or even open doors using its arm. The robot is designed to withstand rain, so it can work outdoors, too.

More information:

22 September 2019

Machine Learning Reconstructs Deteriorated Drawings

Researchers at TU Delft in the Netherlands have recently developed a convolutional neural network (CNN)-based model to reconstruct drawings that have deteriorated over time. In their study, published in Springer's Machine Vision and Applications, they specifically used the model to reconstruct some of Vincent Van Gogh's drawings that were ruined over the years due to ink fading and discoloration. Researchers investigated the use of machine-learning techniques for the pixel-wise reconstruction of deteriorated paintings. When it comes to art preservation, the deterioration of paintings and drawings is a key challenge, so tools that can automatically reconstruct incomplete or ruined artworks would greatly simplify the work of art historians.


They trained their CNN-based model on reproductions of deteriorated drawings by the post-impressionist painter Van Gogh. Some of Van Gogh's ink drawings have deteriorated so significantly over the past century that they cannot currently be exhibited, and art historians have often tried to reproduce them. The researchers wanted to develop a model that can automatically reconstruct these invaluable artworks in order to preserve them and make them accessible to the public. The approach combines techniques for multi-resolution image analysis with deep CNNs to predict the past appearance of the drawings pixel-wise. The algorithm was trained on a dataset containing reproductions of the original drawings of varying quality, made at different times during the past century.
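The TU Delft architecture and dataset are not reproduced here; as a minimal sketch of the pixel-wise prediction idea, the snippet below trains a small fully convolutional network to map a synthetically "deteriorated" grayscale image back to its target. The fading model, image sizes, and network layers are invented stand-ins, not the study's setup.

```python
# Minimal sketch of pixel-wise image-to-image prediction with a small CNN
# (NOT the TU Delft model). A fully convolutional net maps a "deteriorated"
# grayscale image to a "restored" target; all data here is synthetic.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic training pair: target images and deteriorated versions
# (faded contrast plus noise standing in for ink fading/discoloration).
target = torch.rand(16, 1, 64, 64)                     # "original" drawings
faded = 0.5 * target + 0.1 * torch.randn_like(target)  # "deteriorated" input

# A small fully convolutional network: padding keeps the spatial size, so
# the output is one predicted intensity per input pixel.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):  # tiny full-batch training loop
    opt.zero_grad()
    loss = loss_fn(model(faded), target)
    loss.backward()
    opt.step()

print(f"final pixel-wise MSE: {loss.item():.4f}")
# In the paper's setting, inputs would be multi-resolution features from
# reproductions of a drawing, and targets the best available reproduction.
```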

More information:

21 September 2019

Nanomembrane Wearable Brain Machine Interface

Combining new classes of nanomembrane electrodes with flexible electronics and a deep learning algorithm could help disabled people wirelessly control an electric wheelchair, interact with a computer or operate a small robotic vehicle without donning a bulky hair-electrode cap or contending with wires. By providing a fully portable, wireless brain-machine interface (BMI), the wearable system could offer an improvement over conventional electroencephalography (EEG) for measuring signals from visually evoked potentials in the human brain. The system's ability to measure EEG signals for BMI has been evaluated with six human subjects, but has not yet been studied with disabled individuals. The project was conducted by researchers from the Georgia Institute of Technology, the University of Kent and Wichita State University. BMI is an essential part of rehabilitation technology that allows those with amyotrophic lateral sclerosis (ALS), chronic stroke or other severe motor disabilities to control prosthetic systems. Gathering the brain signals known as steady-state visually evoked potentials (SSVEP) currently requires an electrode-studded hair cap with wet electrodes, adhesives and wires to connect to the computer equipment that interprets the signals.


Researchers are taking advantage of a new class of flexible, wireless sensors and electronics that can be easily applied to the skin. The system includes three primary components: highly flexible, hair-mounted electrodes that make direct contact with the scalp through the hair; an ultrathin nanomembrane electrode; and soft, flexible circuitry with a Bluetooth telemetry unit. The recorded EEG data is processed in the flexible circuitry and then delivered wirelessly via Bluetooth to a tablet computer up to 15 meters away. Beyond the sensing requirements, detecting and analyzing SSVEP signals is challenging because of their low amplitude, in the range of tens of microvolts, which is similar to electrical noise in the body. Researchers must also deal with variation across human brains, yet accurately measuring the signals is essential to determining what the user wants the system to do. To address those challenges, the research team turned to deep learning neural network algorithms running on the flexible circuitry. In addition, the researchers used deep learning models to identify which electrodes are the most useful for gathering information to classify EEG signals.
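The team's deep-learning models are not public here; as a simpler, classical illustration of the SSVEP decoding problem they are solving, the sketch below detects which flicker frequency a simulated EEG segment follows using canonical correlation analysis (CCA), a standard SSVEP baseline rather than the team's method. The sampling rate, candidate frequencies, helper names (reference_signals, cca_corr), and data are all invented for the example.

```python
# Minimal sketch of SSVEP frequency detection via canonical correlation
# analysis (CCA), a classical baseline, NOT the team's deep-learning model.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
fs, dur = 250, 2.0                    # sampling rate (Hz), segment length (s)
t = np.arange(int(fs * dur)) / fs
stim_freqs = [8.0, 10.0, 12.0, 15.0]  # candidate flicker frequencies

# Simulate a noisy 4-channel EEG segment entrained at 10 Hz (a small
# oscillation buried in noise, as the article describes).
true_f = 10.0
eeg = rng.standard_normal((len(t), 4))
eeg += 0.8 * np.sin(2 * np.pi * true_f * t)[:, None]

def reference_signals(f, t, n_harmonics=2):
    """Sin/cos templates at f and its harmonics, shape (len(t), 2*n_harmonics)."""
    cols = []
    for h in range(1, n_harmonics + 1):
        cols += [np.sin(2 * np.pi * h * f * t), np.cos(2 * np.pi * h * f * t)]
    return np.column_stack(cols)

def cca_corr(X, Y):
    """First canonical correlation between the EEG segment and templates."""
    Xc, Yc = CCA(n_components=1).fit_transform(X, Y)
    return np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1]

# Score each candidate frequency and pick the best-correlated one.
scores = {f: cca_corr(eeg, reference_signals(f, t)) for f in stim_freqs}
detected = max(scores, key=scores.get)
print(f"detected flicker frequency: {detected} Hz")  # expected: 10.0 Hz
```

In a BMI of this kind, each detected flicker frequency would map to one command (for example, one wheelchair direction); the deep-learning models described above take the place of the CCA scoring step.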

More information: