Artificial intelligence is being developed for a wide
range of assistive technology tools, from prosthetic hands to better hearing
aids. Deep learning models can provide a synthesized voice for individuals with
impaired speech, help the blind see, and translate sign language into text. One
reason assistive device developers turn to deep learning is that it works
well for decoding noisy signals, such as electrical activity from the brain.
A deep learning neural decoder (the algorithm that translates neural activity
into intended command signals) was trained on an NVIDIA Quadro GPU, using brain
signals from scripted sessions with Burkhart in which he was asked to think
about executing specific hand motions. The neural network learned which brain
signals corresponded to
which desired movements. However, a key challenge in creating robust neural
decoding systems is that brain signals vary from day to day.
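To make the training setup concrete, here is a minimal sketch of how such a decoder might be trained. It assumes preprocessed windows of neural features paired with the movement the participant was cued to imagine; the PyTorch architecture, the 96-channel feature count, and the four movement classes are illustrative assumptions, not details of Burkhart's actual system.

```python
# A minimal sketch of a supervised neural decoder: a classifier mapping a
# window of brain-signal features to an intended-movement class. Layer sizes,
# channel count, and class count are hypothetical placeholders.
import torch
import torch.nn as nn

NUM_CHANNELS = 96    # hypothetical electrode-array feature count
NUM_MOVEMENTS = 4    # hypothetical set of hand motions to decode

class NeuralDecoder(nn.Module):
    """Maps one window of neural features to intended-movement scores."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_CHANNELS, 128),
            nn.ReLU(),
            nn.Linear(128, NUM_MOVEMENTS),
        )

    def forward(self, x):
        return self.net(x)  # raw class scores (logits)

decoder = NeuralDecoder()
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a batch from a scripted session: recorded features
# paired with the hand motion the participant was asked to think about.
features = torch.randn(32, NUM_CHANNELS)          # stand-in for recorded signals
labels = torch.randint(0, NUM_MOVEMENTS, (32,))   # stand-in for cued motions

optimizer.zero_grad()
loss = loss_fn(decoder(features), labels)
loss.backward()
optimizer.step()
```

In a real system the features and model would be far richer, and because brain signals drift from day to day, a decoder like this would need periodic recalibration on fresh sessions to stay accurate.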