27 October 2023

Neural Activity Suppressed During Zoom Conversations

A new study reveals a significant difference in neural activity between face-to-face conversations and Zoom interactions. Using neuroimaging, researchers observed suppressed neural signals during online exchanges.

In contrast, in-person discussions elicited heightened brain activity and more coordinated neural responses between participants, underscoring the richness of live social interaction. The research suggests that, with present technology, faces seen online do not engage our social neural circuits as effectively as faces seen in person.

More information:

https://neurosciencenews.com/zoom-conversations-social-neuroscience-24996/

26 October 2023

Movement Stability Improved by Robotic Prosthetic Ankle

A new study demonstrated that neural control of a powered prosthetic ankle can restore a range of abilities, including standing on challenging surfaces and squatting. Researchers worked with five people who had below-knee amputations on one leg. Participants were fitted with a prototype robotic prosthetic ankle that responds to electromyographic (EMG) signals picked up by sensors on the leg. The researchers gave participants general training with the prototype device so that they were somewhat familiar with the technology.
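
The summary above does not spell out the control scheme, so as a rough illustration only, the Python sketch below shows one common way EMG signals can drive a powered ankle: proportional myoelectric control over an antagonist muscle pair. All channel names, gains, and limits here are hypothetical, not values from the study.

# Minimal sketch of proportional myoelectric control for a powered ankle.
# This is NOT the study's controller; the muscle channels, gains, and
# torque limit below are hypothetical, chosen only to illustrate mapping
# residual-limb EMG activity to an ankle torque command.

import numpy as np

# Hypothetical calibration values (would be measured per participant).
REST_LEVEL = 0.05      # baseline EMG envelope at rest
MAX_LEVEL = 1.0        # EMG envelope at maximum voluntary contraction
MAX_TORQUE_NM = 90.0   # plantarflexion torque limit of the prototype

def emg_envelope(raw_window: np.ndarray) -> float:
    """Rectify and average a short window of raw EMG samples."""
    return float(np.mean(np.abs(raw_window)))

def normalize(envelope: float) -> float:
    """Scale the envelope to [0, 1] between rest and max contraction."""
    scaled = (envelope - REST_LEVEL) / (MAX_LEVEL - REST_LEVEL)
    return float(np.clip(scaled, 0.0, 1.0))

def ankle_torque_command(gastroc_win: np.ndarray, tibialis_win: np.ndarray) -> float:
    """Antagonist-pair proportional control: plantarflexor activity drives
    positive (plantarflexion) torque, dorsiflexor activity negative torque."""
    plantar = normalize(emg_envelope(gastroc_win))
    dorsi = normalize(emg_envelope(tibialis_win))
    return (plantar - dorsi) * MAX_TORQUE_NM

# Example: a burst of plantarflexor activity with a quiet antagonist.
rng = np.random.default_rng(0)
gastroc = 0.6 * rng.standard_normal(200)    # active muscle, larger amplitude
tibialis = 0.08 * rng.standard_normal(200)  # nearly silent antagonist
print(f"commanded torque: {ankle_torque_command(gastroc, tibialis):.1f} N·m")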

Study participants were then tasked with responding to an expected perturbation, that is, to something that might throw off their balance. To replicate the conditions precisely over the course of the study, the researchers built a mechanical system designed to challenge participants' stability. Participants responded to the expected perturbation under two conditions: using their usual prosthetic devices and using the robotic prototype. Results showed that participants were significantly more stable when using the robotic prototype.

More information:

https://news.ncsu.edu/2023/10/robotic-ankles-move-naturally/

21 October 2023

3D Holographic Displays Based on Deep Learning

A team of researchers from Chiba University has proposed a novel deep-learning approach that streamlines hologram generation by producing 3D images directly from regular 2D color images captured with ordinary cameras. The approach employs three deep neural networks (DNNs) to transform a 2D color image into data that can be used to display a 3D scene or object as a hologram. The first DNN takes the color image as input and predicts the associated depth map, providing information about the 3D structure of the scene. The second DNN then uses both the original RGB image and the depth map produced by the first DNN to generate a hologram.

The third DNN refines the hologram generated by the second DNN, making it suitable for display on different devices. The researchers found that their approach processed data and generated holograms faster than a state-of-the-art graphics processing unit. The approach could soon find applications in heads-up and head-mounted displays that generate high-fidelity 3D images. It could likewise transform in-vehicle holographic head-up displays, presenting information about people, roads, and signs to passengers in 3D. The proposed approach is thus expected to pave the way toward ubiquitous holographic technology.
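
As a rough illustration of the data flow only, the PyTorch sketch below wires up three toy networks in the same three-stage arrangement: RGB image to depth map, RGB plus depth to hologram, hologram to refined hologram. The layer choices and module names are hypothetical stand-ins, not the paper's actual architectures.

# Minimal sketch of the three-stage pipeline described above, in PyTorch.
# The networks here are toy stand-ins; only the data flow matches the
# description: RGB -> depth map -> hologram -> refined hologram.

import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """A small convolutional stage shared by all three toy networks."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class DepthNet(nn.Module):
    """DNN 1: predict a single-channel depth map from an RGB image."""
    def __init__(self):
        super().__init__()
        self.body = conv_block(3, 32)
        self.head = nn.Conv2d(32, 1, kernel_size=1)
    def forward(self, rgb):
        return self.head(self.body(rgb))

class HologramNet(nn.Module):
    """DNN 2: generate a hologram from RGB + depth (4 input channels)."""
    def __init__(self):
        super().__init__()
        self.body = conv_block(4, 32)
        self.head = nn.Conv2d(32, 1, kernel_size=1)
    def forward(self, rgb, depth):
        return self.head(self.body(torch.cat([rgb, depth], dim=1)))

class RefineNet(nn.Module):
    """DNN 3: refine the hologram for display (residual correction)."""
    def __init__(self):
        super().__init__()
        self.body = conv_block(1, 32)
        self.head = nn.Conv2d(32, 1, kernel_size=1)
    def forward(self, holo):
        return holo + self.head(self.body(holo))

# Wire the three stages together on a dummy 2D color image.
depth_net, holo_net, refine_net = DepthNet(), HologramNet(), RefineNet()
rgb = torch.rand(1, 3, 128, 128)     # ordinary camera image (batch of 1)
depth = depth_net(rgb)               # stage 1: depth estimation
hologram = holo_net(rgb, depth)      # stage 2: hologram generation
refined = refine_net(hologram)       # stage 3: display refinement
print(refined.shape)                 # torch.Size([1, 1, 128, 128])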

More information:

https://www.cn.chiba-u.jp/en/news/press-release_e231018/