21 April 2020

Insightful Ideas Can Trigger Orgasmic Brain Signals

Coming up with a great insight can produce pleasure akin to an orgasm, according to researchers. The eureka moment triggers neural reward signals that can flood some people with pleasure, suggesting it is an evolutionary adaptation that fuels the growth of creativity. A recent neuroimaging study from Drexel University found that the brain reward systems of people with higher reward-sensitivity ratings showed bursts of gamma-band EEG activity when they had creative insights. These signals resemble those produced by pleasure-inducing experiences such as orgasms, great food, or drinks that quench thirst. To carry out the study, the scientists used high-density electroencephalography (EEG) to track the brain activity of participants solving anagram puzzles.


The subjects were required to unscramble letters to figure out a hidden word. When they had an aha moment of insight and found the solution, they pressed a button while the EEG captured a snapshot of their brain activity. Another part of the study involved filling out a questionnaire designed to gauge each person's reward sensitivity. The scientists found that people scoring high on this measure had very powerful aha moments: their recordings showed an extra burst of high-frequency gamma waves in the orbitofrontal cortex, part of the brain's reward system. People who scored low on reward sensitivity did not exhibit such bursts; the researchers wrote that these participants noticed their eureka moments, but the moments lacked hedonic content.
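The study's analysis pipeline is not public, but its core measurement, a burst of gamma-band power around the moment of insight, can be approximated with standard signal processing. Below is a minimal Python sketch assuming a single EEG channel sampled at 500 Hz and a known button-press time; the 30-80 Hz band, window lengths, and burst threshold are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 500.0  # sampling rate in Hz (assumed)

def gamma_power(eeg, low=30.0, high=80.0):
    """Band-pass the signal to the gamma band and return instantaneous power."""
    b, a = butter(4, [low / (FS / 2), high / (FS / 2)], btype="band")
    return filtfilt(b, a, eeg) ** 2

def burst_at_insight(eeg, press_time_s, window_s=0.5, threshold=3.0):
    """Compare gamma power just before the button press with an earlier
    baseline window; a crude stand-in for the study's burst detection."""
    power = gamma_power(eeg)
    press, win = int(press_time_s * FS), int(window_s * FS)
    baseline = power[max(0, press - 4 * win):press - win].mean()
    insight = power[press - win:press].mean()
    return insight > threshold * baseline

# Hypothetical usage: white noise with a gamma burst injected before the "press".
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / FS)
eeg = rng.normal(size=t.size)
eeg[int(7.5 * FS):int(8.0 * FS)] += 5 * np.sin(2 * np.pi * 45 * t[:int(0.5 * FS)])
print(burst_at_insight(eeg, press_time_s=8.0))  # expect True
```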

More information:

14 April 2020

MIT Tries Hacking Your Dreams

A team of researchers at MIT's Dream Lab, which launched in 2017, is working on an open-source wearable device that can track and interact with dreams in a number of ways, including, hopefully, giving you new control over the content of your dreams. The team's radical goal is to prove once and for all that dreams aren't just meaningless gibberish, but can be hacked, augmented, and swayed to our benefit. A glove-like device called Dormio, developed by the Dream Lab team, is outfitted with a host of sensors that can detect which sleep state the wearer is in. When the wearer slips into hypnagogia, a state between conscious and subconscious, the glove plays a pre-recorded audio cue, most often consisting of a single word. Hypnagogia is a normal state of consciousness in the transition from wakefulness to sleep, and it can differ from person to person. Some people say they have woken from hypnagogia reporting strong visual and auditory hallucinations; others are able to interact with someone else while in the state. The Dream Lab might be on to something with its Dormio glove.


In a 50-person experiment, the glove was able to insert a tiger into people's sleep by playing a prerecorded message consisting simply of the word tiger. The device is meant to democratize the science of tracking sleep: step-by-step instructions have been posted online, with the biosignal-tracking software available on GitHub, allowing anyone, in theory, to build their own Dormio glove. A similar device built by a Dream Lab researcher relies on smell rather than an audio cue. It releases a preset scent when the user reaches the N3 stage of sleep, a regenerative period when the body heals itself and consolidates memory; the idea is to strengthen this consolidation using scents. The researchers also hope to let sleepers take full control of their dreams through lucid dreaming. The problem, however, is that the science behind lucid dreaming is still murky: only an estimated one percent of people can enter this state regularly, making it difficult to study, and the brain state during lucid dreaming is not yet well understood. But other researchers are convinced there is plenty to gain from learning from our subconscious, rather than commanding it with prerecorded messages or scents.
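The Dormio software itself is on GitHub, but as a hypothetical illustration of the control loop the article describes, watching biosignals for sleep onset and then playing a cue, consider the Python sketch below. The sensor simulation, the heart-rate and muscle-tone thresholds, and the print-based cue are all assumptions, not Dormio's actual logic.

```python
import time

# Hypothetical thresholds for flagging hypnagogia onset; the real glove
# fuses several biosignals (heart rate, EDA, muscle tone) with tuned logic.
HEART_RATE_MAX = 60.0   # bpm: heart rate slows at sleep onset
MUSCLE_TONE_MAX = 0.2   # normalized EMG: muscles relax at sleep onset

def read_biosignals(t):
    """Simulated sensor read: signals drift downward as the wearer dozes off.
    On real hardware this would be replaced by I/O from the glove's sensors."""
    return 70.0 - 2.0 * t, max(0.0, 0.5 - 0.06 * t)

def entering_hypnagogia(heart_rate, muscle_tone):
    """Crude rule: hypnagogia once both signals drop below their thresholds."""
    return heart_rate < HEART_RATE_MAX and muscle_tone < MUSCLE_TONE_MAX

def play_cue(word="tiger"):
    """Stand-in for playing the pre-recorded audio cue."""
    print(f"[cue] playing audio: {word!r}")

# Poll the (simulated) sensors and fire the cue once at sleep onset.
cued = False
for t in range(12):
    heart_rate, muscle_tone = read_biosignals(t)
    if not cued and entering_hypnagogia(heart_rate, muscle_tone):
        play_cue("tiger")
        cued = True  # one cue per sleep onset
    time.sleep(0.1)  # would be a longer polling interval in practice
```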

More information:

08 April 2020

FundamentalVR Surgical Training Platform

Accredited educational simulations for medical professionals from London- and Boston-based FundamentalVR are coming to standalone VR headsets for the first time. FundamentalVR, whose surgical training simulations are accredited for professional development by the likes of the Royal College of Surgeons of England, has upgraded its Fundamental Surgery platform with @HomeVR, which will be available on headsets such as the Oculus Quest. 


@HomeVR extends the Fundamental Surgery VR learning platform, letting surgeons experience the same sights, sounds and feelings they would in a real procedure through haptic feedback. Utilising PCs or laptops, a VR headset and haptic arms, the platform enables surgeons to hone and rehearse skills in a safe and measurable environment. A single user login provides a consistent experience, with the same high-fidelity graphics, educational content and data-tracking capabilities across each modality.

More information:

07 April 2020

Indoor Robot Navigation Among Humans

To tackle the tasks they are designed for, mobile robots should be able to navigate real-world environments efficiently, avoiding humans and other obstacles in their surroundings. While static objects are typically fairly easy for robots to detect and circumvent, avoiding humans is more challenging, as it entails predicting their future movements and planning accordingly. Researchers at the University of California, Berkeley, have recently developed a new framework that could enhance robot navigation among humans in indoor environments such as offices, homes or museums. The framework, dubbed LB-WayPtNav-DH, has three key components: a perception module, a planning module, and a control module. The perception module is based on a convolutional neural network (CNN) trained with supervised learning to map the robot's visual input to a waypoint. The waypoint produced by the CNN is then fed to the planning and control modules, which together ensure that the robot moves to its target location safely, avoiding any obstacles and humans in its surroundings.
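The authors' code is not reproduced here, but the three-module pipeline the article describes can be sketched in a few lines. In the hypothetical PyTorch sketch below, the network architecture, the waypoint format (a relative x, y, heading target), and the planning/control stub are illustrative assumptions rather than the paper's actual implementation.

```python
import torch
import torch.nn as nn

class WaypointCNN(nn.Module):
    """Perception module: maps a camera frame plus the relative goal to a
    waypoint. Layer sizes are placeholders, not the paper's architecture."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32 + 2, 3)  # features + goal -> (x, y, theta)

    def forward(self, image, goal_xy):
        return self.head(torch.cat([self.features(image), goal_xy], dim=1))

def plan_and_control(waypoint):
    """Planning/control stub: the real system fits a smooth trajectory to the
    waypoint and tracks it with a feedback controller."""
    x, y, theta = waypoint.tolist()
    print(f"steering toward waypoint x={x:.2f}, y={y:.2f}, theta={theta:.2f}")

# One perception-planning-control step on dummy inputs.
net = WaypointCNN()
image = torch.rand(1, 3, 224, 224)   # robot's camera frame
goal = torch.tensor([[2.0, 1.0]])    # goal position relative to the robot
plan_and_control(net(image, goal)[0])
```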


The researchers trained their CNN on images from a dataset they compiled, dubbed HumANav. HumANav contains photorealistic rendered images of simulated building environments in which humans are moving around, adapted from another dataset called SURREAL. These images portray 6,000 walking, textured human meshes, arranged by body shape, gender and velocity. The researchers evaluated LB-WayPtNav-DH in a series of experiments, both in simulation and in the real world. In the real-world experiments, they deployed it on a Turtlebot 2, a low-cost mobile robot with open-source software. They report that the framework generalizes well to unseen buildings, effectively circumventing humans in both simulated and real-world environments, and that policies developed in simulation have so far transferred to the real world remarkably well. The framework could ultimately be applied to a variety of mobile robots, enhancing their navigation in indoor environments. In future studies, the researchers plan to train it on images of more complex or crowded environments, and to broaden their training dataset to include a more diverse set of images.
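Since the perception module is trained with supervised learning, training reduces to regressing predicted waypoints against expert-provided ones. Continuing the hypothetical sketch above (it assumes the WaypointCNN class defined there), one epoch over HumANav-style (image, goal, expert waypoint) batches could look roughly like this; the MSE loss and optimizer settings are assumptions, not the paper's choices.

```python
import torch
import torch.nn as nn

net = WaypointCNN()  # hypothetical perception network from the sketch above
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()  # regress predicted waypoints onto expert waypoints

def train_epoch(batches):
    """One pass over (image, relative_goal, expert_waypoint) batches."""
    for image, goal, expert_waypoint in batches:
        loss = loss_fn(net(image, goal), expert_waypoint)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Dummy stand-in for a HumANav-style loader: 8 random batches of 4 samples.
dummy = [(torch.rand(4, 3, 224, 224), torch.rand(4, 2), torch.rand(4, 3))
         for _ in range(8)]
train_epoch(dummy)
```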

More information: