27 March 2019

Tarsier Goggles

Tarsier Goggles, developed at Dartmouth College, simulates a tarsier's vision and illustrates the adaptive advantage of this animal's oversized eyes. The open-access software features three virtual learning environments: Matrix, Labyrinth, and Bornean Rainforest. Bornean tarsiers have protanopia, a form of red-green colorblindness. In the virtual Bornean Rainforest, users move through the forest as a tarsier, leaping and clinging to trees in a dark, maze-like space that is practically opaque under human visual conditions yet navigable with tarsier vision, demonstrating the advantage of the animal's visual sensitivity.


Tarsier Goggles was built in Unity3D with SteamVR for the HTC Vive Pro and coded in C#. The Virtual Reality Toolkit provided functionality such as teleportation, many of the visual effects used Unity's built-in post-processing stack, and the 3D assets were modeled in Maya. All of the visual assets and interactions were created from scratch by the DALI team, following the lab's collaborative, human-centered design approach. Tarsier Goggles illustrates how virtual reality can be applied to science education, giving students a fun, interactive way to explore complex concepts.
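Since the experience hinges on rendering the scene as seen through protanopic eyes, one common way to approximate protanopia in graphics code is a linear transform of the RGB channels. The C# sketch below is not taken from the Tarsier Goggles source; the class name and structure are hypothetical, and it simply illustrates the general idea using a commonly cited approximation matrix. In a Unity project such a transform would typically be applied per pixel in a post-processing shader rather than on the CPU.

```csharp
// Minimal sketch (not from the Tarsier Goggles codebase): simulating
// protanopia with a linear transform of the RGB colour channels, using
// a commonly cited approximation matrix.
using System;

public static class ProtanopiaFilter
{
    // Rows map the (R, G, B) input to the simulated (R', G', B') output.
    private static readonly double[,] Matrix =
    {
        { 0.567, 0.433, 0.000 },
        { 0.558, 0.442, 0.000 },
        { 0.000, 0.242, 0.758 }
    };

    public static (double R, double G, double B) Apply(double r, double g, double b)
    {
        return (
            Matrix[0, 0] * r + Matrix[0, 1] * g + Matrix[0, 2] * b,
            Matrix[1, 0] * r + Matrix[1, 1] * g + Matrix[1, 2] * b,
            Matrix[2, 0] * r + Matrix[2, 1] * g + Matrix[2, 2] * b
        );
    }

    public static void Main()
    {
        // A saturated red and a saturated green map to similar muted hues,
        // which is the red-green confusion characteristic of protanopia.
        Console.WriteLine(Apply(1.0, 0.0, 0.0)); // roughly (0.57, 0.56, 0.00)
        Console.WriteLine(Apply(0.0, 1.0, 0.0)); // roughly (0.43, 0.44, 0.24)
    }
}
```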

More information:

26 March 2019

Quantum Simulator

One example of a complex quantum system is a magnet held at extremely low temperatures. Close to absolute zero (-273.15 degrees Celsius), magnetic materials may undergo what is known as a quantum phase transition. As in a conventional phase transition (e.g. ice melting into water, or water evaporating into steam), the system switches between two states, except that close to the transition point it exhibits quantum entanglement, one of the most profound features predicted by quantum mechanics. Studying this phenomenon in real materials is an extraordinarily challenging task for experimental physicists.


Physicists at EPFL have now come up with a quantum simulator that promises to solve the problem. The simulator is a simple photonic device that can readily be built and run with current experimental techniques. More importantly, it can simulate the complex behavior of real, interacting magnets at very low temperatures. The simulator may be built using superconducting circuits, the same technological platform used in modern quantum computers. The circuits are coupled to laser fields in such a way that they produce an effective interaction among light particles (photons).
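To make the idea of a quantum phase transition concrete, a standard textbook example (used here only as an illustration, not necessarily the exact model the EPFL simulator targets) is the transverse-field Ising model:

```latex
% Transverse-field Ising model: J is the coupling between neighbouring
% spins, h is the transverse magnetic field, and \sigma^{x,z}_i are Pauli
% operators acting on the spin at site i.
H = -J \sum_{\langle i,j \rangle} \sigma^{z}_{i}\,\sigma^{z}_{j}
    \;-\; h \sum_{i} \sigma^{x}_{i}
```

At zero temperature the ground state of this model changes character as the ratio h/J is tuned across a critical value (h/J = 1 for the one-dimensional chain), and entanglement between the spins is strongest near that point, precisely the regime that is so hard to probe in real materials.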
More information:

24 March 2019

Robotic 'Gray Goo'

The concept of 'gray goo', a robot composed of billions of nanoparticles, has fascinated science fiction fans for decades, but most researchers have dismissed it as just a wild theory. Current robots are usually self-contained entities made of interdependent subcomponents, each with a specific function; if one part fails, the robot stops working. In robotic swarms, each robot is an independently functioning machine. In a new study, researchers at Columbia Engineering and the MIT Computer Science & Artificial Intelligence Lab (CSAIL) demonstrate for the first time a way to make a robot composed of many loosely coupled components, or particles.


Unlike swarm or modular robots, each component is simple and has no individual address or identity. In their system, which the researchers call a particle robot, each particle can perform only uniform volumetric oscillations (slightly expanding and contracting) and cannot move independently. The team discovered that when they grouped thousands of these particles into a sticky cluster and made them oscillate in reaction to a light source, the entire particle robot slowly began to move forward, toward the light. The robot has no single point of failure and no centralized control.
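The per-particle control rule can be sketched in a few lines of code. The C# below is not the authors' implementation: the class names, units, and the mapping from sensed light intensity to oscillation phase are assumptions made for illustration. It also deliberately omits the contact forces between neighbouring particles, so it does not reproduce the emergent drift of the cluster; the point is only that each identical particle senses light locally and derives its expand/contract timing from that alone, with no addressing and no central controller.

```csharp
// Illustrative sketch (not the published system): each particle performs
// only a uniform volumetric oscillation, with its phase in the shared
// cycle derived purely from the light intensity it senses locally.
using System;

public class Particle
{
    private const double MinRadius = 1.0;   // fully contracted (arbitrary units)
    private const double MaxRadius = 1.5;   // fully expanded
    private readonly double phase;          // set once from local sensing

    public Particle(double sensedLightIntensity)
    {
        // Assumed mapping for illustration: brighter light -> earlier phase.
        phase = 2.0 * Math.PI * (1.0 - Math.Clamp(sensedLightIntensity, 0.0, 1.0));
    }

    // Radius at time t: a uniform expansion/contraction cycle, nothing more.
    public double Radius(double t)
    {
        double s = 0.5 * (1.0 + Math.Sin(t + phase));
        return MinRadius + (MaxRadius - MinRadius) * s;
    }
}

public static class Demo
{
    public static void Main()
    {
        // Three particles at different distances from the light source
        // end up out of phase with one another despite running identical code.
        var near = new Particle(0.9);
        var mid  = new Particle(0.5);
        var far  = new Particle(0.1);

        for (double t = 0.0; t < 2.0 * Math.PI; t += Math.PI / 4.0)
        {
            Console.WriteLine(
                $"t={t:F2}  near={near.Radius(t):F2}  mid={mid.Radius(t):F2}  far={far.Radius(t):F2}");
        }
    }
}
```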

More information:

23 March 2019

VR Could Improve Your Balance

Vision strongly affects our ability to keep our balance, and balance affects our ability to move around. People with long-term dizziness often rely heavily on their vision and fail to use the very quick and effective balance information provided by sensory input from joints and muscles. VR could become an efficient tool for older people with balance problems, or for rehabilitation following injuries or illnesses that affect balance and movement. In a new study, researchers examined how the human balance system is affected by watching VR videos. Twenty healthy women and men took part in the study, watching a VR simulation of a roller-coaster ride while standing on a platform that registered their postural stability.


The researchers investigated how the participants' balance system was affected when visual information was disrupted by the experience of being in a VR environment that gave them a strong sensation of movement. The study shows that the human balance system can very quickly stop relying on vision and instead use other senses, such as sensory information from the feet, joints and muscles, to increase postural stability. Differences also emerged in how men and women were affected by watching a VR video: more women had difficulty maintaining their balance in the VR environment, and they generally needed more practice before they learned to use their other senses to increase postural stability.

More information:

17 March 2019

Facebook Reality Labs Creates Realistic Avatars

Facebook Reality Labs (FRL) believes AR and VR will be the primary way people work, play, and connect in the future. Its Pittsburgh office is using what it calls groundbreaking 3D capture technology and AI systems to generate lifelike virtual avatars, dubbed 'Codec Avatars', which could provide the basis of a quick and easy personal avatar creator of the future. The company says that, at this point, these sorts of real-time, photorealistic avatars require a substantial amount of equipment to achieve. The lab's two capture studios, one for the face and one for the body, are admittedly both large and impractical for now.


The ultimate goal, however, is to achieve all of this through lightweight headsets. For now, FRL Pittsburgh uses its own prototype Head Mounted Capture systems (HMCs), equipped with cameras, accelerometers, gyroscopes, magnetometers, infrared lighting, and microphones, to capture the full range of human expression. Using a small group of participants, the lab captures 1GB of data per second in an effort to create a database of physical traits. In the future, the hope is that consumers will be able to create their own avatars without a capture studio and without much data either.

More information: