31 January 2018

ARKit's Vertical Surface Detection

Thanks to Apple's beta preview of iOS 11.3, released last week, app developers are already experimenting with the ARKit capabilities that will reach regular users this spring. The highlight of ARKit 1.5 is vertical surface detection. The first iteration of ARKit was limited to horizontal surface detection, which constrained what developers could do with room scanning.


Vertical surface detection makes things like posters, shelves, and other wall hangings far more interactive, in addition to floor-dwelling couches, chairs, and tables. The feature could also come in handy in arts and culture, allowing patrons to see works of art beyond a museum's collection. Apple has added detection of irregularly shaped surfaces as well, so developers need no longer be limited by the whimsical designs of architects and furniture makers.
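
A rough Swift sketch of how an app opts in to the new capability, using the plane-detection options ARKit 1.5 introduces in iOS 11.3 (the view controller and scene-view outlet here are illustrative assumptions, not code from Apple):

```swift
import UIKit
import SceneKit
import ARKit

// Minimal sketch: the ARSCNView outlet and its storyboard wiring are assumed.
class WallDetectionViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        sceneView.delegate = self
        let configuration = ARWorldTrackingConfiguration()
        // .vertical is the option ARKit 1.5 (iOS 11.3) adds alongside .horizontal.
        configuration.planeDetection = [.horizontal, .vertical]
        sceneView.session.run(configuration)
    }

    // ARKit calls this when it anchors a newly detected plane (a floor, a table, or now a wall).
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let plane = anchor as? ARPlaneAnchor else { return }
        let orientation = plane.alignment == .vertical ? "vertical" : "horizontal"
        print("Detected a \(orientation) plane, roughly \(plane.extent.x) x \(plane.extent.z) m")
    }
}
```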

More information:

28 January 2018

No Link Between Violent Video Games and Behavior

Researchers at the University of York have found no evidence to support the theory that video games make players more violent. In a series of experiments with more than 3,000 participants, the team demonstrated that video game concepts do not 'prime' players to behave in certain ways and that increasing the realism of violent video games does not necessarily increase aggression in players. The dominant model of learning in games is built on the idea that exposing players to concepts, such as violence in a game, makes those concepts easier to use in 'real life'. This is known as 'priming' and is thought to lead to changes in behaviour. Previous experiments on this effect, however, have so far produced mixed conclusions. The York researchers recruited more participants than earlier studies and compared different types of gaming realism to explore whether more conclusive evidence could be found. In one study, participants played a game in which they had to be either a car avoiding collisions with trucks or a mouse avoiding being caught by a cat. After playing, they were shown various images, such as a bus or a dog, and asked to label them as either a vehicle or an animal. In a separate study, the team investigated whether realism influenced the aggression of game players.


Research in the past has suggested that the greater the realism of the game, the more primed players are by violent concepts, leading to antisocial effects in the real world. The experiment compared player reactions to two combat games, one that used 'ragdoll physics' to create realistic character behaviour and one that did not, both set in an animated world that nevertheless looked real. Following the game, the players were asked to complete word puzzles called 'word fragment completion tasks', where researchers expected more violent word associations would be chosen by those who had played the game with the more realistic behaviours. They compared the results of this experiment with another test of game realism, in which a single bespoke war game was modified to form two different games: in one, enemy characters used realistic soldier behaviours, while in the other they did not. There was no difference in priming between the game that employed 'ragdoll physics' and the game that did not, and no significant difference between the games that used 'real' and 'unreal' soldier tactics. The findings suggest that there is no link between these kinds of realism in games and the effects that video games are commonly thought to have on their players.

More information:

27 January 2018

Cerebellum Influences Our Thoughts and Emotions

Most neuroscientists have considered the cerebellum (Latin for “little brain”) to have the relatively simple job of overseeing muscle coordination and balance. However, new findings show that the cerebellum is probably responsible for much, much more, including the fine-tuning of our deepest thoughts and emotions. A recent report describes the trials and triumphs of a man named Jonathan Kelcher, who was born without a cerebellum.


His case study could open many new windows onto how the brain and mind work by revealing what the very mysterious cerebellum is actually doing. Conventionally, neuroscientists have given the cerebellum little credit for a role in higher executive functions, cognition, psychiatric disorders, or emotional regulation. Luckily, this outdated viewpoint is rapidly evolving.

More information:

20 January 2018

Gain Control of What Is in Our Minds

Working memory is a sort of mental sketchpad that allows you to accomplish everyday tasks. It also allows your mind to go from merely responding to your environment to consciously asserting your agenda. In a new study, researchers at MIT's Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences show that the underlying mechanism depends on different frequencies of brain rhythms synchronizing neurons in distinct layers of the prefrontal cortex (PFC), the area of the brain associated with higher cognitive function. As animals performed a variety of working memory tasks, higher-frequency gamma rhythms in superficial layers of the PFC were regulated by lower-frequency alpha/beta rhythms in deeper cortical layers. The findings suggest not only a general model of working memory, and of the volition that makes it special, but also new ways that clinicians might investigate conditions such as schizophrenia in which working memory appears compromised.


The current study benefited from newly improved multilayer electrode brain sensors that few groups have applied in cognitive, rather than sensory, areas of the cortex. The researchers realized that they could use them to determine whether deep alpha/beta and superficial gamma rhythms interact to enable volitional control of working memory. In the lab, they made multilayer measurements in six areas of the PFC as animals performed three different working memory tasks. In one type of task, animals had to hold a picture in working memory and subsequently choose a picture that matched it. In another, the animals had to remember the screen location of a briefly flashed dot. Overall, the tasks asked the subjects to store, process, and then discard from working memory the appearance or the position of visual stimuli. With these insights, the team has since worked to test this multilayer, multi-frequency model of working memory dynamics more directly, with results in press but not yet published.

More information:

19 January 2018

VR Volumetric Photogrammetry

For consumers, VR generally means strapping on a head-mounted display (HMD), stepping into a new world and enjoying the experience. The enveloping nature of VR allows people to explore environments in 360 degrees, but for most, how these immersive worlds are created is a mystery. Though VR is still in its infancy, traditional methods of capturing and transforming footage have emerged. Typically, to shoot 360-degree VR content, a camera operator employs several cameras rigged in a spherical formation to capture the scene. Each camera is mounted at a specific angle so that its field of view overlaps portions of the surrounding cameras’ fields of view. With the overlap, editors should be able to produce seamless footage without any gaps, as sketched below. Alternatively, professional 360-degree cameras can be purchased, but these look and function more or less the same as hand-rigged apparatuses. Once filming is completed, editors stitch the footage together, creating a unified, continuous experience.
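
As a loose illustration of the overlap arithmetic behind such rigs, here is a sketch in Swift; the field-of-view and overlap figures are hypothetical, not taken from the article:

```swift
import Foundation

// Rough rig-planning arithmetic: each camera contributes its field of view minus the
// overlap it shares with a neighbour, and the ring of cameras must cover 360 degrees.
func camerasNeededForRing(fieldOfViewDegrees: Double, overlapDegrees: Double) -> Int {
    let uniqueCoveragePerCamera = fieldOfViewDegrees - overlapDegrees
    precondition(uniqueCoveragePerCamera > 0, "overlap must be smaller than the field of view")
    return Int((360.0 / uniqueCoveragePerCamera).rounded(.up))
}

// Hypothetical example: 120-degree lenses overlapping their neighbours by 30 degrees -> 4 cameras.
print("Cameras for one horizontal ring:", camerasNeededForRing(fieldOfViewDegrees: 120, overlapDegrees: 30))
```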


In addition to camera formation, camera placement also plays a major role in the end result of a particular piece of immersive content. Depending on what the content creator wants the consumer to experience, camera placement will vary. Though the creative direction will ultimately determine placement, it is important to note that even with several rigs placed throughout a set, this method produces a relatively static outcome. Volumetric photogrammetry could hold the key to the future of VR. Unlike the method described above, volumetric VR has no takes or shots that are later edited in post-production. This allows for a much more fluid experience, as the consumer frames the scene and chooses his or her own perspective. Using the volumetric capture method, footage of a real person is recorded from various viewpoints, after which software analyzes, compresses and recreates all the viewpoints of a fully volumetric 3D human. Photogrammetry’s defining characteristic is the principle of triangulation: each feature is observed from at least two known viewpoints, and intersecting the corresponding lines of sight locates that feature in 3D space.
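
To make the triangulation idea concrete, here is a minimal two-viewpoint sketch in Swift using simd; the midpoint method shown is one common formulation and an illustrative assumption, not the pipeline of any particular volumetric-capture product:

```swift
import simd

// A viewpoint: the camera's position and a normalized ray through the matched image feature.
struct Viewpoint {
    let center: SIMD3<Double>
    let ray: SIMD3<Double>
}

// Midpoint triangulation: find the 3D point closest to both viewing rays.
// Returns nil when the rays are (nearly) parallel and carry no depth information.
func triangulate(_ v1: Viewpoint, _ v2: Viewpoint) -> SIMD3<Double>? {
    let w0 = v1.center - v2.center
    let a = dot(v1.ray, v1.ray), b = dot(v1.ray, v2.ray), c = dot(v2.ray, v2.ray)
    let d = dot(v1.ray, w0), e = dot(v2.ray, w0)
    let denom = a * c - b * b
    guard abs(denom) > 1e-9 else { return nil }
    let s = (b * e - c * d) / denom   // distance along the first ray
    let t = (a * e - b * d) / denom   // distance along the second ray
    let p1 = v1.center + s * v1.ray
    let p2 = v2.center + t * v2.ray
    return (p1 + p2) / 2.0            // midpoint of the closest approach between the rays
}
```

With more than two viewpoints, the same idea extends to a least-squares intersection of many rays, which is how dense photogrammetric reconstructions typically recover each surface point.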

More information: