26 September 2020

AI Getting Smarter

Of all the AI models in the world, OpenAI’s GPT-3 has most captured the public’s imagination. It can spew poems, short stories, and songs with little prompting, and has been shown to fool people into thinking its outputs were written by a human. But its eloquence is more of a parlor trick than a sign of real intelligence. Nonetheless, researchers believe that the techniques used to create GPT-3 could contain the secret to more advanced AI. GPT-3 was trained on an enormous amount of text data. What if the same methods were trained on both text and images?

Now new research from the Allen Institute for Artificial Intelligence (AI2) has taken this idea to the next level. The researchers have developed a new text-and-image model, otherwise known as a visual-language model, that can generate images from a caption. The images look unsettling and freakish, nothing like the hyper-realistic deepfakes generated by GANs, but they may demonstrate a promising new direction for achieving more generalizable intelligence, and perhaps smarter robots as well.

More information:


23 September 2020

Visual Part of Brain Keeps Hidden Thoughts

A recent study led by UNSW psychologists has mapped what happens in the brain when a person tries to suppress a thought. The neuroscientists managed to decode the complex brain activity using functional brain imaging (fMRI) and a pattern-decoding algorithm. The findings suggest that even when a person succeeds in ignoring a thought, like the proverbial pink elephant, it can still exist in another part of the brain without them being aware of it. This suggests that mental images can form even when we are trying to stop them. Participants were given a written prompt (either green broccoli or a red apple) and challenged not to think of it. To make the task even harder, they were asked not to replace the image with another thought. After 12 seconds, participants reported whether they had successfully suppressed the image or whether the thought suppression had failed.

Eight participants were confident they had successfully suppressed the images of the red apple or green broccoli, but their brain scans told a different story. Each time a thought occurred, neurons fired and drew oxygenated blood into the surrounding tissue. This change in blood oxygenation, measured by the fMRI machine, created spatial patterns in the brain. The researchers decoded these spatial patterns using an algorithm called multivoxel pattern analysis (MVPA), which could distinguish the brain patterns evoked by the vegetable and fruit prompts. The scans showed that participants used the left side of their brains to come up with the thought, and the right side to try to suppress it.
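To give a sense of how MVPA-style decoding works, here is a minimal sketch: a linear classifier is trained to tell apart the spatial activity patterns associated with two thought categories. This is not the study’s actual pipeline; the voxel data below is simulated, and the category patterns, noise levels, and classifier choice are illustrative assumptions.

```python
# Illustrative MVPA-style decoding sketch (simulated data, not real fMRI).
# Two thought categories ("apple" vs "broccoli") are modeled as distinct
# spatial patterns over 50 voxels, corrupted by trial-to-trial noise.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 50

# Each category gets its own (assumed) underlying voxel pattern.
pattern_apple = rng.normal(0, 1, n_voxels)
pattern_broccoli = rng.normal(0, 1, n_voxels)

# Simulated trials: underlying pattern plus per-trial noise.
X = np.vstack([
    pattern_apple + rng.normal(0, 1.0, (n_trials // 2, n_voxels)),
    pattern_broccoli + rng.normal(0, 1.0, (n_trials // 2, n_voxels)),
])
y = np.array([0] * (n_trials // 2) + [1] * (n_trials // 2))

# A linear classifier learns to separate the two spatial patterns,
# analogous to decoding which prompt a pattern corresponds to.
scores = cross_val_score(LinearSVC(), X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f}")
```

Cross-validated accuracy well above the 50% chance level is the standard evidence that category information is present in the voxel patterns, which is the same logic the study applies to supposedly suppressed thoughts.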

More information:


22 September 2020

Facebook’s Project Aria

Facebook is continuing its push toward delivering AR glasses, and it is showing some of its development out in the open. Project Aria is a sensor-rich pair of glasses that the company will use to train its AR perception systems and assess public perception of the technology. Facebook is keenly aware of the backlash that faced Glass, Google’s early attempt at consumer smart glasses. The privacy implications of people walking around wearing a camera on their heads were not lost on the public, some of whom took to calling Glass users ‘Glassholes’. By its nature, AR requires heaps of sensors to work: cameras facing out to see the world, cameras facing in to see where your eyes are pointed, accelerometers to determine orientation, microphones to hear you speak, and plenty more.

It is like Google Glass times ten. Project Aria is an AR headset prototype that Facebook is using for two things: gathering data for AI training and assessing the public’s perception of, and concerns about, the technology. As far as we know, Aria does not have any displays, but it does have the full suite of sensors that a complete AR headset would use. It is essentially a pair of sensor-rich glasses designed to soak up everything it can see and hear. The data collected will be used to train AR perception systems that will allow AR glasses to understand the world around them and provide useful information to the user. Another goal of Aria is to test the waters of public perception and uncover privacy and ethical obstacles.

More information: