A new deep learning algorithm can
generate high-resolution, photorealistic images of people (faces, hair,
outfits, and all) from scratch. The AI-generated models are the most realistic
we’ve encountered, and the tech will soon be licensed out to clothing companies
and advertising agencies interested in whipping up photogenic models without
paying for lights or a catering budget. At the same time, similar algorithms
could be misused to undermine public trust in digital media. The algorithm was
developed by DataGrid, a tech company housed on the campus of Japan’s Kyoto
University, according to a press release.
The new algorithm is a Generative Adversarial Network (GAN), a kind of AI typically used to churn out new imitations of things that exist in the real world, whether video game levels or images that look like hand-drawn caricatures. DataGrid’s system poses its AI models in front of a nondescript white background under realistic-looking lighting. Each time scientists build a new algorithm that can generate realistic images or deepfakes indistinguishable from real photos, it serves as a fresh warning that AI-generated media could be readily misused to create manipulative propaganda.
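For readers unfamiliar with the term, a GAN trains two networks against each other: a generator that produces fake samples from random noise, and a discriminator that tries to tell those fakes from real examples. The PyTorch sketch below illustrates that adversarial loop in miniature; the network sizes, optimizer settings, and toy image dimensions are illustrative assumptions only and do not reflect DataGrid's actual system.

```python
# Minimal GAN sketch (PyTorch): a generator learns to produce images that a
# discriminator cannot distinguish from real ones. Toy illustration of the
# general GAN idea, not DataGrid's model or training setup.
import torch
import torch.nn as nn

LATENT_DIM = 64          # size of the random noise vector fed to the generator
IMG_PIXELS = 28 * 28     # toy image size; a photorealistic GAN is far larger

# Generator: maps random noise to a fake image in [-1, 1].
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_PIXELS), nn.Tanh(),
)

# Discriminator: scores an image as real (1) or fake (0).
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    """One adversarial update: discriminator first, then generator."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real images from generated ones.
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise).detach()
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fake_images), fake_labels))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator into scoring fakes as real.
    noise = torch.randn(batch, LATENT_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Example: one training step on a batch of placeholder "real" images in [-1, 1].
train_step(torch.rand(16, IMG_PIXELS) * 2 - 1)
```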