A team of researchers at New York University wondered whether an AI could learn language the way a baby does. The model they trained managed to match words to the objects they represent. The researchers relied on 61 hours of video from a helmet camera worn by a child named Sam in Australia. Sam wore the camera off and on for a year and a half, from the time he was six months old until a little after his second birthday.
The camera captured what Sam looked at and paid attention to during about 1% of his waking hours. To train the model, the researchers used 600,000 video frames paired with the phrases spoken by Sam's parents or other people in the room when each frame was captured, 37,500 utterances in all.
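The article describes pairing frames with co-occurring speech but does not spell out the training objective. Below is a minimal sketch, assuming a CLIP-style contrastive setup in which each frame is pushed toward its own utterance and away from the others in a batch; the encoder architectures, vocabulary size, and embedding dimension are illustrative placeholders, not details from the study.

```python
# Hypothetical sketch of contrastive frame-utterance training (assumed setup).
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED_DIM = 128      # shared embedding size (assumption)
VOCAB_SIZE = 5000    # placeholder vocabulary for transcribed utterances

class FrameEncoder(nn.Module):
    """Maps a 64x64 RGB frame into the shared embedding space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, EMBED_DIM),
        )

    def forward(self, frames):
        return F.normalize(self.net(frames), dim=-1)

class UtteranceEncoder(nn.Module):
    """Averages word embeddings of an utterance into the same space."""
    def __init__(self):
        super().__init__()
        self.embed = nn.EmbeddingBag(VOCAB_SIZE, EMBED_DIM, mode="mean")

    def forward(self, token_ids, offsets):
        return F.normalize(self.embed(token_ids, offsets), dim=-1)

def contrastive_loss(frame_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE: each frame should best match its own utterance."""
    logits = frame_emb @ text_emb.t() / temperature
    targets = torch.arange(len(frame_emb))
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

if __name__ == "__main__":
    frame_enc, text_enc = FrameEncoder(), UtteranceEncoder()
    optimizer = torch.optim.Adam(
        list(frame_enc.parameters()) + list(text_enc.parameters()), lr=1e-3)

    # Dummy batch: 8 frames paired with 8 tokenized utterances (5 tokens each).
    frames = torch.randn(8, 3, 64, 64)
    tokens = torch.randint(0, VOCAB_SIZE, (40,))   # flattened token ids
    offsets = torch.arange(0, 40, 5)               # start index of each utterance

    loss = contrastive_loss(frame_enc(frames), text_enc(tokens, offsets))
    loss.backward()
    optimizer.step()
    print(f"contrastive loss: {loss.item():.3f}")
```

In a setup like this, word-object associations emerge only from co-occurrence: a word and an object end up close in the embedding space if they repeatedly appear in the same frame-utterance pairs.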
More information:
https://www.technologyreview.com/2024/02/01/1087527/baby-ai-language-camera/