The Hidden Humans Behind Meta’s AI Glasses: Kenyan Workers Reviewing Private Footage
Imagine wearing a pair of futuristic smart glasses.
You tap the frame, record a quick video, ask the glasses to describe what’s in front of you, or translate a sign in another language. Everything feels seamless, like the artificial intelligence inside the device understands the world instantly.
But behind that “instant” intelligence, there might be a human watching.
Thousands of kilometres away.
A recent investigation by the Swedish newspapers Svenska Dagbladet and Göteborgs‑Posten has revealed that workers in Kenya helping train Meta’s AI systems are regularly exposed to personal videos and images captured by users of the company’s Ray‑Ban Meta Smart Glasses.
And the findings are raising uncomfortable questions about privacy, global tech labour, and the hidden human workforce powering modern AI.
The AI Glasses That See What You See
Meta’s smart glasses, built in partnership with EssilorLuxottica, are marketed as a glimpse into the future of wearable AI.
The glasses can capture photos and videos hands-free, translate languages in real time, answer questions about what you’re looking at, describe surroundings using AI, and stream audio and take calls.
In theory, the device works like a personal AI assistant sitting on your face.
But for the AI to understand the world (objects, environments, gestures, and context), it needs training.
And that training requires data.
Lots of it.
Where That Data Ends Up
According to the investigation published in February, some of the footage recorded by these smart glasses doesn’t stay on the device.
Instead, it travels across continents.
Videos and images are reportedly sent to contractors working for Sama, a company that provides AI training data for major tech firms.
There, data annotators, many of them young graduates, review and label the content so Meta’s systems can better understand what the cameras capture.
When AI Training Gets Uncomfortably Personal
Several Kenyan workers told reporters they regularly encounter extremely private moments in the videos they are asked to review.
Not just everyday scenes like people walking on the street or cooking in their kitchens.
But deeply personal situations.
Some workers reported seeing people changing clothes, individuals in bedrooms or bathrooms, sensitive financial details such as bank cards visible on tables, and couples in intimate situations.
One worker reportedly described reviewing a video where someone removed the glasses and placed them on a bedside table while another person entered the room and undressed, unaware they were being recorded.
Another said they encountered footage where users appeared to be watching adult content or recording intimate moments.
According to workers interviewed, many of the people appearing in these videos likely had no idea they were being captured.
The Invisible Workforce Training AI
The story also highlights something many people don’t think about when they use AI tools: these systems are often trained by humans.
Companies like Sama employ large numbers of workers to label images, moderate content, and train machine learning models.
The Blurry Line Between AI and Humans
For many consumers, AI products feel magical.
You ask a question.
The system answers instantly.
You point a camera.
The AI understands what it sees.
But investigations like this remind us that behind the algorithms are often thousands of human workers quietly helping machines learn.
Every labelled photo, every reviewed clip, every corrected description helps the system improve.
In other words: the intelligence may be artificial, but the labour behind it is very real.
The Bigger Question
As wearable AI devices become more common, the debate around privacy and data transparency is likely to grow.
Smart glasses, AI assistants, and always-on cameras could soon become part of everyday life.
But the question many people are now asking is simple:
Who else might be seeing what your AI sees?