Predictive Demise
The Feed That Sees You Before You See Yourself
It begins quietly.
A young Black man wakes up, reaches for his phone, and scrolls. The images arrive before thought does—muscular bodies, violence clipped into ten-second loops, luxury fantasies, prison aesthetics, hyper-sexualized archetypes. It feels random. It isn’t.
The algorithm has already decided who he is.
The Architecture of Seeing
Modern artificial intelligence systems are trained on vast oceans of images scraped from the internet—photos, memes, stock libraries, surveillance data. But the internet is not neutral. It is an archive of human bias, compressed into pixels and tags.
Researchers have found that AI image systems routinely misrepresent or underrepresent people of color, often defaulting to whiteness or distorting racial identity entirely. Even when they do generate Black faces, those outputs are shaped by patterns already embedded in the data: stereotypes, repetition, exaggeration.
In some cases, the bias is measurable. Facial recognition systems have been shown to be 10 to 100 times more likely to misidentify Black faces than white ones.
In others, it is subtler—less about error, more about narrative.
AI doesn’t just fail to see Black people clearly. It learns to see them in very specific ways.
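To make the measurable case concrete: a disparity like the one above is usually expressed as a ratio of error rates between groups. The sketch below shows that arithmetic with invented audit counts; the group labels and numbers are placeholders, not findings from any real system.

```python
# Minimal sketch: comparing false match rates across demographic groups.
# All counts and group labels are hypothetical placeholders, not
# measurements from any real facial recognition system.

def false_match_rate(false_matches: int, impostor_comparisons: int) -> float:
    """Fraction of impostor comparisons the system wrongly accepted as matches."""
    return false_matches / impostor_comparisons

# Hypothetical audit counts per group.
audit = {
    "group_a": {"false_matches": 12, "impostor_comparisons": 100_000},
    "group_b": {"false_matches": 480, "impostor_comparisons": 100_000},
}

rates = {group: false_match_rate(**counts) for group, counts in audit.items()}
disparity = rates["group_b"] / rates["group_a"]

for group, rate in rates.items():
    print(f"{group}: false match rate = {rate:.5f}")
print(f"disparity ratio (group_b vs group_a): {disparity:.0f}x")
# A ratio in the range of 10x to 100x is the kind of gap the studies above describe.
```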
The Loop
Algorithms are not static. They learn from engagement.
If a user watches certain content longer—whether it’s music videos, crime footage, or hyper-masculine imagery—the system adapts. It feeds more of what holds attention. Over time, this becomes a closed loop: behavior trains the algorithm, and the algorithm trains behavior.
Experts describe this as a feedback cycle where biased outputs become new training inputs, reinforcing themselves indefinitely.
For Black male users, this loop can skew toward extremes. Content that performs well—emotionally intense, visually striking, often stereotypical—gets amplified. Nuanced portrayals struggle to compete.
The result is not a single bias, but a narrowing of possibility.
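How a small edge in attention compounds into a narrowed feed can be seen in a deliberately simplified simulation. The toy recommender below boosts whatever category the simulated user engages with; the categories, probabilities, and update rule are assumptions made for this sketch, not a description of any real platform.

```python
import random

# Toy simulation of an engagement feedback loop: the recommender's weights
# are updated by what the simulated user engages with, and the next
# recommendation is drawn from those same weights. Purely illustrative.

random.seed(0)

weights = {"stereotypical": 0.5, "nuanced": 0.5}

# Assumption for the sketch: stereotypical content is slightly "stickier".
engagement_prob = {"stereotypical": 0.6, "nuanced": 0.4}

LEARNING_RATE = 0.05

for _ in range(500):
    categories = list(weights)
    shown = random.choices(categories, weights=[weights[c] for c in categories])[0]
    if random.random() < engagement_prob[shown]:
        weights[shown] += LEARNING_RATE  # engagement reinforces what was shown
    total = sum(weights.values())  # renormalize so weights stay a distribution
    weights = {c: w / total for c, w in weights.items()}

print(weights)
# A small edge in engagement compounds: the loop drifts toward
# showing mostly one kind of content.
```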
The Image Economy
In the attention economy, images are currency. But not all images carry equal weight.
Studies of generative AI have shown that systems often over-represent light-skinned individuals while distorting or flattening the diversity of darker-skinned people. In some experiments, images of Black individuals were rendered with less variation, more homogeneity, as if individuality itself had been compressed into a template.
That template travels.
It appears in recommendation feeds, in ad targeting, in synthetic media. It shapes what gets seen—and what gets imagined.
And imagination matters. Research suggests that exposure to biased or non-inclusive images can strengthen people's stereotypes, while more inclusive imagery can reduce them.
In other words: the feed doesn’t just reflect reality. It edits it.
The Underground Layer
There is a deeper layer to this system—one rarely visible to the user.
Behind every recommendation is a chain of decisions: dataset curation, labeling practices, optimization goals. Engineers choose what success looks like—engagement, retention, clicks. The algorithm follows.
But engagement is not neutral. It gravitates toward what is emotionally charged, culturally coded, historically loaded.
And so, without explicit intent, systems can drift toward reinforcing the most recognizable—and often the most damaging—representations.
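The weight of that choice can be shown with a minimal ranking sketch. The items, scores, and blended objective below are invented for illustration; the point is only that the same catalog produces different winners depending on what the system is told to maximize.

```python
from dataclasses import dataclass

# Toy illustration that the optimization goal, not any single image,
# decides what surfaces. Items, scores, and labels are invented.

@dataclass
class Item:
    title: str
    predicted_engagement: float    # what attention-driven systems usually maximize
    representational_range: float  # a stand-in for nuance and variety, 0..1

catalog = [
    Item("viral fight clip", 0.92, 0.10),
    Item("luxury flex montage", 0.87, 0.15),
    Item("documentary interview", 0.55, 0.90),
    Item("local artist profile", 0.60, 0.80),
]

def rank(items, objective):
    return sorted(items, key=objective, reverse=True)

# Objective 1: engagement only, the default in attention-driven systems.
by_engagement = rank(catalog, lambda i: i.predicted_engagement)

# Objective 2: engagement traded off against representational range.
by_blended = rank(
    catalog, lambda i: 0.5 * i.predicted_engagement + 0.5 * i.representational_range
)

print("engagement only:", [i.title for i in by_engagement])
print("blended:        ", [i.title for i in by_blended])
# Same catalog, different winners: the objective function is the policy.
```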
As one body of research puts it, algorithms can reproduce and support the same stereotypes that exist in society.
The difference is scale.
A stereotype once limited to a neighborhood, a film, a conversation can now be distributed to millions in seconds, personalized, optimized, and repeated until it feels like truth.
The Cost of Compression
What is lost in this process is not just accuracy, but range.
When the algorithm learns a narrow definition of Black masculinity—whether it’s aggression, hyper-performance, or spectacle—it begins to filter out everything else. Complexity becomes inefficiency. Subtlety becomes invisible.
The feed fills with versions of the same image.
Not because those images are the only ones that exist—but because they are the ones the system has learned to reward.
A Mirror That Distorts
Technology companies often describe their systems as mirrors of society. But a mirror that enlarges some features and erases others is not neutral. It is a lens.
And like any lens, it shapes what we see—and what we believe is there.
For the young man scrolling in the morning, this shaping is almost imperceptible. There is no moment when the algorithm announces itself. No disclaimer that says: This is a curated version of your identity.
There is only the feed.
And the quiet suggestion, repeated thousands of times:
This is you.
The Unfinished Question
Can the system be changed?
Researchers are trying—building more diverse datasets, auditing outputs, designing fairness constraints. Some progress has been made. But the core tension remains: systems optimized for attention will always drift toward what captures it most efficiently.
And attention, in a culture shaped by history, is rarely free of bias.
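Of the interventions above, auditing outputs is the most tractable to sketch. The snippet below assumes recommended items can be tagged by portrayal type and compared against a target distribution; in practice that labeling is itself contested, and everything here is illustrative rather than a real methodology.

```python
from collections import Counter

# Sketch of an output audit: compare how often each portrayal type appears
# in a sample of recommendations against a target distribution.
# The labels and targets are illustrative assumptions, not a methodology.

def audit_distribution(labels, target):
    counts = Counter(labels)
    total = len(labels)
    report = {}
    for category, target_share in target.items():
        observed = counts.get(category, 0) / total
        report[category] = {
            "observed": round(observed, 3),
            "target": target_share,
            "gap": round(observed - target_share, 3),
        }
    return report

# Hypothetical sample of 1,000 recommended items, tagged by portrayal type.
sample = ["stereotypical"] * 730 + ["neutral"] * 190 + ["nuanced"] * 80
target = {"stereotypical": 0.33, "neutral": 0.33, "nuanced": 0.34}

for category, row in audit_distribution(sample, target).items():
    print(category, row)
# Large positive gaps flag the narrowing described above; a fairness
# constraint would push the ranker to shrink them over time.
```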
The question, then, is not just technical. It is cultural.
What happens when identity is mediated by machines trained on the past?
And who gets to decide what the future looks like—when the algorithm is already watching, already learning, already choosing what comes next?
The phone screen dims. The scroll stops. For a moment, the images disappear.
But the system does not.
It is still there—waiting, predicting, deciding—long before the next swipe.