Living in a Synthetic World
(A reflection on AI, truth, and the challenge of staying human)
It starts with a text.
A video of your daughter at school… only it’s not real. The voice sounds like her. The smile is exactly right. But she never recorded it. The sender? Unknown.
You check your feed—your favorite news anchor is reporting breaking news. The story is inflammatory. The delivery is eerily flawless. But her earrings don’t match what she wore yesterday. Deepfake. Again.
By noon, your AI assistant has booked your meetings, replied to Slack, recorded a podcast in your voice, and drafted an apology letter—for something you never actually said.
By evening, your partner sends a heartwarming video. But they never hit record. The algorithm just thought you’d like to hear it.
Everything looks real. Everything feels real. But nothing is real.
Welcome to the synthetic world.
The Rise of Artificial Everything
We are standing on the edge of a profound transformation—one accelerated by powerful generative AI tools like Veo 3, Kling, Runway, Sora, and beyond. We can now generate lifelike videos, clone voices, mimic facial expressions, and produce entire synthetic realities in minutes.
The progress is staggering. The creative potential? Infinite. And yet, as someone who has spent decades designing the future, I am deeply unsettled.
Because while the technology surges forward, the ethics, protections, and accountability mechanisms are lagging dangerously behind.
What Happens When You Can't Trust Your Eyes—or Your Ears?
We’ve already seen glimpses of what’s possible:
Scam calls using cloned voices of loved ones.
Fake political ads generated with AI to sway public opinion.
Synthetic revenge content spreading faster than it can be removed.
Corporate testimonials and influencer endorsements that were never spoken by a human.
What happens when every image, video, and voice can be fabricated—and indistinguishably so?
In a synthetic world, truth becomes optional.
The Real Crisis Isn’t Technology. It’s Trust.
We’ve built machines that can mimic us. But we haven’t built the systems to protect us.
Several major players are actively developing watermarking and provenance tools to identify AI-generated content, and regulators are beginning to respond. Google DeepMind introduced SynthID, which embeds invisible watermarks into AI-generated media. OpenAI and Meta have also pledged to implement watermarking systems, though OpenAI's tool for ChatGPT text remains unreleased and Meta's approach is still in development. In March 2025, China adopted the Measures for the Identification of AI-Generated Synthetic Content, requiring both visible and invisible labels on synthetic media to enhance transparency, with the rules set to take effect on September 1, 2025.
Yet without globally adopted standards or broad enforcement, synthetic content continues to spread unchecked—and remains virtually indistinguishable from reality. In this environment, there are still no guarantees that what you’re seeing is actually…real.
We are heading into an age where AI-generated content will be indistinguishable from human-made content—and weaponizable at scale. This is not a science fiction scenario. This is now.
So What Do We Do?
As technologists, designers, creators, and citizens—we can’t sit this one out. We must design AI not just for capability, but for conscience.
Here’s where we start:
Mandate traceability. Build watermarks and provenance tracking into every generative system.
Educate the public. Media literacy must be part of basic digital fluency.
Collaborate on standards. Tech companies, policymakers, and creators need to co-create ethical boundaries.
Design for discernment. Build interfaces that help people question, verify, and contextualize—rather than blindly consume.
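To make the "mandate traceability" idea above concrete, here is a minimal sketch of content provenance in Python. It is a toy illustration only: it hashes media bytes and tags them with an HMAC under a shared secret, so any later edit invalidates the tag. Real provenance systems such as C2PA or SynthID work very differently (public-key signatures, certificate chains, or watermarks embedded in the pixels themselves); the key and function names here are invented for the example.

```python
import hashlib
import hmac

# Hypothetical creator key for this sketch. Real provenance systems use
# public-key signatures tied to a verifiable identity, not a shared secret.
CREATOR_KEY = b"demo-secret-key"

def sign_content(media_bytes: bytes) -> str:
    """Produce a provenance tag: an HMAC over the SHA-256 of the content."""
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(CREATOR_KEY, digest, hashlib.sha256).hexdigest()

def verify_content(media_bytes: bytes, tag: str) -> bool:
    """Check that the content still matches its provenance tag."""
    expected = sign_content(media_bytes)
    return hmac.compare_digest(expected, tag)

original = b"frame data from a genuine recording"
tag = sign_content(original)

print(verify_content(original, tag))         # True: content untouched
print(verify_content(original + b"!", tag))  # False: any edit breaks the tag
```

The point of the sketch is the design principle, not the mechanism: provenance must be bound to the content at creation time, so that tampering is detectable later rather than argued about after the fact.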
Humanity Is Not Optional
At the heart of this conversation isn’t just content. It’s consequence.
If we allow a fully synthetic world to flourish unchecked, we risk flattening human experience into simulation, losing something essential, sacred, and irreplaceable.
So here’s my challenge to all of us:
Let’s build tools that amplify humanity, not replace it.
Let’s design systems that earn trust, not exploit it.
Let’s lead with intention, because the future won't just be artificial—it will be whatever we allow it to become.
The tools are not the enemy. But the absence of intention might be.
In this synthetic world, we must fight for what’s real. And that starts with us.
🔗 To join the conversation or learn more about ethical design in the age of AI, visit aidesignforgood.com