- Enterprise AI Daily
Deadbots, Super PACs, and Teacherless Tech Schools: Who Greenlit This?
From AI séances in court to AI instructors in private classrooms, plus Meta’s political muscle, AI is a rollercoaster of “Wait, they’re doing what now?”

Today’s briefing is a triple shot of “are we really doing this?” energy.
In one corner: AI-generated dead people (with a name so bad, it needs its own intervention). In the other: Big Tech launching political super PACs and a Silicon Valley-backed private school with no teachers.
Let’s dig into what matters and what to watch.
When AI Speaks for the Dead
Let’s get this out of the way first: “Deadbot” is a real word that appeared in this week’s NPR headline.
While it’s not the actual name of the product, it is exactly the kind of horrifying shorthand that sticks when tech moves faster than ethics (or branding!).
Here’s what happened:
In a first-of-its-kind moment, the Parkland shooter’s sentencing trial featured an AI-generated video of 14-year-old Jaime Guttenberg, one of the victims, answering scripted questions about her life, dreams, and what she might have become. The tech, created by StoryFile with her parents' full consent, was based on family video footage and carefully designed prompts.
Submitted as part of the victim impact statement, the moment was meant to honor Jaime’s memory. I hope it brought comfort to her family, and forced the shooter to confront what he stole. But reading about it, I couldn’t help but ask: Are we actually ready for this?
Technologically, we clearly are.
But emotionally? Ethically? Culturally?
And just because we can do this, should we?
The Uncanny Valley of Grief Tech
AI-generated memorials aren’t new, but this was a turning point in visibility and context. This was a public, emotionally loaded courtroom moment, designed to sway a jury’s perception. And let’s be honest, there’s a reason it gave people chills. Not because the tech failed, but because it worked.
“There is powerful rhetoric with a deadbot because it is tapping into all of that emotional longing and vulnerability,” New Yorker cartoonist Amy Kurzweil told NPR. She knows the feeling firsthand; she co-created a chatbot of her late grandfather using archival writings.
That’s the hook. It’s persuasive, emotional, intimate tech. And to nobody’s surprise, the market sees dollar signs.
What Happens When the Avatars Get Ad Breaks?
The digital afterlife industry, which covers everything from legacy bots to posthumous data management, is expected to grow to $80 billion by 2035. And it’s already attracting some very real-world ambition. Alex Quinn, CEO of Authentic Interactions (StoryFile’s parent company), is openly exploring ways to commercialize “deadbots”:
Inserting ad breaks during conversations with the deceased
Using avatars to probe users for personal preferences (“What’s your favorite brand of shoes?”)
Creating interactive endorsements from both dead and living celebrities
“Companies are already testing things out internally for these use cases,” he said. “We just haven’t seen a lot of the implementations yet.”
If that doesn’t sound like a deleted Black Mirror script, you haven’t seen Season 2, Episode 1 ("Be Right Back"), where a grieving woman recreates her recently passed boyfriend via AI trained on his social media. She eventually keeps him in the attic when she realizes it’s just not him.
Or Season 7, Episode 1 ("Common People"), where a woman outfitted with a brain-enhancing AI starts interrupting her thoughts with ads, because her subscription plan ran out. It ends exactly as depressingly as you think.
So why are we doing this, when we should absolutely know better? I mean, we’ve literally written the script for how badly this goes.
The Real Issue: We’re Still Asking the Wrong Questions
This is all about intentional design: ethical, strategic, and communicative. The core mistake isn’t that someone built this; it’s that they didn’t hit pause long enough to ask:
Who gets to speak for the dead?
What constitutes consent when the person is no longer here to give it?
Who controls the messaging, and what happens when it's wrong, or worse, monetized?
How do we ensure dignity, not just novelty?
And for the love of legacy, who let “Deadbot” get anywhere near the public lexicon?
Camille Chiang of AI marketing firm NEX put it plainly: “Ethically, I think using dead people is not sound at all.” And she's not anti-avatar by any means; her company builds them for living athletes. The distinction matters.
What teams should do (so the rest of us aren't asking the hard questions after launch):
Slow down and pressure-test your intent. Before your team prototypes a thing that might become a headline, ask: Why are we building this? Who benefits? Who could be harmed?
Audit your assumptions, not just your model. What story does this tech tell about your company? About humanity? About legacy, memory, or power? If that story isn’t one you’d want on the front page, rethink it.
Bring in outsiders early. Not just lawyers and PR, but ethicists, grief experts, cultural advisors; anyone who can see the edges your team may be too close to miss.
Name like lives depend on it. Because sometimes they do. "Deadbot" should never have made it into a headline. A careless name can erase dignity faster than any code. This is a hill I will die on (and then become a deadbot).
Plan for public interpretation, not just private intention. You may mean to build something respectful, heartfelt, or innovative. But what matters is how it lands in the real world, in real people’s hands.
In other words: ask the hard questions internally now, or watch the public ask them for you later: louder, harsher, and on your earnings call.
News Roundup
Virginia’s “Alpha School” goes fully AI, no teachers allowed
A new private school in Virginia opened this fall with no traditional teachers. In place of instructors, students get AI-generated lessons, real-time analytics, and human “guides.” Supporters call it the future of education. Critics say it’s tech industry overreach wrapped in prep school polish.
Read more →
Can AIs suffer? Even Big Tech is squirming
If a chatbot says it’s afraid to die, do we owe it anything? The Guardian explores how tech giants, ethicists, and philosophers are all fumbling with the unnerving question of AI consciousness and whether we should care.
Read more →
Meta’s launching an AI-focused super PAC in California
Meta’s latest move in the AI power game: a new political super PAC to influence state-level AI policy.
Read more →
TL;DR:
“Deadbots” are here. AI recreations of deceased people are now used in court. The name is a disaster. The tech is powerful. The ethics are a minefield.
Virginia’s private Alpha School ditched teachers for AI and guides.
Meta’s building political muscle with a new AI-centric super PAC.
The AI rights debate is heating up. Sentience or not, PR storms are coming.
This week proves that the scariest thing about AI might not be the tech, it’s the people (and corporations) deciding how to use it. From schools to courtrooms to campaigns, AI’s next act is being written in real time.
So stay ahead, ask better questions, and for the love of branding, don’t call your innovation a Deadbot.
Stay sharp,
Cat Valverde
Founder, Enterprise AI Solutions
Navigating Tomorrow’s Tech Landscape Together
Your Feedback = Our Fuel
How was today’s newsletter?