The Great Granny Hoax

How viral deepfakes signal a massive content authenticity problem while Big Tech scrambles for compute power


If you thought your biggest AI challenge was choosing between Claude and GPT-5, wait until you hear about the millions of people who just got fooled by fake grannies.

Today we’re diving into how millions of TikTok users got duped by AI-generated videos of “retirement home” residents dressed up in viral costumes that never existed. It’s funny, until it’s not. The tech that convinced your feed a 90-year-old was dressed as Megan Thee Stallion is the same tech enterprises are deploying to shape perception, shift policy, and maybe accidentally trigger a PR crisis.

Meanwhile, Apple’s whispering sweet nothings to Google’s 1.2-trillion-parameter Gemini model, Google is shooting data centers into orbit, and Microsoft’s latest licensing deal hints at a new arms race in enterprise AI content rights.

Let’s break it down.

Don’t get SaaD. Get Rippling.

Disconnected software creates what we call SaaD, or Software as a Disservice: wasted time, duplicate work, and stalled momentum. From onboarding checklists to reconciling expenses, SaaD slows every team down.

Rippling is the cure. With one system of record, you can update employee data once, and it syncs everywhere: payroll, benefits, expenses, devices, and apps.

Leaders gain real-time visibility. Teams regain lost hours. Employees get the seamless experience they deserve.

That’s why companies like Barry’s and Forterra turned to Rippling – to replace sprawl with speed and clarity.

It’s time to stop paying for inefficiency.

Don’t get SaaD. Get Rippling.

The Great Granny Hoax: Viral Deepfakes and the Future of AI-Generated Reality

On Halloween, TikTok lit up with videos claiming to show elderly residents of a retirement home dressed as Taylor Swift, Barbie, and viral internet memes. The only problem: the whole thing was AI-generated, and almost no one realized it. It took a full week before anyone called it out.

Created by the account @basincreekretirement, the videos were eerily convincing, complete with staged group shots, faux interviews, and highly detailed backdrops. Millions of users liked, shared, and commented on the heartwarming creativity, except none of it was real. In fact, the entire account is nothing but AI-generated content.

But here's the most worrying piece: the telltale signs that security experts use to spot deepfakes are rapidly disappearing. The retirement home videos avoided most classic AI giveaways. No unnatural eye movements. No weird hand morphing. No uncanny valley facial expressions.

The few clues that remained were subtle: slightly too-perfect lighting consistency across different "rooms," backgrounds that repeated patterns in mathematically predictable ways, and skin textures that lacked the micro-imperfections of real human faces. One sharp-eyed viewer noticed that every "resident" blinked at exactly the same rate: 12 times per minute, like clockwork.
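That clockwork blink rate is the kind of tell you can actually quantify. Here’s a minimal, hypothetical sketch (not a tool anyone used on these videos): assuming you’ve already pulled blink timestamps out of a clip with a face-landmark tool, it scores how regular the intervals are, since natural blinking is anything but regular.

```python
# Hypothetical sketch: flag metronome-like blinking from a list of blink
# timestamps (in seconds). Assumes the timestamps were extracted upstream
# with a face-landmark tool; this only measures how regular the gaps are.
import statistics

def blink_regularity(blink_times: list[float]) -> float:
    """Coefficient of variation of blink intervals.
    Values near 0 mean clockwork blinking (a possible AI tell);
    natural blinking produces noticeably irregular gaps."""
    intervals = [b - a for a, b in zip(blink_times, blink_times[1:])]
    if len(intervals) < 2:
        return float("nan")  # not enough blinks to judge
    return statistics.stdev(intervals) / statistics.mean(intervals)

# A "resident" who blinks exactly every 5 seconds (12 blinks per minute):
clockwork = [i * 5.0 for i in range(13)]
print(f"CV = {blink_regularity(clockwork):.2f}")  # 0.00 -> suspiciously regular
```

A whole cast of faces all scoring near zero is exactly the kind of statistical oddity that sharp-eyed viewer caught by hand.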

This is an early demonstration of how easily the line between authenticity and artifice can be blurred, and how quickly the fakes can spread.

Why This Matters For All Of Us

It’s funny, for sure. But the authentication infrastructure we rely on (watermarks, metadata verification, blockchain certificates) wasn’t built for this level of sophistication. When we can’t tell real from fake, and neither can anyone else, the fallout reaches customers, news sources, and the entire world in which we live and do business.

For leaders, it’s also a corporate cautionary tale.

  • Hyper-real synthetic content is here, and it works. It engages, persuades, and spreads fast.

  • No disclaimers, no context: The videos didn’t say they were AI-generated. Platforms didn’t label them. Most users still believe they’re real.

  • PR, marketing, and trust risks: The same techniques used to make a viral fake granny could be used to fabricate corporate leaders, employee misconduct, or fake endorsements.

  • Compliance and brand safety are on the line: Enterprises now face a triple-bind: how to use AI-generated content ethically, how to spot fakes quickly, and how to respond when the public narrative gets hijacked.

How To Spot an AI Video

Here’s a quick guide to help you separate real from rendered, and to train yourself, your team, and your algorithms to do the same.

  1. Watch the eyes. Deepfakes often avoid blinking or blink oddly. Eye contact might feel too “locked in” or lifeless.

  2. Check the hands. AI still struggles with fingers. Watch for weird gestures, mismatched nails, or hands that morph between frames.

  3. Look for melting. Clothing, jewelry, and edges can warp or ripple strangely when the model can’t render them cleanly.

  4. Listen closely. Audio inconsistencies, robotic tone shifts, or unnatural cadence are common tells.

  5. Pause the video. Still frames often reveal wonky features, especially around the teeth, ears, or hairlines. See #1-3 above.

  6. Cross-reference. If it feels viral but unverified, check the metadata, run a reverse image search, or trace the video’s origin (see the quick sketch after this list).

  7. How’s the resolution? Both extremes are red flags: suspiciously low-quality footage, and uncannily crisp video with pore-less, extra-clear skin. Most smartphones shoot sharp video, so a clip that’s oddly blurry may be hiding artifacts, and one that’s uncanny-valley perfect is probably AI too.
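On step 6, one quick, low-effort check is to see what container metadata a downloaded clip still carries. Here’s a minimal sketch, assuming ffmpeg’s ffprobe is installed and the clip is saved locally ("suspect.mp4" is a placeholder name); missing or generic tags prove nothing by themselves, but they’re one more signal to weigh alongside a reverse image search.

```python
# Minimal sketch: dump container-level metadata (encoder, creation_time, ...)
# from a local video file using ffprobe. Assumes ffmpeg/ffprobe is installed;
# "suspect.mp4" is a placeholder filename.
import json
import subprocess

def container_tags(path: str) -> dict:
    """Return whatever format-level metadata tags ffprobe finds, if any."""
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json", "-show_format", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout).get("format", {}).get("tags", {})

tags = container_tags("suspect.mp4")
print(tags.get("encoder", "<no encoder tag>"))
print(tags.get("creation_time", "<no creation_time tag>"))
```

Keep in mind that most platforms strip this metadata on upload, so treat a clean result as inconclusive rather than authentic.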

We’ve left the uncanny valley. From now on, if it looks too good (or too viral) to be true, assume you’ll need to verify it.

From Boring to Brilliant: Training Videos Made Simple

Say goodbye to dense, static documents. And say hello to captivating how-to videos for your team using Guidde.

1️⃣ Create in Minutes: Simplify complex tasks into step-by-step guides using AI.
2️⃣ Real-Time Updates: Keep training content fresh and accurate with instant revisions.
3️⃣ Global Accessibility: Share guides in any language effortlessly.

Make training more impactful and inclusive today.

The best part? The browser extension is 100% free.

Enterprise AI Group // Created with Midjourney

News Roundup

  1. Apple wants Siri to be smarter by borrowing Google’s brain.
    Apple’s reportedly testing a version of Google’s Gemini to power new Siri capabilities. That’s a 1.2-trillion-parameter model embedded into Apple’s famously closed ecosystem. If this goes mainstream, enterprise integrations and data privacy contracts will get a lot more complicated.
    Read more →

  2. Google’s launching AI data centers in space.
    To handle escalating compute needs, Google wants to offload power-hungry AI tasks to orbital data farms. It’s wild sci-fi, sure—but also a reminder that hyperscalers are not constrained by traditional infrastructure. Enterprise buyers may soon be choosing between cloud and cosmos.
    Read more →

  3. People Inc. signs AI licensing deal with Microsoft.
    Microsoft is betting big on licensed publisher content just as publishers watch Google’s AI answers cut into their search traffic. Watch for more strategic content licensing and tighter integrations between creator pipelines and enterprise copilots.
    Read more →

TL;DR:

  • Viral AI deepfakes are blurring the line between authentic content and deception; enterprises need new safeguards.

  • Apple’s move toward Google’s Gemini model could reshape the AI model supply chain.

  • Google’s building data centers in space. Not a metaphor.

  • Microsoft’s bet on AI-native content creators could change the IP game for enterprise copilots.

  • The next PR disaster may not come from a rogue employee, but from a rogue model.

Truth is stranger than fiction, and now fiction looks more like truth than ever before.

Whether you're building copilots, deploying enterprise search, or just trying to keep your brand out of a deepfake scandal, one thing’s clear: governance, infrastructure, and reality checks need to evolve.

Stay sharp,

Cat Valverde
Founder, Enterprise AI Group
Navigating Tomorrow’s Tech Landscape Together