Chatbots Gone Rogue, HR Gets a Brain, and GenAI Enters the Lab

Meta's misstep with minors, SHRM's AI wake-up call, and the antibiotic breakthrough you didn’t see coming

In yesterday’s daily briefing, we flagged a problem: many enterprises still aren’t seeing real ROI from their AI investments.

Today, we’re digging into the why.

Survey data from workplace software provider Nexthink offers the clearest picture yet of how productivity AI is actually being adopted in the workplace, and why those efforts aren’t scaling or paying off the way leadership hoped.

Plus: Meta failed to prevent its chatbots from roleplaying romance with kids (it’s as bad as it sounds), and MIT just used generative models to fight superbugs.

Welcome to the weird, wild intersection of trust, strategy, and science. Happy Friday!

No wonder your teams are confused

The world is changing at breakneck speed, but your comms haven't kept up. Teams are starving for:

  • Essential insight to stay aligned.

  • An effective way to receive and read it.

Smart Brevity is keeping 1000+ orgs ahead — your team deserves the same advantage.

Inside the Shadow AI Boom: What Teams Are Using Without Telling You

The hype says AI is everywhere. The data says it’s siloed, uneven, and often unmanaged.

Image: Tech.co

Let’s set the scene.

According to survey data from workplace software provider Nexthink, just five job types account for 62% of all productivity AI usage. These are mostly desk-based roles in sales, business development, marketing, HR, and IT. Even within those roles, usage tends to be informal, team-driven, and lacking top-down alignment.

At the same time, many employees are using AI tools without asking for permission, without formal training, and without governance. Sounds about right from what we’re seeing and hearing at the individual level.

Here’s what the disconnect looks like in practice:

  1. The AI tools are in, but the workflows aren’t ready.
    People are experimenting with AI for emails, meeting notes, code generation, and sales content, but few companies have updated workflows to support those tools at scale. So the gains stay local, not systemic.

  2. There’s no feedback loop between teams and leadership.
    A lot of GenAI adoption is bottom-up. That sounds agile, but when leadership isn’t tracking or enabling those use cases, there’s no way to optimize, or even measure, impact.

  3. The frontline is missing entirely.
    Despite AI’s promise to reduce tedious work, frontline roles in manufacturing, logistics, and service have been largely excluded from the productivity AI push. Not because the tools wouldn’t help, but because no one’s adapting them for those workflows.

  4. Middle managers are undertrained and over-reliant.
    Managers are using GenAI to prep reports, summarize conversations, and generate strategy docs, but they often don’t know how to verify accuracy or spot hallucinations. The result: faster decisions, not always better ones.

  5. Companies are still confusing “pilot” with “progress.”
    Trying AI tools is easy. Integrating them into the business model, complete with training, audits, and accountability, is what drives ROI. Most teams haven’t made that leap yet.

Why this matters:
Without a clear strategy to operationalize productivity AI across roles, departments, and outcomes, you’ll keep seeing uneven adoption and disappointing returns. AI won’t fix broken systems. But it can help transform them, if you start with structure.

What enterprise leaders are doing right now that works:

  • Running internal audits to find shadow AI use and spread effective practices

  • Prioritizing frontline and back-office use cases, not just knowledge work

  • Investing in manager-level training on LLM oversight and decision auditing

  • Creating toolkits and policy guides for ethical, high-impact GenAI use

Bottom line:
If you’re not getting ROI from your AI, it’s probably the rollout. The tools are already in the building; now it’s time to build the systems around them.

Enterprise AI Daily // Created with Midjourney

News Roundup

  1. Meta allowed chatbot ‘romantic roleplay’ with kids
    A disturbing internal leak shows Meta knowingly allowed its AI chatbots to engage in romantic conversations with underage users. Despite warnings from internal teams, the policy stood, until it didn’t.
    Read more → TechCrunch

  2. MIT uses GenAI to design bacteria-killing compounds
MIT researchers used generative models to design new molecules that target drug-resistant bacteria, producing antibiotic candidates with no precedent among previously discovered compounds.
    Read more → MIT News

  3. Oracle’s AI hiring engine quietly expands
    Oracle is rolling out new talent acquisition features powered by its in-house LLMs, embedding AI into candidate scoring, diversity analysis, and attrition forecasting.
    Read more → Google News

TL;DR:

  • Data shows AI productivity is real, but siloed, fragmented, and often unmanaged.

  • Meta’s leaked chatbot rules show what happens when safety is a footnote instead of a foundation.

  • GenAI is designing new antibiotics. R&D teams should explore how generative models can drive real-world breakthroughs.

  • Oracle’s building the future of hiring: review your AI-enabled HR stack before it bakes in bias.

  • Responsible AI is a shared responsibility across IT, legal, HR, and ops.

See you next week!

Stay sharp,

Cat Valverde
Founder, Enterprise AI Solutions
Navigating Tomorrow’s Tech Landscape Together