Skip to main content
AI & Strategy

The AI Brief: What Every Business Should Know Right Now

April 11, 2026 | 8 min read | By Joe Malott
Retro geometric goose dressed for travel with a suitcase and polka-dot bow tie, representing a well-equipped journey into AI tools and strategy

The AI news cycle moves fast enough that keeping up with it has become a job in itself — which is exactly the kind of problem AI is supposed to solve. So here's the deal: we've packed the suitcase. The most important things that have happened in the last few months, what they actually mean for how your business operates, and the security stuff you can't afford to keep ignoring. Let's go.

The Numbers That Should End the "We're Still Evaluating AI" Conversation

If your organization is still in a holding pattern on AI adoption, here's a quick briefing on what's happening around you while you wait.

Anthropic's annual recurring revenue has crossed nearly $7 billion and is reportedly targeting $15 billion by end of 2026. Their Model Context Protocol — the open standard for connecting AI to external tools and data — hit 97 million installs in March alone. OpenAI crossed one million business customers, with 92% of the Fortune 500 actively using ChatGPT. Enterprise seat growth: 9x year-over-year. Weekly messages: up 8x since late 2024.

These aren't "the tech sector is excited" numbers. These are "your competitors are already using this" numbers. The evaluation phase ended. The adoption phase is well underway. The question has shifted from whether to how well.

92% of the Fortune 500 is already using AI in their workflows. The competitive moat isn't in adopting AI anymore — it's in using it better than the person next to you.

Claude Just Got a Million-Token Context Window. Here's What That Actually Means.

Claude Opus 4.6 launched in February with a one-million-token context window. If that number doesn't mean much to you, here's a translation: one million tokens is roughly 750,000 words. That's about four full novels. Or an entire quarter's worth of customer support tickets. Or every piece of marketing content your team has ever produced. Or your three closest competitors' websites — simultaneously.

This is not a minor capability upgrade. The fundamental limitation of AI in business contexts has always been that it could only look at a small slice of your data at a time — a document here, a report there, a few emails at once. The 1M token context window removes that constraint in most practical business applications. You can now hand Claude your entire competitive research file and ask it to synthesize the three things your market does better than you. You can feed it a year of customer feedback and ask it to surface the five complaints that show up in a hundred different ways. You can give it your last twelve months of campaign performance data and ask it to build a hypothesis for why Q3 always underperforms.

The Managed Agents Beta Changes What "Using Claude" Means

Alongside the context window upgrade, Anthropic launched a Managed Agents public beta — a fully managed harness for running Claude as an autonomous agent. The pitch: go from prototype to production in days, with built-in orchestration, secure sandboxing, session tracing, and the infrastructure handled for you. This matters because the difference between "Claude as a chat tool" and "Claude as an agent" is the difference between asking a smart colleague a question and giving that colleague a project with a deadline.

Agents don't just respond. They plan, execute, check their work, and loop back when they hit a snag. For business workflows — competitive monitoring, data pipeline summaries, automated reporting, content production at scale — this represents a qualitative shift in what's buildable without a large engineering team.

Abstract data streams and glowing neural network visualization representing AI processing

Claude Cowork Goes Enterprise-Grade

Also shipping in April: Claude Cowork moved from preview to general availability with a meaningful set of enterprise features — role-based access controls, granular tool restrictions, team budgets, and usage analytics. For teams that have been nervous about deploying Claude internally at scale, these are the guardrails that make it viable. You can now define exactly which tools each team can access, set spending limits by department, and actually see how your organization is using the product. That's the difference between an experiment and a managed capability.

One million tokens is ~750,000 words. You can hand Claude your entire competitive research file, a year of customer feedback, and twelve months of campaign data — at the same time.

Agentic AI Just Landed in Your Marketing Stack — Whether You Noticed or Not

Here's something that's easy to miss in the broader AI noise: HubSpot and Salesforce are both rolling out AI systems that plan, test, and adapt campaigns with minimal human instruction. Not AI that helps you write an email. AI that decides which segments to target, builds the variants, runs the test, reads the results, and adjusts — and then surfaces a recommendation for the next cycle.

This is what "agentic AI in marketing" actually means in practice. It's not a chatbot bolted onto your CRM. It's a loop: strategy → execution → measurement → optimization → back to strategy, with AI doing most of the repetitive cognitive work in the middle.

The Numbers Are Already There

Pinterest's Performance+ campaigns — their AI-optimized ad product — are delivering over 20% reductions in cost-per-acquisition compared to traditional campaign setups. HubSpot's refined AI prediction models are showing 82% improvement lift in email performance outcomes. These aren't projected gains from a pitch deck. They're reported results from live campaigns running right now.

The practical implication for marketing teams is uncomfortable but important: the people who will thrive in this environment aren't the ones who are best at manual execution. They're the ones who are best at directing AI execution — defining the strategy, setting the guardrails, reading the outputs critically, and knowing when the machine got it wrong. That's a different skill set than what most teams have been optimizing for, and it's worth thinking about intentionally.

The Security Brief (Please Don't Skip This One)

Okay, we're going here. Not because it's fun — it's not — but because the gap between how confidently most businesses are deploying AI and how carefully they're thinking about security is genuinely alarming.

Your Employees Are Already Doing It

77% of employees paste company data into chatbots. Of those, 22% of those instances include confidential personal or financial information. That's not a hypothetical risk profile. That's what's happening in your organization right now, whether you have a policy about it or not. The people doing it aren't being malicious — they're trying to work faster, and AI makes them faster. But "works faster" and "handles sensitive data safely" are two things that require deliberate coordination, not happy accident.

Prompt Injection Is the Attack Nobody Takes Seriously Until It's Too Late

OWASP ranked prompt injection as the #1 LLM security vulnerability in their AI security framework, and 2025 saw a 340% year-over-year increase in documented attacks. The mechanism is simple and the impact is serious: an attacker embeds instructions in content that your AI system processes — a support ticket, a document, a webpage — and those instructions hijack the AI's behavior. In a customer service context, that might mean extracting data from other tickets. In a marketing context, it might mean manipulating what content gets surfaced or sent.

If you're building AI agents that process user-submitted content — and increasingly that's the whole point of these systems — prompt injection is not a theoretical concern. It's an active attack surface.

Laptop screen showing code and security monitoring interface in a dim workspace

Quick Security Checklist for AI Deployments

This doesn't have to be complicated. The basics cover most of the risk:

  • Data classification before AI integration. Know what's public, internal, confidential, and regulated. Set clear rules about which categories can enter which tools — and enforce them.
  • Enterprise tier for anything business-sensitive. Consumer AI tools have terms that may allow your inputs to train future models. Enterprise agreements don't. Know which you're using.
  • Input validation for AI agents. Any agent that processes external content needs guardrails around what that content can instruct it to do. This is an architectural concern, not just a policy one.
  • Usage visibility. If you can't see what your team is feeding into AI systems, you can't manage the risk. Claude Cowork's new usage analytics exist for exactly this reason.
  • Human checkpoints on consequential outputs. AI drafts the email, sends the campaign, generates the report — but somewhere in that chain, a human needs to have eyes on anything that represents the business to the outside world.

How to Actually Process More Than You Could Before

The reason we're excited about the 1M token context window isn't the number. It's the workflows it unlocks. Here's how we're thinking about this for the kinds of businesses we work with.

Competitive Intelligence at Depth

Manually monitoring three or four competitors is a part-time job. Loading their entire published content libraries — blog posts, case studies, product pages, job listings, press releases — into a single Claude session and running a structured synthesis across all of them takes about twenty minutes to set up and produces analysis that would have taken a research analyst a week. Not just "here's what they're saying" but "here's the narrative they're building, here's the customer they're chasing, here's the gap you could own."

Customer Feedback at Scale

If you have six months of support tickets, survey responses, app reviews, or sales call transcripts, you have a dataset that knows more about your customers' real frustrations than any focus group you could run. The problem has always been synthesis — there's too much of it to read, and keyword analysis only gets you so far. A 1M context window lets you hand over the whole corpus and ask smarter questions: What do customers in the $50K+ segment complain about that lower-tier customers don't? Where do churned customers describe the moment they started considering alternatives? What feature gets mentioned positively in reviews but almost never in sales calls?

The Briefcase Principle

More data in doesn't automatically mean better answers out. The quality of your synthesis is directly proportional to the quality of your questions. Think of the context window like the well-packed suitcase: you're not bringing everything you own — you're bringing exactly what you need for the job, organized so you can find it. That means structured data loads, clear task framing, and specific output formats. "Summarize this" is not a prompt. "Identify the three recurring objections in these sales call transcripts and rank them by frequency and deal impact" is a prompt.

Building Repeatable Systems, Not One-Off Experiments

The teams getting the most out of these capabilities aren't using AI ad hoc. They've built prompt libraries — curated, tested templates for their recurring analytical tasks — that encode both the task and the quality standard in the request itself. A competitive brief prompt that your whole team runs the same way every month produces consistent, comparable outputs. An ad hoc session produces results nobody can build on. The investment in a prompt library pays back every time it's used.

Pack Smart, Fly Fast

The AI landscape has genuinely shifted in the last 90 days. Models got bigger and cheaper. Agents went from demos to deployable infrastructure. The enterprise tooling matured enough that "not ready for business use" is no longer a defensible objection. And the security risks got more concrete, more documented, and more active.

None of this requires you to change everything overnight. But it does require knowing what's in the suitcase. Here's what we'd suggest doing this week:

  • Audit what AI tools your team is actually using — consumer or enterprise tier, and what data they're feeding in.
  • Pick one analytical workflow that's currently manual and run a single structured test with Claude's extended context. Competitive review, customer feedback synthesis, campaign performance analysis — pick the one with the most obvious ROI.
  • If you're deploying any AI agent that processes external input, have an honest conversation about prompt injection exposure before it goes live.
  • Read the terms of the tools you're using. Seriously. It takes thirty minutes and it's the most important thing most teams aren't doing.

The goose with the suitcase doesn't bring everything. It brings what the trip requires — organized, deliberate, ready to move. That's the energy this moment calls for.

JM

Joe Malott

Founder, One Bad Goose

Joe helps organizations find the balance between emerging technology and data-driven decisions. He's been building digital products for over 15 years and still gets excited about a well-structured spreadsheet.