Our mission is to provide comprehensive and actionable intelligence to businesses, government agencies, and private clients.
With a team of experienced intelligence collectors and analysts, many with backgrounds in intelligence services, military, law enforcement, and academia, we are committed to delivering insights that drive informed decision-making.
Your AI Is Not an Analyst. Stop Treating It Like One.
Dear Decision Maker,
I have to be honest with you about something.
Every week, I see another headline about how AI is going to "revolutionise" intelligence analysis.
Another conference panel.
Another vendor promising that their large language model will replace your analysts and deliver decision advantage on a silver platter.
And every week, I watch seasoned professionals nod along like this is inevitable.
This is further strengthened by stories like the US Department of War's alleged use of Anthropic's Claude AI (my personal favourite) to plan and execute Operation Absolute Resolve.
Aka the capture of Nicolás Maduro in his fortified bunker in Caracas, Venezuela.
But here's what they're not telling you.
We use AI extensively at Grey Dynamics. I'm not going to pretend otherwise.
We experiment constantly; we push it, we test it, we see how far it can go.
I'm excited about what's happening in this space.
But I'm also thinking out loud right now because this is stuff that's been sitting in my mind for a while, and I think it needs to be said:
The accountability in your analysis needs to come from you.
Your assessment is shaped by:
Your experiences
Your training
Your worldview
Your media diet
And that is incredibly difficult for large language models to grasp.
Maybe we'll get there. The state of the art is moving fast; it feels like every week there is a new leap. But right now?
If you're outsourcing your analytical judgement to a chatbot, you're not doing intelligence.
You're doing something else.
What AI Actually Does Well
Look, I'm not here to bash the technology.
We use it every day.
But you have to understand what it's good at versus what people are pretending it can do.
Where AI delivers real value:
Triage: Clustering thousands of incoming reports by topic, flagging anomalies that would take a human days to spot
Translation: A blog post in Farsi that reveals protest planning? Translated instantly. An analyst who would never have seen it now has it on their desk
Summarisation: Distilling fifty open-source reports across three languages down to core developments; a genuine force multiplier
Stress-testing: If you learn how to prompt properly, you can use AI as a sparring partner. Challenge your own assumptions. Ask it to argue the opposite case
That's real value. We use it for exactly this: first-pass triage so our analysts can spend their time on what actually requires a human brain.
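At its simplest, first-pass triage can be sketched as keyword bucketing. The sketch below is purely illustrative: the `REPORTS`, `TOPICS`, and `triage` names are hypothetical, and a real pipeline would lean on an LLM or embedding model rather than literal keyword matching, but the shape of the workflow is the same: cluster first, then hand the clusters to a human.

```python
from collections import defaultdict

# Toy incoming reports (invented examples, not real data)
REPORTS = [
    "Protest planning reported near the port district",
    "Fuel shortages driving protest activity in the capital",
    "New shipping route opened at the port",
    "Capital fuel depot resupplied overnight",
]

# Minimal keyword buckets an analyst might define up front
TOPICS = {
    "unrest": {"protest", "riot", "demonstration"},
    "logistics": {"port", "shipping", "fuel", "depot"},
}

def triage(reports, topics):
    """Assign each report to every topic whose keywords it mentions."""
    clusters = defaultdict(list)
    for report in reports:
        words = set(report.lower().split())
        for topic, keywords in topics.items():
            if words & keywords:  # any keyword overlap -> bucket it
                clusters[topic].append(report)
    return dict(clusters)

clusters = triage(REPORTS, TOPICS)
for topic, items in clusters.items():
    print(f"{topic}: {len(items)} report(s)")
```

The point of the sketch is the division of labour: the machine sorts, the analyst reads what the sorting surfaces.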
Where It Falls Apart
Here's the ground truth.
An LLM cannot call a source in Khartoum and ask them what the mood is on the street.
It cannot sit across from a commander and read the body language that tells you more than the briefing ever will.
It cannot pick up the phone to a contact in Abuja and hear the hesitation in their voice when they say "everything is fine."
This is the part people keep skipping over.
The intelligence community, both government and private sector, is so enamoured with the technology that they're forgetting the thing that has always made intelligence work: Human networks.
HUMINT beats OSINT when it matters most.
I know that's not a popular thing to say in 2026 when everyone is in love with their dashboards.
But when a client needs to understand whether a deal in a volatile market is going to hold or collapse, the answer does not live in a data set.
It lives in a conversation with someone who is in the room.
I've talked about this before and I'm sorry if I sound like a broken record.
But it is important.
AI gets you maybe 80% of the way there. The last 20%: the nuance, the context, the cultural intelligence, the stuff that actually determines whether your assessment is right or wrong. That requires a human.
Plan accordingly.
The Real Danger Nobody's Talking About
And I think this is the most important point, so please don't skip this part.
The real risk of AI in intelligence is not that it's inaccurate.
It's that it's confidently inaccurate.
"A model that hallucinates a fact and presents it with the same tone and polish as a verified one is more dangerous than having no information at all."
Why? Because at least with a gap in your intelligence, you know you don't know.
With a hallucination dressed up as analysis, you think you know. And you're wrong.
This is compounded by confirmation bias. We talk about this a lot with young analysts.
If you're already inclined to believe a certain narrative; if your media diet is pushing you in one direction; if you're comfortable in your echo chamber, and then an AI reinforces that exact narrative back to you, packaged beautifully with citations?
You're not doing analysis. You're decorating your assumptions.
So What Should You Actually Do?
Use AI as a force multiplier for the grunt work.
The practical breakdown:
Let AI handle: translation, triage, clustering, summarisation, initial research sweeps
Keep human: analytical judgement, source verification, cultural context, relationship intelligence
Use AI as a sparring partner: challenge your own thinking, not as a replacement for it
Always verify: cross-reference what it tells you with your sources and your networks
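The breakdown above is essentially a routing rule, and it can be made explicit. This is a hypothetical sketch, not a Grey Dynamics system: the task names and sets are illustrative, and the key design choice is the default, where anything not explicitly safe to automate falls back to human review rather than to the machine.

```python
# Illustrative division of labour between AI and human analysts
AI_TASKS = {"translation", "triage", "clustering", "summarisation", "initial_research"}
HUMAN_TASKS = {"analytical_judgement", "source_verification",
               "cultural_context", "relationship_intelligence"}

def route(task: str) -> str:
    """Route a task to the AI queue or the human queue.

    Unknown tasks default to human review: never automate by accident.
    """
    if task in AI_TASKS:
        return "ai"
    if task in HUMAN_TASKS:
        return "human"
    return "review"

print(route("translation"))          # an AI-suitable task
print(route("source_verification"))  # a human-only task
```

Defaulting the unknown case to "review" is the "always verify" rule expressed in code.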
This is exactly why we teach this in the Grey Dynamics Intelligence School.
Our upcoming Operational OSINT course walks through twelve modules on how to integrate these tools into the intelligence cycle without losing the thing that makes you valuable.
Because the analysts who will thrive are not the ones who become dependent on AI.
They're the ones who become more dangerous with it.
Information costs money. Intelligence makes money. Don't let a machine confuse the two for you.
Ahmed Hassan
CEO, Grey Dynamics
Where headlines end, ground truth begins
Most people scroll. Professionals structure. The Intelligence Cycle Fundamentals Program teaches you how to analyse threats, map influence, and predict the next move—with zero prior intel background.