Why intent-based systems must think in probabilities, patterns, and accumulated evidence.
By Mark Symmarian, OpenEFA Engineer | January 29, 2026
Modern email attacks don't look like attacks anymore.
They don't come from shady servers.
They don't contain obvious malware.
They don't trip a single, clear rule.
They look almost normal.
And that's exactly the problem.
Today's most successful attacks — especially Business Email Compromise (BEC), invoice fraud, and executive impersonation — are designed to stay below the detection threshold of any single security signal.
Individually, nothing looks dangerous.
Collectively, everything is wrong.
This is why binary, rule-based, or single-classifier systems keep failing — and why intent-based systems must think in probabilities, patterns, and accumulated evidence instead of yes/no decisions.
Traditional email security works like this:
"If this condition is true, block the message."
Or:
"If the ML model says malicious, quarantine it."
This approach assumes attacks are obvious.
They no longer are.
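As a minimal sketch, the traditional approach reduces to a single-condition gate. The function and field names below are invented for illustration, not any real product's API:

```python
# Hypothetical sketch of a traditional, binary email gate:
# one specific condition fires, or nothing happens at all.

def binary_gate(message: dict) -> str:
    """Act only when a single specific condition is true."""
    if message.get("has_malware_attachment"):
        return "block"
    if message.get("classifier_verdict") == "malicious":
        return "quarantine"
    return "deliver"  # anything below the trigger threshold sails through

# A BEC email with no malware and a passing classifier is simply delivered:
msg = {"has_malware_attachment": False, "classifier_verdict": "clean"}
print(binary_gate(msg))  # → deliver
```

The gate has no notion of "slightly wrong five times"; a message either trips a trigger or it is trusted.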
Modern attacks are engineered so that each individual signal looks fine.
The danger only appears when you connect the dots.
A weak signal is something that is slightly wrong, but not wrong enough to justify blocking on its own.
Examples: a sender address that almost matches a known identity, a first-ever contact between two parties, a faint note of urgency or secrecy in the language, a message sent at an unusual time.
Any one of these means nothing.
Several of them together mean everything.
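The arithmetic of weak signals can be sketched in a few lines. The signal names, weights, and threshold here are invented for the sketch, not a real feature set:

```python
# Invented weak signals and weights, for illustration only.
WEAK_SIGNALS = {
    "display_name_matches_exec_new_address": 0.25,
    "first_contact_between_parties": 0.15,
    "urgent_payment_language": 0.20,
    "sent_outside_usual_hours": 0.10,
}

ACTION_THRESHOLD = 0.5  # hypothetical cutoff for intervention

def accumulate(observed: list[str]) -> float:
    """Sum weak-signal weights; no single signal crosses the threshold."""
    return sum(WEAK_SIGNALS[s] for s in observed)

one = accumulate(["urgent_payment_language"])  # one clue: well below threshold
all_four = accumulate(list(WEAK_SIGNALS))      # together: clearly above it
print(one < ACTION_THRESHOLD, all_four > ACTION_THRESHOLD)  # → True True
```

Any single weight is harmless on its own; only the sum justifies action.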
Instead of asking:
"Does this message violate a rule?"
OpenEFA asks:
"What does the total evidence suggest about the intent of this message?"
Signals come from multiple independent dimensions, including:
Identity: Who is this really from? Does it align with historical identity patterns?
Relationship: Is this message normal for these two people?
Language: Is the language consistent with past behavior, or does it apply pressure, urgency, or secrecy?
Behavior: Is the timing, frequency, or targeting unusual?
Structure: Is the message constructed in the way attackers typically construct messages?
Consistency: Is this entity behaving consistently with its past?
None of these decide alone.
They contribute.
Instead of a single pass/fail decision, OpenEFA builds a multi-dimensional confidence score.
Conceptually, the system doesn't ask:
"Is this malicious?"
It asks:
"Given everything we know, how likely is this to be malicious in intent?"
As evidence accumulates, confidence increases.
This is how humans reason.
This is how investigators reason.
This is how modern AI-based security must reason.
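One common way to make "accumulated evidence becomes confidence" concrete is Bayesian accumulation in log-odds space. The sketch below illustrates the reasoning pattern only; the dimensions, likelihood ratios, and prior are all assumptions, not OpenEFA's actual scoring model:

```python
import math

def intent_probability(likelihood_ratios: dict[str, float],
                       prior: float = 0.01) -> float:
    """Combine independent evidence via Bayes' rule in log-odds space."""
    log_odds = math.log(prior / (1 - prior))   # start from the base rate
    for lr in likelihood_ratios.values():
        log_odds += math.log(lr)               # each weak signal nudges the odds
    return 1 / (1 + math.exp(-log_odds))       # back to a probability

# Hypothetical likelihood ratios: how much more often each signal
# appears in fraud than in legitimate mail.
evidence = {
    "identity_mismatch": 8.0,
    "no_prior_relationship": 3.0,
    "pressure_language": 5.0,
    "unusual_timing": 2.0,
}

single = intent_probability({"unusual_timing": 2.0})
combined = intent_probability(evidence)
print(f"one weak signal:  {single:.2f}")    # stays near the prior
print(f"all four signals: {combined:.2f}")  # confidence has accumulated
```

No single likelihood ratio moves the needle far from the base rate; the product of several does, which is exactly the "accumulated evidence" behavior described above.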
Rule-based systems and simple classifiers suffer from a fundamental flaw:
They must trigger on something specific.
That means an attacker who learns the triggers can route around them, and a trigger broad enough to catch everything drowns users in false positives.
OpenEFA does neither.
Because no single trigger exists to avoid, and confidence is built from many independent dimensions at once, intent-based systems can be harder to evade, more resilient to novel attacks, and more accurate on messages that look almost normal.
Imagine this email: a message that appears to come from the CEO, asking finance for an urgent wire transfer.
No malware.
No links.
No obvious red flags.
Authentication passes.
But the address has never written to this recipient before, the display name matches the CEO while the sending domain does not, the language applies pressure and asks for secrecy, and the timing is unusual.
Each signal alone is weak.
Together, they tell a very clear story.
The intent score rises — and the system intervenes.
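The escalation can be sketched as a running score. Every signal name, weight, and the threshold below is hypothetical; only the pattern matters: authentication contributes nothing, each weak signal adds a little, and the cumulative total is what triggers intervention:

```python
# Hypothetical scenario signals and weights, for illustration only.
scenario = [
    ("authentication_passes",        0.00),  # SPF/DKIM/DMARC pass: adds nothing
    ("ceo_name_unfamiliar_address",  0.30),
    ("no_prior_thread",              0.20),
    ("wire_transfer_urgency",        0.30),
    ("request_for_secrecy",          0.25),
]

ACT_THRESHOLD = 0.6  # hypothetical intervention point

score = 0.0
for name, weight in scenario:
    score += weight
    status = "INTERVENE" if score >= ACT_THRESHOLD else "watch"
    print(f"{name:30s} cumulative={score:.2f}  {status}")
```

The third or fourth weak signal, not any single one, is what pushes the score across the line.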
You don't decide something is suspicious because of one clue.
You decide because the clues accumulate, the pattern matches ones you have seen before, and the overall story doesn't add up.
OpenEFA works the same way.
Attackers are no longer trying to break systems.
They are trying to convince people.
That means security systems must understand identity, relationships, language, and behavior.
Not just files and links.
If a system is making complex, multi-dimensional judgments, the next critical question is:
"How do you explain those decisions to humans?"
That's the topic of our next article:
Why Explainability Matters in AI-Driven Email Security
Because trust, compliance, and control require more than just "the AI said so."