Scoring Malicious Intent: How Weak Signals Become Strong Conclusions

Why intent-based systems must think in probabilities, patterns, and accumulated evidence.

By Mark Symmarian, OpenEFA Engineer | January 29, 2026

Modern email attacks don't look like attacks anymore.

They don't come from shady servers.
They don't contain obvious malware.
They don't trip a single, clear rule.

They look almost normal.

And that's exactly the problem.

Today's most successful attacks — especially Business Email Compromise (BEC), invoice fraud, and executive impersonation — are designed to stay below the detection threshold of any single security signal.

Individually, nothing looks dangerous.
Collectively, everything is wrong.

This is why binary, rule-based, or single-classifier systems keep failing — and why intent-based systems must think in probabilities, patterns, and accumulated evidence instead of yes/no decisions.


The Problem With Binary Thinking in Email Security

Traditional email security works like this:

"If this condition is true, block the message."

Or:

"If the ML model says malicious, quarantine it."

This approach assumes attacks are obvious.

They are not anymore.

Modern attacks are:

- Sent from legitimate, authenticated infrastructure
- Free of malware, exploits, and suspicious links
- Written in plausible, human-sounding language
- Spread across many weak signals instead of one strong one

Each individual signal looks fine.

The danger only appears when you connect the dots.


What Is a "Weak Signal"?

A weak signal is something that is slightly wrong, but not wrong enough to justify blocking on its own.

Examples:

- A display name that matches an executive, but an address that doesn't fit their history
- A message sent at an unusual time for that sender
- Language that applies mild pressure, urgency, or secrecy
- A request that is out of pattern for the relationship between sender and recipient

Any one of these means nothing.

Several of them together mean everything.


How OpenEFA® Thinks About Signals

Instead of asking:

"Does this message violate a rule?"

OpenEFA asks:

"What does the total evidence suggest about the intent of this message?"

Signals come from multiple independent dimensions, including:

Identity & Authentication

Who is this really from? Does this align with historical identity patterns?

Context & Relationship

Is this message normal for these two people?

Linguistic & Psychological Cues

Is the language consistent with past behavior, or does it apply pressure, urgency, secrecy?

Behavioral Patterns

Is the timing, frequency, or targeting unusual?

Structural & Technical Characteristics

Is the message constructed the way attackers typically construct theirs?

Historical Trust & Reputation

Is this entity behaving consistently with its past?

None of these decide alone.

They contribute.
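
As a sketch (with made-up evaluator names, not OpenEFA's API), each dimension can be seen as an independent evaluator whose output is evidence, never a verdict:

```python
# Sketch: each dimension contributes evidence independently.
# Evaluator names and return values are illustrative inventions.

def identity_evidence(msg):      # "Who is this really from?"
    return 0.2 if msg.get("addr_matches_history") is False else 0.0

def relationship_evidence(msg):  # "Is this normal for these two people?"
    return 0.2 if msg.get("first_contact") else 0.0

def language_evidence(msg):      # "Pressure, urgency, secrecy?"
    return 0.3 if msg.get("pressure_language") else 0.0

DIMENSIONS = [identity_evidence, relationship_evidence, language_evidence]

def total_evidence(msg):
    """None of these decide alone; each contributes to the total."""
    return sum(dim(msg) for dim in DIMENSIONS)
```

Note that no evaluator returns "malicious" or "clean". Each answers only its own narrow question, and only the combined total is ever judged.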


From Signals to Score

Instead of a single pass/fail decision, OpenEFA builds a multi-dimensional confidence score.

Conceptually:
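
A minimal sketch, assuming a simple weighted sum: the signal names, weights, and threshold below are illustrative inventions, not OpenEFA's actual model.

```python
# Illustrative sketch of evidence accumulation.
# Signal names, weights, and the threshold are hypothetical.

SIGNAL_WEIGHTS = {
    "identity_mismatch":     0.25,  # identity & authentication
    "unusual_relationship":  0.20,  # context & relationship
    "pressure_language":     0.25,  # linguistic & psychological cues
    "odd_timing":            0.10,  # behavioral patterns
    "attack_like_structure": 0.20,  # structural & technical
}

INTERVENE_AT = 0.5  # hypothetical intervention threshold

def intent_score(observed: set) -> float:
    """Each observed weak signal contributes; none decides alone."""
    total = sum(w for name, w in SIGNAL_WEIGHTS.items() if name in observed)
    return min(total, 1.0)

# One weak signal stays well below the threshold:
assert intent_score({"pressure_language"}) < INTERVENE_AT

# Several together cross it:
assert intent_score({"identity_mismatch",
                     "unusual_relationship",
                     "pressure_language"}) >= INTERVENE_AT
```

The particular combination function matters less than the principle: evidence from independent dimensions accumulates, and only the total is compared against a threshold.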

The system doesn't ask:

"Is this malicious?"

It asks:

"Given everything we know, how likely is this to be malicious in intent?"

As evidence accumulates, confidence increases.

This is how humans reason.
This is how investigators reason.
This is how modern AI-based security must reason.


Why This Dramatically Reduces False Positives

Rule-based systems and simple classifiers suffer from a fundamental flaw:

They must trigger on something specific.

That means:

- Tuned loosely enough to catch stealthy attacks, they fire on legitimate mail
- Tuned tightly enough to avoid false positives, they miss attacks designed to stay below the threshold

OpenEFA does neither.

Because:

- No single signal has to carry the decision
- Weak evidence only matters when independent dimensions corroborate it

This is why intent-based systems can be:

- Aggressive against coordinated attack patterns
- Forgiving of isolated, innocent anomalies


A Real-World Example (Simplified)

Imagine this email:

From: CEO Name
Subject: Quick favor

Are you available? I need you to take care of something discreet before the end of the day.

No malware.
No links.
No obvious red flags.
Authentication passes.

But…

- The sending address doesn't match the CEO's historical identity patterns
- The sender has never made a request like this to this recipient
- The language applies pressure, urgency, and secrecy
- The timing and targeting are unusual

Each signal alone is weak.

Together, they tell a very clear story.

The intent score rises — and the system intervenes.


This Is How Humans Actually Think

You don't decide something is suspicious because of one clue.

You decide because:

- Several small things feel off at the same time
- The pattern doesn't fit what you know about the person
- The context doesn't fit the request

OpenEFA works the same way.


Why This Matters More Every Year

Attackers are no longer trying to break systems.

They are trying to convince people.

That means security systems must understand:

- Who is really speaking
- How people normally behave toward each other
- What language, pressure, and timing reveal about intent

Not just files and links.


What Comes Next

If a system is making complex, multi-dimensional judgments, the next critical question is:

"How do you explain those decisions to humans?"

That's the topic of our next article:

Why Explainability Matters in AI-Driven Email Security

Because trust, compliance, and control require more than just "the AI said so."
