
Deep Dive: How AI Analysis Catches What Rules Miss

An inside look at how Sentinel Nerd's AI analysis engine uses GPT-4 to explain threats, map to MITRE ATT&CK, and reduce alert fatigue for UniFi administrators.

Tony Martinez

#ai #gpt-4 #analysis #features

Detection rules are great at catching known patterns. But what about the alerts that match a rule but need human judgment to assess? Or the subtle indicators that don’t fit any existing rule? That’s where Sentinel Nerd’s AI analysis engine comes in.

In this deep dive, we’ll explain exactly how AI analysis works, what it can and can’t do, and how it’s helping UniFi administrators cut through alert noise to focus on real threats.

The Problem: Alert Fatigue

Every UniFi IDS/IPS deployment generates alerts. Lots of them. A typical small business UniFi network produces 50-200 IDS alerts per day. A larger deployment might see thousands.

Most of these are false positives or low-priority events. But buried in the noise are the alerts that actually matter — the ones that indicate an active attack or a compromised device.

The challenge isn’t detection. It’s triage. Security teams spend hours reviewing alerts, googling signature IDs, checking IP reputation, and trying to determine if an alert is a real threat or harmless noise. That’s time better spent on actual security improvements.

How AI Analysis Works

When you trigger AI analysis on an alert in Sentinel Nerd, here’s what happens:

1. Context Assembly

The system gathers all relevant context about the alert:

  • Alert metadata — Rule name, severity, timestamp, source/destination
  • Event data — The raw event that triggered the alert, including packet metadata
  • Threat intelligence — IP reputation scores, GeoIP data, ASN information, known threat associations
  • Historical context — Has this source IP triggered alerts before? How often? What types?
  • Device profile — What device generated or received this traffic? What’s its normal behavior?
  • Network context — Which VLAN, what other recent events from the same segment?
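To make the assembly step concrete, here is a minimal sketch of how that context bundle might be gathered. The `AlertContext` type, `assemble_context` function, and the per-source `stores` lookup are hypothetical illustrations, not Sentinel Nerd's actual internals:

```python
from dataclasses import dataclass, field


@dataclass
class AlertContext:
    """Bundle of everything the analysis step needs about one alert."""
    metadata: dict        # rule name, severity, timestamp, src/dst
    event: dict           # raw triggering event (packet metadata only)
    threat_intel: dict    # IP reputation, GeoIP, ASN, known associations
    history: list = field(default_factory=list)   # prior alerts from this source
    device: dict = field(default_factory=dict)    # device profile / baseline
    network: dict = field(default_factory=dict)   # VLAN and segment activity


def assemble_context(alert: dict, stores: dict) -> AlertContext:
    """Gather context for one alert from hypothetical lookup stores."""
    src = alert["src_ip"]
    return AlertContext(
        metadata={k: alert[k] for k in ("rule", "severity", "timestamp", "src_ip", "dst_ip")},
        event=alert.get("event", {}),
        threat_intel=stores["intel"].get(src, {}),
        history=stores["history"].get(src, []),
        device=stores["devices"].get(alert.get("dst_ip"), {}),
        network=stores["network"].get(alert.get("vlan"), {}),
    )
```

The key point is that every lookup is keyed off fields already present in the alert, so assembly is cheap relative to the AI call that follows.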

2. AI Processing

This assembled context is sent to GPT-4 with a carefully engineered prompt that asks for:

  • A plain-English explanation of what happened
  • Severity assessment with reasoning
  • MITRE ATT&CK technique mapping
  • Potential impact assessment
  • Recommended response actions
  • False positive likelihood

The prompt is tuned specifically for UniFi environments. It understands UniFi-specific event types, knows the difference between a Protect motion alert and a Network IDS event, and provides UniFi-specific remediation steps.
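As a rough sketch of the shape of such a prompt (the field names and wording here are illustrative assumptions, not the production prompt), the request can pin the model to a fixed set of output keys so the response is machine-parseable:

```python
import json

# The six outputs the article lists, as JSON keys (illustrative names)
ANALYSIS_FIELDS = [
    "explanation",               # plain-English summary of what happened
    "severity_assessment",       # severity with reasoning
    "mitre_techniques",          # MITRE ATT&CK technique IDs
    "impact",                    # potential impact assessment
    "recommended_actions",       # response steps
    "false_positive_likelihood",
]


def build_prompt(context: dict) -> str:
    """Compose an analysis prompt from the assembled alert context."""
    return (
        "You are a security analyst for UniFi environments. Given the alert "
        "context below, respond with a JSON object containing exactly these keys: "
        + ", ".join(ANALYSIS_FIELDS)
        + ".\n\nAlert context:\n"
        + json.dumps(context, indent=2)
    )
```

Constraining the output to named keys is what lets the dashboard render each section (explanation, severity, ATT&CK mapping, and so on) in a consistent layout.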

3. Result Caching

Analysis results are cached so that similar alerts don’t require re-analysis. If you’ve already analyzed a particular type of SSH brute force from a specific ASN, future similar alerts reference the cached analysis. This keeps costs down and response times fast.

Cache entries expire based on configurable TTLs and are invalidated when threat intelligence data changes.
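A minimal sketch of that caching scheme, assuming similarity is defined by rule, source ASN, and alert category (the class and method names are hypothetical):

```python
import hashlib
import time


class AnalysisCache:
    """TTL cache keyed on the alert features that make analyses comparable."""

    def __init__(self, ttl_seconds: float = 3600):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, analysis)

    @staticmethod
    def key(rule: str, asn: str, category: str) -> str:
        # Similar alerts (same rule, source ASN, category) share one entry,
        # e.g. repeated SSH brute force attempts from one hosting provider.
        return hashlib.sha256(f"{rule}|{asn}|{category}".encode()).hexdigest()

    def get(self, key: str):
        entry = self._store.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]
        self._store.pop(key, None)  # expired or missing
        return None

    def put(self, key: str, analysis: dict) -> None:
        self._store[key] = (time.monotonic() + self.ttl, analysis)

    def invalidate_all(self) -> None:
        """Called when upstream threat intelligence changes."""
        self._store.clear()
```

The TTL keeps stale verdicts from lingering, and the blanket invalidation hook mirrors the behavior described above when threat intelligence data changes.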

MITRE ATT&CK Mapping

Every AI analysis maps the alert to relevant MITRE ATT&CK techniques. This is critical for:

  • Understanding the attack phase — Is this reconnaissance, initial access, lateral movement, or exfiltration?
  • Predicting next steps — If you see technique T1110 (Brute Force), what typically comes next?
  • Compliance reporting — Many frameworks require ATT&CK mapping for incident documentation
  • Team communication — ATT&CK provides a shared vocabulary for discussing threats

Example mapping from an AI analysis:

MITRE ATT&CK Techniques:

  • T1110.001 — Brute Force: Password Guessing (Primary)
  • T1078 — Valid Accounts (Objective)
  • T1021.004 — Remote Services: SSH (Execution method)

Kill Chain Phase: Initial Access

Analysis: This alert indicates an active password guessing attack against your SSH service. The attacker is in the Initial Access phase, attempting to obtain valid credentials. If successful, the next expected techniques would be T1078 (using valid accounts for persistence) followed by T1083 (file and directory discovery) for internal reconnaissance.
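The "predicting next steps" idea from the example above can be sketched as a simple successor lookup. The table below covers only the techniques mentioned in this analysis and is purely illustrative, not a complete ATT&CK model:

```python
# Hypothetical next-step table, taken from the example analysis above:
# Brute Force -> Valid Accounts -> File and Directory Discovery
LIKELY_NEXT = {
    "T1110": ["T1078"],   # Brute Force -> Valid Accounts
    "T1078": ["T1083"],   # Valid Accounts -> File and Directory Discovery
}


def predict_next(technique_id: str) -> list:
    """Return the techniques that typically follow the given one."""
    base = technique_id.split(".")[0]  # sub-techniques inherit the parent's successors
    return LIKELY_NEXT.get(base, [])
```

In practice the model produces this chain itself; a lookup like this is only useful for pre-populating hints or validating the model's output.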

Real-World Examples

Case Study 1: The Printer That Called Home

A medium alert flagged unusual DNS queries from a network printer on VLAN 40 (IoT). The IDS rule matched on “ET POLICY DNS Query to DynDNS,” which is often a false positive.

AI analysis revealed:

Assessment: Likely Compromised Device

This network printer (HP LaserJet, MAC: AA:BB:CC:DD:EE:FF) is making DNS queries to a dynamic DNS domain (d4rk.dyndns.org) every 30 seconds. This pattern is consistent with a command-and-control beacon. HP LaserJet printers of this model series were affected by CVE-2023-XXXX, which allows remote code execution via a crafted print job.

Recommended actions:

  1. Immediately isolate this device by moving it to a quarantine VLAN
  2. Check the printer’s firmware version and update if vulnerable
  3. Review print job logs for suspicious submissions
  4. Block the DynDNS domain at the firewall
  5. Scan other printers of the same model on your network

Without AI analysis, this alert might have been dismissed as a false positive. The AI connected the dots between the printer model, the DNS pattern, and known vulnerabilities.
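The "every 30 seconds" pattern in this case study is the classic tell: C2 beacons fire on a fixed timer, so the gaps between events are unusually regular. A minimal sketch of that check, using the coefficient of variation of inter-arrival gaps (the function and threshold are illustrative, not Sentinel Nerd's detector):

```python
import statistics


def looks_like_beacon(timestamps: list, max_jitter: float = 0.1,
                      min_events: int = 5) -> bool:
    """Flag near-constant intervals between events (e.g. DNS queries).

    A C2 beacon typically fires on a fixed timer, so inter-arrival gaps
    cluster tightly around one value; legitimate traffic is burstier.
    """
    if len(timestamps) < min_events:
        return False
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    mean = statistics.mean(gaps)
    if mean <= 0:
        return False
    # Coefficient of variation: spread of the gaps relative to their mean.
    return statistics.pstdev(gaps) / mean < max_jitter
```

A query every 30 seconds gives near-zero jitter and trips the check; ordinary browsing or print traffic does not.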

Case Study 2: Legitimate Traffic Flagged as Malicious

A high-severity alert fired for “ET TROJAN Generic - POST To gate.php” from an internal workstation. This looked alarming at first glance.

AI analysis determined:

Assessment: False Positive (High Confidence)

The destination IP (203.0.113.50) hosts a legitimate SaaS application (Acme CRM) that uses a PHP backend. The URI pattern /api/gate.php matches the Emerging Threats signature for generic trojan POST requests, but the destination is a well-known commercial service with a clean reputation score (AbuseIPDB: 0, VirusTotal: 0/94).

The traffic pattern (POST requests every 5 minutes with small payloads) is consistent with normal CRM sync activity.

Recommended actions:

  1. Create a suppression rule for this specific destination to reduce noise
  2. No further investigation needed

This saved the admin 30 minutes of investigation and prevented an unnecessary escalation.

Case Study 3: Coordinated Attack Across Sources

Three separate medium alerts fired within 2 minutes: a failed UniFi Access door entry, a new device connection on the corporate VLAN, and an IDS alert for internal port scanning. Individually, each was medium severity.

The admin ran AI analysis on the cluster:

Assessment: Coordinated Physical-Digital Attack (Critical)

These three events are temporally correlated and likely represent a coordinated attack:

  1. Failed badge attempt at the server room door (unknown credential)
  2. 90 seconds later, a new device (Raspberry Pi, based on MAC OUI) connects to the corporate VLAN via Ethernet
  3. 30 seconds after connection, the device begins scanning internal ports

This pattern suggests an attacker who failed to gain physical access to the server room, then plugged a rogue device into an accessible network port to attempt digital access instead.

MITRE ATT&CK: T1200 (Hardware Additions), T1046 (Network Service Discovery)

Recommended actions:

  1. Immediately block the rogue device’s MAC address
  2. Identify which physical port it’s connected to and disable it
  3. Review security camera footage around the server room for the last 15 minutes
  4. Check all other network ports in the area for unauthorized devices
  5. Consider enabling 802.1X port authentication

This is the kind of insight that’s nearly impossible to get from rules alone. The AI connected events across three different UniFi products to identify a coordinated attack pattern.
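The cross-product correlation in this case study boils down to grouping events that fall within a short time window and escalating clusters that span more than one product. A minimal sketch, assuming events arrive as `(timestamp, product, summary)` tuples (the function name and window are illustrative):

```python
from datetime import datetime, timedelta


def correlate(events: list, window: timedelta = timedelta(minutes=2)) -> list:
    """Group time-adjacent events and keep clusters spanning multiple products.

    Each event is a (timestamp, product, summary) tuple; a cluster that
    mixes products (e.g. Access + Network + IDS) is worth escalating
    as one combined incident.
    """
    events = sorted(events)
    clusters, current = [], []
    for ev in events:
        # Start a new cluster if this event falls outside the window
        # measured from the current cluster's first event.
        if current and ev[0] - current[0][0] > window:
            clusters.append(current)
            current = []
        current.append(ev)
    if current:
        clusters.append(current)
    return [c for c in clusters if len({e[1] for e in c}) > 1]
```

Applied to the case study's timeline, the badge failure, rogue device connection, and port scan land in one cluster, while unrelated events hours apart drop out.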

Accuracy and False Positives

We continuously measure AI analysis accuracy:

  • Correct severity assessment: 91% agreement with expert human review
  • Correct threat/benign classification: 94% accuracy
  • Actionable recommendations: 87% of recommendations rated as useful by administrators
  • MITRE ATT&CK mapping: 89% accuracy when validated against confirmed incidents

The 6-9% error rate is why AI analysis is an aid, not a replacement for human judgment. It’s designed to accelerate your triage process and surface important context, not to make final decisions autonomously.

Privacy and Data Handling

We take data privacy seriously in AI analysis:

  • No raw packet data is sent to the AI model — only alert metadata, threat intelligence, and device context
  • PII is stripped before analysis — MAC addresses and internal IPs are anonymized
  • Results are stored in your Sentinel Nerd instance, not retained by the AI provider
  • Opt-in per alert — AI analysis only runs when you explicitly request it (or configure automatic analysis for specific severity levels)
  • Data residency — Enterprise customers can choose their data processing region

Full details are in our privacy policy and security practices.

Getting Started

AI analysis is available on all plans:

  • Starter — 50 analyses per month
  • Pro — 500 analyses per month
  • Enterprise — Unlimited analyses

To use it, click the Analyze with AI button on any alert in your dashboard, or configure automatic analysis in Settings > AI Analysis for specific severity levels or rule categories.


AI analysis doesn’t replace your security expertise — it amplifies it. By handling the initial triage and context assembly, it frees you to focus on what humans do best: making judgment calls about risk and response.

Try it on your noisiest alert category first. You’ll be surprised how many of those medium-severity alerts have been wasting your time — and how the occasional real threat hidden in the noise suddenly becomes obvious.
