Rust + AI: Building Intelligent Security Automation Tools

Combine Rust performance with AI APIs to analyze logs, enrich alerts, and automate response—safely.


Use Rust for stable pipelines and call AI only with strong guardrails. This lab builds a small, local enrichment CLI with a dry-run summarizer so you can validate flows without real API calls.

What You’ll Build

  • A Rust CLI that ingests alerts from JSONL, applies a deterministic “AI” summary (offline), and prints enriched alerts.
  • Hooks for real AI providers with clear guardrails (token control, rate limits).
  • Validation, common fixes, and cleanup.

Prerequisites and Safety Rules

  • macOS or Linux with Rust 1.80+.
  • No external API required (offline summarizer). If you later call a provider, you’ll need network access and an API token.
  • Never send production alerts to third-party AI without a data-sharing agreement.
  • Strip PII/secrets before prompts; keep logs encrypted and access-controlled.
  • Keep humans in the approval loop for any response actions.

Step 1) Prepare sample alerts

cat > alerts.jsonl <<'JSONL'
{"id":"a-1","source":"crowdstrike","severity":"high","message":"Outbound connection to 198.51.100.10 from temp binary."}
{"id":"a-2","source":"siem","severity":"medium","message":"Multiple failed logins for user alice from 203.0.113.5"}
JSONL
Validation: `wc -l alerts.jsonl` should show 2.

Step 2) Create the Rust project

cargo new rust-ai-enricher
cd rust-ai-enricher
Validation: `ls` shows `Cargo.toml` and `src/main.rs`.

Step 3) Add dependencies

Replace Cargo.toml with:

[package]
name = "rust-ai-enricher"
version = "0.1.0"
edition = "2021"

[dependencies]
tokio = { version = "1.40", features = ["full"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
clap = { version = "4.5", features = ["derive"] }
anyhow = "1.0"
Validation: `cargo check` should pass after the code is added.

Step 4) Implement offline enrichment with a hook for real AI

Replace src/main.rs with:

use clap::Parser;
use serde::{Deserialize, Serialize};
use std::fs::File;
use std::io::{BufRead, BufReader};

#[derive(Deserialize, Serialize, Debug)]
struct Alert {
    id: String,
    source: String,
    severity: String,
    message: String,
}

#[derive(Serialize, Debug)]
struct EnrichedAlert {
    id: String,
    severity: String,
    message: String,
    summary: String,
    next_steps: Vec<String>,
}

#[derive(Parser, Debug)]
#[command(author, version, about)]
struct Args {
    /// Path to alerts JSONL
    #[arg(long, default_value = "../alerts.jsonl")]
    file: String,
    /// Dry-run (use offline summary instead of calling an API); pass --dry-run false to disable
    #[arg(long, default_value_t = true, action = clap::ArgAction::Set)]
    dry_run: bool,
}

fn offline_summarize(alert: &Alert) -> EnrichedAlert {
    let summary = format!("{}: {}", alert.severity, alert.message);
    let mut steps = vec!["Validate legitimacy with logs".to_string()];
    if alert.severity.to_lowercase() == "high" {
        steps.push("Isolate host or block destination pending review".to_string());
    }
    steps.push("Document findings and ticket owner".to_string());
    EnrichedAlert {
        id: alert.id.clone(),
        severity: alert.severity.clone(),
        message: alert.message.clone(),
        summary,
        next_steps: steps,
    }
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let args = Args::parse();
    let file = File::open(&args.file)?;
    let reader = BufReader::new(file);

    for line in reader.lines() {
        let line = line?;
        if line.trim().is_empty() {
            continue;
        }
        let alert: Alert = serde_json::from_str(&line)?;

        // Hook: when dry-run is disabled, call a real AI provider here (signed, rate-limited, logged)
        let enriched = if args.dry_run {
            offline_summarize(&alert)
        } else {
            anyhow::bail!("no AI provider configured; rerun with --dry-run true")
        };

        println!("{}", serde_json::to_string_pretty(&enriched)?);
    }
    Ok(())
}
Validation:
cargo run -- --file ../alerts.jsonl
Expected: Two enriched alerts printed with summaries and next_steps.

Common fixes:

  • Path errors: ensure alerts.jsonl is at ../alerts.jsonl or pass --file with the correct path.
  • JSON errors: verify each line in alerts.jsonl is valid JSON.

Step 5) Guardrails for real AI calls

  • Store AI_TOKEN in a secret store or env; never hardcode.
  • Add per-tenant rate limits and max prompt size; reject prompts containing secrets/PII.
  • Log every call: model, prompt hash (not raw prompt), response hash, latency.
  • Keep temperature low (≤0.3) and add deterministic fallbacks when the model fails or times out.
  • Human-in-the-loop: require analyst approval before actions like blocking accounts or hosts.

Detection of malicious automation

  • Alert on unusual AI API usage (new hosts/service accounts, volume spikes, or prompts containing credentials).
  • Correlate JA3/User-Agent of AI clients; rotate keys that appear in abuse.

Cleanup

cd ..
rm -rf rust-ai-enricher alerts.jsonl
Validation: `ls rust-ai-enricher` should fail with “No such file or directory”.

Quick Reference

  • Build ingestion + enrichment in Rust; call AI only with strict guardrails.
  • Start with offline/dry-run summaries to verify flows before hitting providers.
  • Log everything (prompt hash, model, latency) and keep humans approving actions.

FAQs

Can I use these labs in production?

No—treat them as educational. Adapt, review, and security-test before any production use.

How should I follow the lessons?

Follow the order on the Learn page, or use the Previous/Next links on each lesson; both follow the same sequence.

What if I lack test data or infra?

Use synthetic data and local/lab environments. Never target networks or data you don't own or have written permission to test.

Can I share these materials?

Yes, with attribution and respecting any licensing for referenced tools or datasets.