Bug Bounty Automation with Rust (2026 Guide)
Build Rust automation for bug bounty recon: dir busting, screenshots, and fast port sweeps—plus how platforms detect abuse.
Use Rust to automate recon without breaking program rules. This step-by-step lab builds a small, rate-aware directory brute-forcer against a safe local target, with validation, safety gates, and cleanup.
Architecture (ASCII)
┌────────────────────┐
│    wordlist.txt    │
└─────────┬──────────┘
          │
┌─────────▼──────────┐
│ dirbuster-rs (Rust)│
│ concurrency + delay│
└─────────┬──────────┘
          │ HTTP GETs
┌─────────▼──────────┐
│  Mock Target (Py)  │
└─────────┬──────────┘
          │ Responses
┌─────────▼──────────┐
│    results.csv     │
└────────────────────┘
What You’ll Build
- A local mock target served by Python.
- A Rust async dir-buster with concurrency limits, custom User-Agent, and CSV logging.
- Validation checks after each phase plus quick defenses to avoid bans.
Prerequisites
- macOS or Linux with Rust 1.80+ (rustc --version to confirm).
- Python 3.10+ for the mock target.
- Cargo, Git, and 200 MB free disk.
- Run only against assets you own or have written permission to test.
Safety and Legal
- Stay inside authorized scopes. Do not aim scanners at third-party hosts without written approval.
- Keep concurrency conservative; stop on HTTP 429/403 storms.
- Tag your traffic (User-Agent + contact email) for transparency.
Step 1) Set up a safe mock target
python3 --version
mkdir -p mock_target/{admin,reports,uploads}
echo "ok" > mock_target/index.html
echo "secret" > mock_target/admin/panel.html
echo "report list" > mock_target/reports/list.html
python3 -m http.server 8000 --directory mock_target > mock_target/server.log 2>&1 &
Common fix: If the server fails to start, make sure port 8000 is free or switch to 8001 in both the server command and the scanner's --base URL.
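If you are unsure whether something is already listening on the port, any quick check works; since Rust is already installed, a throwaway scratch binary like the one below does the job. This is purely illustrative and not part of the lab:
use std::net::TcpListener;

fn main() {
    // Binding succeeds only if nothing else is listening on 127.0.0.1:8000,
    // so an error here means the port is already taken.
    match TcpListener::bind("127.0.0.1:8000") {
        Ok(_) => println!("port 8000 is free"),
        Err(e) => println!("port 8000 is busy: {e}"),
    }
}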
Step 2) Scaffold the Rust project
rustc --version
cargo new dirbuster-rs
cd dirbuster-rs
Step 3) Add dependencies and wordlist
Cargo.toml (replace contents):
[package]
name = "dirbuster-rs"
version = "0.1.0"
edition = "2021"
[dependencies]
tokio = { version = "1.40", features = ["full"] }
reqwest = { version = "0.12", features = ["json", "gzip", "brotli", "stream", "rustls-tls"] }
futures = "0.3"
clap = { version = "4.5", features = ["derive"] }
csv = "1.3"
indicatif = "0.17"
anyhow = "1.0"
Add a minimal wordlist:
cat > wordlist.txt <<'LIST'
/
/admin
/admin/panel.html
/reports
/reports/list.html
/uploads
/doesnotexist
LIST
Step 4) Implement the scanner
Replace src/main.rs with:
use clap::Parser;
use futures::stream::{self, StreamExt};
use indicatif::{ProgressBar, ProgressStyle};
use reqwest::{Client, StatusCode};
use std::{fs::File, time::Duration};

#[derive(Parser, Debug)]
#[command(author, version, about)]
struct Args {
    /// Base URL (e.g., http://127.0.0.1:8000)
    #[arg(long)]
    base: String,

    /// Wordlist file
    #[arg(long, default_value = "wordlist.txt")]
    wordlist: String,

    /// Max concurrent requests
    #[arg(long, default_value_t = 5)]
    concurrency: usize,

    /// Delay in ms after each handled response
    #[arg(long, default_value_t = 50)]
    delay_ms: u64,

    /// Output CSV
    #[arg(long, default_value = "results.csv")]
    out: String,
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let args = Args::parse();

    // Build the full target URLs from the base URL and the wordlist paths.
    let paths = std::fs::read_to_string(&args.wordlist)?;
    let targets: Vec<String> = paths
        .lines()
        .filter(|l| !l.trim().is_empty())
        .map(|p| format!("{}{}", args.base.trim_end_matches('/'), p))
        .collect();

    // Tag traffic with a contact address and keep timeouts short.
    let client = Client::builder()
        .user_agent("dirbuster-rs (+you@example.com)")
        .timeout(Duration::from_secs(10))
        .build()?;

    let pb = ProgressBar::new(targets.len() as u64);
    pb.set_style(
        ProgressStyle::with_template(
            "{spinner:.green} [{elapsed_precise}] [{wide_bar}] {pos}/{len} ({per_sec})",
        )?
        .progress_chars("#>-"),
    );

    let file = File::create(&args.out)?;
    let mut wtr = csv::Writer::from_writer(file);
    wtr.write_record(["url", "status", "len_bytes"])?;

    // Issue requests with bounded concurrency, then drain results one at a
    // time so the single CSV writer never has to be shared across tasks.
    let mut results = stream::iter(targets)
        .map(|url| {
            let client = client.clone();
            async move {
                match client.get(&url).send().await {
                    Ok(r) => {
                        let status = r.status();
                        let len = r.content_length().unwrap_or(0);
                        Some((url, status, len))
                    }
                    Err(_) => None,
                }
            }
        })
        .buffer_unordered(args.concurrency);

    while let Some(res) = results.next().await {
        pb.inc(1);
        if let Some((url, status, len)) = res {
            // Treat 200/301/302 as interesting.
            if matches!(
                status,
                StatusCode::OK | StatusCode::MOVED_PERMANENTLY | StatusCode::FOUND
            ) {
                println!("{status} {len:>6} {url}");
            }
            // Write every response for the audit trail.
            wtr.write_record([url, status.as_str().to_string(), len.to_string()])?;
        }
        // Small pause between handled responses to keep the request rate polite.
        tokio::time::sleep(Duration::from_millis(args.delay_ms)).await;
    }

    wtr.flush()?;
    pb.finish_with_message("done");
    Ok(())
}
Common fixes:
- TLS errors: use http://127.0.0.1:8000 (our mock is HTTP). If you must use HTTPS, ensure certs are valid or add danger_accept_invalid_certs only in local tests (see the sketch below).
- Linker errors on macOS: install Xcode Command Line Tools (xcode-select --install).
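For the HTTPS case, here is a minimal sketch of a local-test-only client, assuming the same reqwest 0.12 setup as Step 4; never point a client built this way at real targets:
use reqwest::Client;
use std::time::Duration;

// Local-testing-only client: skips certificate validation so a
// self-signed HTTPS mock works. Do NOT use this outside local labs.
fn build_local_test_client() -> reqwest::Result<Client> {
    Client::builder()
        .user_agent("dirbuster-rs (+you@example.com)")
        .timeout(Duration::from_secs(10))
        .danger_accept_invalid_certs(true)
        .build()
}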
Step 5) Run the scanner against the mock target
cargo run --release -- --base http://127.0.0.1:8000 --concurrency 4 --delay-ms 25
Sample output (truncated):
200 3 http://127.0.0.1:8000/
200 6 http://127.0.0.1:8000/admin/panel.html
200 12 http://127.0.0.1:8000/reports/list.html
Step 6) Add safety controls for real targets
- Real-world safe defaults: --concurrency 3-6, --delay-ms 50-200, random jitter of roughly ±25%, auto-stop after 15 consecutive 429s, and per-domain scope validation (a minimal sketch follows this list).
- Stop if you see many 429/403 responses; back off or email the program's abuse contact.
- Include contact info in the User-Agent; keep CSV logs for disclosure reports.
- Respect scope: load your allowed domains into the tool and refuse out-of-scope hosts.
- Warning: do NOT add recursive enumeration against real targets without explicit permission; runaway recursion is a top cause of bans.
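None of these guards is wired into the Step 4 code yet. Here is a minimal sketch of the three mechanical ones (jittered delay, consecutive-429 auto-stop, scope check); the thresholds, names, and the extra rand dependency are illustrative:
use rand::Rng; // illustrative extra dependency: rand = "0.8"
use reqwest::StatusCode;
use std::time::Duration;

/// Base delay with roughly ±25% random jitter.
fn jittered_delay(base_ms: u64) -> Duration {
    let jitter = base_ms / 4;
    let ms = rand::thread_rng().gen_range(base_ms - jitter..=base_ms + jitter);
    Duration::from_millis(ms)
}

/// Refuse URLs whose host is not on the approved scope list.
fn in_scope(url: &str, allowed_hosts: &[&str]) -> bool {
    reqwest::Url::parse(url)
        .ok()
        .and_then(|u| u.host_str().map(|h| allowed_hosts.contains(&h)))
        .unwrap_or(false)
}

/// Count consecutive 429s and signal when the whole run should stop.
struct RateLimitGuard {
    consecutive_429: u32,
    max_consecutive: u32,
}

impl RateLimitGuard {
    fn new(max_consecutive: u32) -> Self {
        Self { consecutive_429: 0, max_consecutive }
    }

    /// Returns true when the scan should abort.
    fn observe(&mut self, status: StatusCode) -> bool {
        if status == StatusCode::TOO_MANY_REQUESTS {
            self.consecutive_429 += 1;
        } else {
            self.consecutive_429 = 0;
        }
        self.consecutive_429 >= self.max_consecutive
    }
}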
How Platforms Detect Automated Abuse
- Sustained spikes from the same IP or token.
- High-concurrency scans with no delay or jitter.
- Repeated 404/403 storms and failed probes.
- User-Agents matching known scanners or missing contact info.
- Requests outside declared scope or to many unrelated hosts.
- Correlated activity patterns across multiple researchers.
HTTP Status Quick Reference
| Status | Meaning | Log? | Action |
|---|---|---|---|
| 200 | Page exists | Yes | Human review |
| 301/302 | Redirect | Yes | Follow manually |
| 403 | Forbidden | Yes | Stop if too many |
| 404 | Not found | No | Normal noise |
| 429 | Rate limited | Critical | Back off immediately |
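If you prefer the table encoded in code, a small classifier along these lines keeps the triage rules in one place; the enum and variant names are illustrative, not part of the lab code:
use reqwest::StatusCode;

/// What the scanner should do with a response, mirroring the table above.
enum Action {
    HumanReview,       // 200: page exists
    FollowManually,    // 301/302: redirect
    CountAndMaybeStop, // 403: forbidden; stop if too many
    Ignore,            // 404: normal noise
    BackOffNow,        // 429: rate limited
    LogOnly,           // anything else
}

fn classify(status: StatusCode) -> Action {
    match status {
        StatusCode::OK => Action::HumanReview,
        StatusCode::MOVED_PERMANENTLY | StatusCode::FOUND => Action::FollowManually,
        StatusCode::FORBIDDEN => Action::CountAndMaybeStop,
        StatusCode::NOT_FOUND => Action::Ignore,
        StatusCode::TOO_MANY_REQUESTS => Action::BackOffNow,
        _ => Action::LogOnly,
    }
}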
Cleanup
cd ..
pkill -f "http.server 8000" || true
rm -rf dirbuster-rs mock_target
Quick Reference
- Always get written scope approval before scanning.
- Keep concurrency low; stop on rate-limit signals.
- Log everything (CSV) to prove responsible testing.
- Tag traffic with contact info to reduce blocks and ease triage.
Next Steps
- Add HTML size heuristics to flag likely login pages.
- Add screenshot automation (e.g., chromiumoxide) for visual diffs.
- Implement retry/backoff logic and random jitter (a minimal sketch follows this list).
- Export results to JSON + a simple HTML dashboard.
- Add proxy support (Burp/TLS interception) for manual follow-up.
- Integrate CSV with HackerOne/Jira report templates.
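For the retry/backoff item above, here is a minimal sketch using exponential backoff plus jitter on transport errors; the attempt count, delays, function name, and the extra rand dependency are assumptions, not part of the lab code:
use rand::Rng; // illustrative extra dependency: rand = "0.8"
use reqwest::{Client, Response};
use std::time::Duration;

/// Retry a GET up to three extra times with exponential backoff plus jitter.
/// Retries only transport errors; rate-limit (429) handling stays with the
/// guard sketched in Step 6.
async fn get_with_backoff(client: &Client, url: &str) -> reqwest::Result<Response> {
    let mut attempt: u32 = 0;
    loop {
        match client.get(url).send().await {
            Ok(resp) => return Ok(resp),
            // Transient failure: wait 250ms, 500ms, 1000ms (plus jitter), then retry.
            Err(_) if attempt < 3 => {
                let base = 250u64 * 2u64.pow(attempt);
                let jitter: u64 = rand::thread_rng().gen_range(0..250);
                tokio::time::sleep(Duration::from_millis(base + jitter)).await;
                attempt += 1;
            }
            Err(e) => return Err(e),
        }
    }
}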