
~/findings

Findings

Standalone notes from projects and experiments. Some are stable references, others are living docs I update as I learn more. The yellow evolving badge marks the living ones.

$ ls -la ./findings/

Agent Trace Telemetry

evolving
updated Apr 10, 2026 · 6 min

What to measure about an agentic investigation loop, and how a trace explorer turns raw run data into evidence for the next prompt or harness change.

agents · observability · telemetry · local-llm · AI Data Analyst
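The full note covers what to measure; as a minimal sketch of the raw run data a trace explorer would consume, here is one JSON line per tool call. The field names (`run_id`, `step`, `tool`, `latency_ms`) are assumptions about a plausible schema, not the note's actual format.

```python
# Hypothetical shape of one telemetry event from an agent run.
# Field names are illustrative assumptions, not the note's schema.
import json
import time

def trace_event(run_id: str, step: int, tool: str, latency_ms: float, ok: bool) -> str:
    """Serialize one tool call as a JSON line, appendable to a run's trace file."""
    return json.dumps({
        "run_id": run_id,
        "step": step,
        "tool": tool,
        "latency_ms": latency_ms,
        "ok": ok,
        "ts": time.time(),
    })
```

One line per tool call keeps the trace greppable and lets an explorer aggregate latency and failure rate per tool across runs.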

Autoresearch Harness Log

evolving
updated Apr 10, 2026 · 3 min

Working notes on how I use the autoresearch harness to probe agent workflows, find design holes, and decide what to experiment with next.

autoresearch · agents · experiments · harness · Autoresearch

Three Local Models Compared on One Investigation

updated Mar 31, 2026 · 6 min

Running Hermes 4 70B, Nemotron Cascade 30B, and GPT-OSS 20B against the same security investigation exposes a speed-vs-depth tradeoff that shows up clearly when tools are fast.

security · agents · local-llm · model-comparison · AI Data Analyst

Agent Investigation With Query Tools

updated Jan 5, 2026 · 5 min

Giving a 7B model two query tools and a 5 W's output format is enough to find attacks on a raw auth.log. The architecture beats dumping the logs into the prompt.

security · agents · local-llm · investigation · AI Data Analyst
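The note's point is the architecture: two query tools plus a 5 W's output contract. As a sketch of what that might look like, here are OpenAI-style tool definitions and a completeness check. The tool names (`search_log`, `count_by_field`) and the exact 5 W's schema are illustrative assumptions, not the note's actual definitions.

```python
# Hypothetical sketch of the two-query-tool setup described in the note.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "search_log",  # assumed name
            "description": "Return auth.log lines matching a pattern.",
            "parameters": {
                "type": "object",
                "properties": {"pattern": {"type": "string"}},
                "required": ["pattern"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "count_by_field",  # assumed name
            "description": "Count auth.log events grouped by a field, e.g. src_ip.",
            "parameters": {
                "type": "object",
                "properties": {"field": {"type": "string"}},
                "required": ["field"],
            },
        },
    },
]

FIVE_WS = ("who", "what", "when", "where", "why")

def is_complete_finding(finding: dict) -> bool:
    """A finding is reportable only when every W is answered."""
    return all(finding.get(w) for w in FIVE_WS)
```

The output contract does real work here: a 7B model that must fill in all five W's can't hand back a vague "something looks suspicious".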

Entity Profiling Over Anomaly Flagging

updated Jan 5, 2026 · 4 min

Message-centric anomaly detection flags 269,000 'rare' events on 86,000 auth.log records. Entity profiling asks a different question and produces actionable intelligence instead.

architecture · security · detection · agents · AI Data Analyst
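The contrast the note draws can be sketched in a few lines: per-message flagging scores lines in isolation, while entity profiling accumulates behavior per actor and asks a question about the actor. The records and field names below are made up for illustration.

```python
# Made-up auth.log-style records; field names are assumptions.
from collections import defaultdict

events = [
    {"src_ip": "10.0.0.5",    "user": "root",   "outcome": "fail"},
    {"src_ip": "10.0.0.5",    "user": "admin",  "outcome": "fail"},
    {"src_ip": "10.0.0.5",    "user": "oracle", "outcome": "fail"},
    {"src_ip": "192.168.1.9", "user": "alice",  "outcome": "ok"},
]

# Message-centric: every individually "rare" line gets its own flag,
# which is how a detector ends up with more flags than records.
rare_flags = [e for e in events if e["outcome"] == "fail"]

# Entity-centric: one profile per source IP, then a question about the
# entity's behavior -- how many distinct usernames did it try?
profiles = defaultdict(set)
for e in events:
    profiles[e["src_ip"]].add(e["user"])

suspects = {ip for ip, users in profiles.items() if len(users) >= 3}
```

Three flags versus one suspect: the profiling output is something an analyst can act on directly.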

Deterministic Validation for LLM Output

updated Jan 3, 2026 · 4 min

Schema-based validation catches the variance an LLM data cleaner produces between runs. Pattern: deterministic where you can, LLM where you must.

local-llm · validation · architecture · AI Data Analyst
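"Deterministic where you can" might look like the sketch below: a plain schema check applied to whatever the LLM cleaner emits, so run-to-run variance is caught by code rather than by eyeballing. The required fields are illustrative assumptions, not the note's actual schema.

```python
# Minimal schema check for LLM-cleaned records; field names assumed.
REQUIRED = {"timestamp": str, "src_ip": str, "event": str}

def validate(record: dict) -> list[str]:
    """Return a list of violations; an empty list means the record passes."""
    errors = []
    for field, typ in REQUIRED.items():
        if field not in record:
            errors.append(f"missing {field}")
        elif not isinstance(record[field], typ):
            errors.append(f"{field} has type {type(record[field]).__name__}")
    return errors
```

Because the check is deterministic, two runs of the cleaner that disagree will disagree visibly in the error lists instead of silently downstream.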

Local LLM Security Agent on Consumer Hardware

updated Jan 2, 2026 · 4 min

Running a security investigation agent on a 16GB consumer GPU with llama.cpp, the OpenAI-compatible API, and a small 7B model.

local-llm · llama.cpp · security · hardware · AI Data Analyst
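The setup in the note talks to the llama.cpp server through its OpenAI-compatible API; a minimal sketch of that call, using only the standard library, is below. The port (`8080` is llama-server's default) and the model name are assumptions about the local setup, and llama.cpp largely ignores the `model` field anyway.

```python
# Sketch of an OpenAI-compatible chat request to a local llama.cpp server.
# base_url and model name are assumptions about the local setup.
import json
import urllib.request

def build_request(prompt: str, base_url: str = "http://localhost:8080"):
    payload = {
        "model": "local-7b",  # placeholder; llama.cpp serves whatever it loaded
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# With a server running, sending it would look like:
# resp = urllib.request.urlopen(build_request("Summarize auth.log failures"))
# print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the endpoint is OpenAI-compatible, the same agent code can point at the 16GB local box or a hosted model by changing only `base_url`.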
Findings | Aaron Hays