// Latest Posts

2026 stack review: what stayed, what changed

A yearly-ish inventory of the grid: the boring pieces that stuck (systemd, nginx), the LLM bits that changed, and the operational habits that mattered more than new toys.

ZFS snapshots: rollback without panic

How I use ZFS snapshots in the grid: dataset layout, naming, pruning, and, most importantly, testing restores before trusting them. Because hoping is not a backup strategy.
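A minimal sketch of the naming-and-pruning shape the teaser hints at. The dataset name `tank/data` and the `auto-YYYY-MM-DD_HHMM` scheme are my assumptions, not the post's; the prune helper reads snapshot names on stdin so it can be exercised without a real pool.

```shell
#!/usr/bin/env bash
set -Eeuo pipefail

# Hypothetical naming scheme: <dataset>@auto-YYYY-MM-DD_HHMM.
# Sortable names make pruning a matter of sort + tail.
snapname() {
  local dataset=$1
  printf '%s@auto-%s\n' "$dataset" "$(date +%F_%H%M)"
}

# Print the auto- snapshots of a dataset that fall outside the newest $1.
# In real use the input would come from:
#   zfs list -H -t snapshot -o name -s creation <dataset>
# and each printed line would be handed to: zfs destroy "$snap"
prune_keep() {
  local keep=$1
  grep '@auto-' | sort -r | tail -n "+$((keep + 1))"
}

snapname tank/data
printf '%s\n' tank/data@auto-2025-01-01_0300 tank/data@auto-2025-01-02_0300 \
  tank/data@auto-2025-01-03_0300 | prune_keep 2
# the prune call above prints only: tank/data@auto-2025-01-01_0300
```

Keeping the destroy step out of the pipeline (print first, destroy after review) is the cheap insurance here: a bad sort order shows up as a wrong list, not a missing snapshot.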

Proxmox notes: small habits that make the cluster feel calm

Proxmox installs fast. Living with it takes a few habits. Here's what stopped me from guessing: naming, storage layout, backups, networking, and CLI snippets that actually work.

Defensive Bash: scripts that survive 3AM

A grab bag of guardrails I keep re-learning in Bash: strict mode, traps, quoting, idempotency, and how to make a script fail loudly instead of quietly ruining your night. Your script should scream when it dies, not ghost you at 3AM while production burns.
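The guardrails the teaser names (strict mode, traps, quoting, idempotency) fit in a few lines. A minimal sketch, not the post's actual script; the `WORKDIR` path is a made-up example.

```shell
#!/usr/bin/env bash
# Strict mode: -E propagates the ERR trap into functions, -e exits on
# failure, -u rejects unset variables, pipefail surfaces mid-pipe errors.
set -Eeuo pipefail

# The "scream when it dies" part: report where and on what we failed.
on_err() {
  local exit_code=$?
  echo "FAILED at line ${BASH_LINENO[0]} running: ${BASH_COMMAND} (exit ${exit_code})" >&2
  exit "$exit_code"
}
trap on_err ERR

# Idempotency: mkdir -p is safe to re-run at 3AM, plain mkdir is not.
workdir=${WORKDIR:-/tmp/grid-demo}   # hypothetical path, not from the post
mkdir -p "$workdir"

# Quoting: always "$var", so a space in a path can't fan out into two args.
echo "workdir is: $workdir"
```

Run it twice and nothing changes; delete a dependency and it tells you the line number instead of exiting 1 in silence.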

Python Tooling in the Grid: uv + ruff

I got tired of Python env drift breaking my LLM experiments. In my lab, uv plus ruff brought back the feeling of a clean, repeatable workflow. Here are the patterns I kept.

Prompt Hygiene: Keeping the Grid Sane

In my lab, prompt hygiene is not about poetry. It is about repeatability, safety, and not accidentally turning every run into a new experiment. These are the rules that helped.

Tool Calling Offline: Patterns That Survived My Lab

Tool calling is where local models become useful - and where they can trash your filesystem if you're not careful. Notes on the guardrails I use so an offline agent stays a helper, not a chaos monkey.
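One guardrail in the spirit of the teaser, sketched in shell rather than taken from the post: before an agent-issued tool call touches the filesystem, resolve the requested path and refuse anything outside a sandbox root. `SANDBOX` is a hypothetical location; `realpath -m` (GNU coreutils) normalizes `..` components without requiring the file to exist yet.

```shell
#!/usr/bin/env bash
set -Eeuo pipefail

SANDBOX=${SANDBOX:-/tmp/agent-sandbox}   # hypothetical sandbox root

# Return 0 if the resolved path lives under $SANDBOX, 1 otherwise.
path_allowed() {
  local requested=$1 resolved
  resolved=$(realpath -m -- "$requested")
  case $resolved in
    "$SANDBOX"/*) return 0 ;;   # inside the sandbox: allow
    *)            return 1 ;;   # anywhere else, incl. ../ escapes: deny
  esac
}

mkdir -p "$SANDBOX"
path_allowed "$SANDBOX/notes.txt"        && echo "allow: notes.txt"
path_allowed "$SANDBOX/../etc/passwd"    || echo "deny: ../etc/passwd"
```

Resolving before matching is the point: a prefix check on the raw string would wave `"$SANDBOX/../etc/passwd"` straight through.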

RAG at Home: Notes from the Grid

I tried to build a small RAG stack at home and learned that retrieval is an ops problem disguised as an AI problem. Here is what I indexed, what I measured, and what fell apart.

Speculative Decoding (a homelab reality check)

Speculative decoding promises free speed. In my lab it delivered speed sometimes, complexity always. Notes on when it helps, what to watch, and how I keep it from becoming a science project.

GGUF Quantization Notes (without lying to myself)

Quantization is the cheat code that makes local models usable, but it is also the fastest way to confuse yourself. These are my lab notes on picking a quant, validating it, and avoiding benchmark brain.

Ollama vs llama.cpp vs vLLM (in my lab)

Three runners, three different kinds of pain. Notes from my grid on what each one is good at, what they hide from you, and how I pick the least-wrong option per workload.

Booting Up the Grid: My Local LLM Lab (2025)

I finally stopped 'keeping it simple' and built a local LLM lab: one Proxmox node, a handful of GGUF models, and a boring stack that I can actually operate. Here's what worked, what broke, and how I measure whether the grid is healthy.

+------------------------------------------------------------------------------+
|                                                                              |
|    [80x24] 80x24.ch    local LLMs  *  homelab ops  *  scripting              |
|                                                                              |
|                       +==============================+                       |
|                       |  LOCAL LLMS • HOMELAB • OPS  |                       |
|                       +==============================+                       |
|                                                                              |
|                                   80x24.ch                                   |
|                                 neo@80x24.ch                                 |
|                                                                              |
+------------------------------------------------------------------------------+