2026 stack review: what stayed, what changed
A yearly-ish inventory of the grid: the boring pieces that stuck (systemd, nginx), the LLM bits that changed, and the operational habits that mattered more than new toys.
neo's TRON-grid notebook for local LLM experiments, scripting, and homelab ops. Minimal bloat, maximum glow.
How I use ZFS snapshots in the grid: dataset layout, naming, pruning, and, most importantly, testing restores before trusting them. Because hoping is not a backup strategy.
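A naming-plus-pruning scheme like the one in that post can be sketched in plain Bash. This is a dry-run sketch, not the post's actual script: the dataset `tank/lab` and the `auto-YYYY-MM-DD` naming are illustrative, and it only prints what a real pass would hand to `zfs destroy`.

```shell
#!/usr/bin/env bash
set -euo pipefail

# prune_plan <keep> <snapshot names...>
# Print the snapshots that WOULD be destroyed, oldest first.
# Names sort chronologically because they embed an ISO-style date.
prune_plan() {
  local keep=$1; shift
  local total=$#
  (( total <= keep )) && return 0
  local drop=$(( total - keep ))
  # sed (not head) so the whole pipe is read and pipefail stays quiet.
  printf '%s\n' "$@" | sort | sed -n "1,${drop}p"
}

# Keep the 3 newest daily snapshots; everything older is a candidate.
prune_plan 3 \
  "tank/lab@auto-2026-01-04" \
  "tank/lab@auto-2026-01-02" \
  "tank/lab@auto-2026-01-01" \
  "tank/lab@auto-2026-01-03"
# With keep=3 this prints only tank/lab@auto-2026-01-01.
```

Feeding the output into `xargs -r -n1 zfs destroy` is the obvious next step, but only after the dry run has been eyeballed.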
Proxmox installs fast. Living with it takes a few habits. Here's what stopped me from guessing: naming, storage layout, backups, networking, and CLI snippets that actually work.
A grab bag of guardrails I keep re-learning in Bash: strict mode, traps, quoting, idempotency, and how to make a script fail loudly instead of quietly ruining your night. Your script should scream when it dies, not ghost you at 3AM while production burns.
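The core of that "scream, don't ghost" preamble fits in a few lines. A minimal sketch, with the workdir and filenames purely illustrative:

```shell
#!/usr/bin/env bash
# Strict mode: exit on error, on unset variables, and on pipe failures.
set -euo pipefail

# ERR trap: report the failing command and line number before dying.
trap 'echo "FATAL: \"${BASH_COMMAND}\" failed at line ${LINENO}" >&2' ERR

# Idempotent step: mkdir -p succeeds even if the directory exists already.
workdir=$(mktemp -d)
mkdir -p "$workdir/out"

# Quoting guardrail: always "$var", never bare $var, or spaces will bite.
file="$workdir/out/name with spaces.txt"
echo "ok" > "$file"
echo "wrote: $file"
```

Run any command that fails under this preamble and you get a loud one-liner naming the command and line, instead of a half-finished run and silence.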
I got tired of Python env drift breaking my LLM experiments. In my lab, uv plus ruff brought back the feeling of a clean, repeatable workflow. Here are the patterns I kept.
In my lab, prompt hygiene is not about poetry. It is about repeatability, safety, and not accidentally turning every run into a new experiment. These are the rules that helped.
Tool calling is where local models become useful - and where they can trash your filesystem if you're not careful. Notes on the guardrails I use so an offline agent stays a helper, not a chaos monkey.
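The simplest useful guardrail is an allowlist gate in front of whatever shell tool the agent is handed. A sketch, with the allowlist contents purely illustrative (and no defense against glob characters in command names, which a real gate would need):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Read-only commands the agent may run; everything else is refused.
ALLOWED="ls cat grep wc"

# gated_run <cmd> [args...] — run only if <cmd> is on the allowlist.
gated_run() {
  local cmd=$1
  case " $ALLOWED " in
    *" $cmd "*) "$@" ;;
    *) echo "DENIED: $cmd" >&2; return 1 ;;
  esac
}

gated_run wc -c /dev/null                 # allowed: prints "0 /dev/null"
gated_run rm -rf /tmp/scratch || true     # denied; || true keeps the demo alive
```

The agent never sees a raw shell, only `gated_run`, so a hallucinated `rm -rf` dies at the gate instead of on the filesystem.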
I tried to build a small RAG stack at home and learned that retrieval is an ops problem disguised as an AI problem. Here is what I indexed, what I measured, and what fell apart.
Speculative decoding promises free speed. In my lab it delivered speed sometimes, complexity always. Notes on when it helps, what to watch, and how I keep it from becoming a science project.
Quantization is the cheat code that makes local models usable, but it is also the fastest way to confuse yourself. These are my lab notes on picking a quant, validating it, and avoiding benchmark brain.
Three runners, three different kinds of pain. Notes from my grid on what each one is good at, what they hide from you, and how I pick the least-wrong option per workload.
I finally stopped 'keeping it simple' and built a local LLM lab: one Proxmox node, a handful of GGUF models, and a boring stack that I can actually operate. Here's what worked, what broke, and how I measure whether the grid is healthy.
+-----------------------------------------------------------------------------+
| |
| [80x24] 80x24.ch local LLMs * homelab ops * scripting |
| |
| +===============================+ |
| | LOCAL LLMS • HOMELAB • OPS | |
| +===============================+ |
| |
| 80x24.ch |
| neo@80x24.ch |
| |
+-----------------------------------------------------------------------------+