My Python history is mostly a story of “it worked yesterday.” Not because Python is bad, but because my habits were bad. In the grid, I run a lot of small experiments: embedding scripts, RAG prototypes, tool calling glue, quick APIs. That is a perfect recipe for environment drift.
Eventually I got tired of debugging dependency issues when I actually wanted to debug my own code. In my lab, uv and ruff helped me get back to a clean, repeatable workflow. This is not a sponsored post. It is just the first combo that reduced my daily friction.
what I wanted from my tooling
- fast setup: new project to running script without ceremony.
- repeatability: same dependencies on my laptop and on a VM.
- boring automation: lint and format should be one command.
- less yak shaving: fewer toolchains stacked on toolchains.
uv in practice (how I use it)
The main win for me is that uv encourages a clean project boundary. In my lab, each script that matters becomes a tiny project. That means it has:
- a virtual environment tied to the repo,
- a pinned dependency set (as much as is reasonable),
- a single “how to run it” command in the README.
example: starting a small project
```shell
# example: new project setup (shape)
mkdir -p rag-sandbox
cd rag-sandbox

# create an isolated environment tied to the repo
uv venv

# install the direct dependencies into it
# (no need to upgrade pip; uv installs without it)
uv pip install fastapi pydantic

# freeze a requirements file (one simple approach)
uv pip freeze > requirements.txt
```
The exact uv workflow can vary. The pattern I care about is: a single command that produces a working, isolated environment. When I deploy to a VM, I want the same dependencies, not a surprise upgrade.
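To make "same dependencies, not a surprise upgrade" concrete, here is a tiny drift check I could run on the VM after an install. The `parse_pins` and `check_drift` helpers are a hypothetical sketch, not part of uv; they just compare pinned versions against what is actually importable:

```python
# Sketch: a tiny dependency-drift check (hypothetical helpers, not uv).
# Compares "name==version" pins from a requirements.txt-style string
# against the versions actually installed in the current environment.
from importlib import metadata


def parse_pins(requirements_text: str) -> dict[str, str]:
    """Parse 'name==version' lines into a {name: version} mapping."""
    pins = {}
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, _, version = line.partition("==")
        pins[name.strip().lower()] = version.strip()
    return pins


def check_drift(pins: dict[str, str]) -> list[str]:
    """Return human-readable notes for pins that drifted or are missing."""
    problems = []
    for name, wanted in pins.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            problems.append(f"{name}: not installed (want {wanted})")
            continue
        if installed != wanted:
            problems.append(f"{name}: installed {installed}, want {wanted}")
    return problems
```

An empty list from `check_drift` is the boring answer I want after a deploy.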
ruff: one tool to keep the codebase tidy
Ruff is the only lint tool that made me stop arguing with my own style. It is fast enough that I actually run it. In my lab, speed matters because I am impatient and I context-switch a lot.
The point is not “perfect code.” The point is that my scripts stay readable six months later. That is an ops concern, not an aesthetic concern.
example: a minimal ruff configuration
```toml
# example: pyproject.toml snippet
[tool.ruff]
line-length = 100

[tool.ruff.lint]
select = ["E", "F", "I"]

[tool.ruff.format]
quote-style = "double"
```
I keep it simple. I am not trying to build a corporate policy. I just want obvious mistakes caught early.
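For a sense of what those rule families buy you: F rules catch actual bugs (F401 unused import, F821 undefined name), E rules catch style slips, I rules sort imports. The function below is a sketch of the cleaned-up version that survives a check, with the original mistakes noted in comments:

```python
# Sketch: the kind of mistake the selected rule families flag.
# Before cleanup this file had "import os" that nothing used (F401)
# and a typo'd variable name (F821) - both real bugs, both caught
# in milliseconds, long before the script runs.
def greeting(name: str) -> str:
    """Return a greeting; the F821 version referenced 'nmae' here."""
    return f"hello, {name}"
```

That is the whole pitch for minimal lint config: a small, fast net for the mistakes I actually make.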
how this helped my LLM experiments
A lot of “AI tooling” is Python. If your Python environment is unstable, everything above it is unstable. In my lab, the biggest quality-of-life improvement was: I could rebuild a VM and be back where I started without guessing.
The second improvement was collaboration. Even if it is just me on two machines, a pinned environment and a formatter reduce stupid differences.
pinning and packaging: boring, but it keeps me moving
The biggest shift for me was accepting that “I will remember how to recreate this” is a lie. In my lab I try to pin dependencies at least at the major-minor level, and I write down the one command that creates the environment.
When a project graduates from experiment to service, I also add a minimal entrypoint.
Even a tiny python -m app convention reduces “how do I run this again” confusion.
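A minimal sketch of that convention, assuming a package named app (the name and the "main returns an exit code" contract are just my habits, not requirements):

```python
# Sketch: a minimal app/__main__.py so "python -m app" (or
# "uv run python -m app") is the one obvious way to start the service.
def main() -> int:
    """Do the one thing this service does; return an exit code."""
    print("service starting")
    return 0


# A real __main__.py would typically end with:
#   raise SystemExit(main())
# so the return value becomes the process exit code.
if __name__ == "__main__":
    main()
```

Future me never has to guess whether the entrypoint is run.py, serve.py, or something buried in a notebook.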
my minimal automation loop
I like having one command that formats, lints, and runs a quick smoke test. That makes it easy to run before committing, and it makes it easy to run on a VM. I do not need a full CI system for every repo, but I do need a ritual.
example: a tiny Makefile
```make
# example: Makefile for a small Python repo
.PHONY: fmt lint test

fmt:
	uv run ruff format .

lint:
	uv run ruff check .

test:
	uv run python -m compileall -q .
```
The “test” here is intentionally weak. It catches syntax errors and obvious breakage. For a homelab script, that is often enough to prevent embarrassing failures.
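To be precise about what that smoke test proves, here is the same check done by hand with py_compile (a hypothetical helper with the same guarantee as compileall: the file parses, nothing more):

```python
# Sketch: what "python -m compileall" checks, applied to one source
# string. py_compile only verifies the file parses as valid Python;
# it runs no code, so this is a syntax net, not a test suite.
import os
import py_compile
import tempfile


def compiles_ok(source: str) -> bool:
    """Return True if the source text is valid Python syntax."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    try:
        py_compile.compile(path, doraise=True)
        return True
    except py_compile.PyCompileError:
        return False
    finally:
        os.unlink(path)
```

If a refactor leaves a dangling parenthesis in a script I have not run for months, this is the layer that catches it before the VM does.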
deployment to the grid (the part that used to hurt)
A lot of my Python code ends up running on a Proxmox VM. The painful part used to be copying files over, setting up a venv by hand, and hoping I did not miss a dependency.
With a pinned requirements file and a predictable uv workflow, the deploy steps are boring: sync repo, create env, install, run. Boring is good.
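Those boring steps can even live in a tiny script. This is a sketch: the repo path is hypothetical, and the dry_run flag lets you inspect the command list without touching anything:

```python
# Sketch: the deploy sequence as data plus an optional runner.
# The repo path is hypothetical; dry_run=True just returns the
# commands instead of executing them.
import subprocess


def deploy_steps(repo_dir: str = "~/apps/rag-sandbox") -> list[list[str]]:
    """The fixed sequence: sync repo, create env, install, smoke test."""
    return [
        ["git", "-C", repo_dir, "pull", "--ff-only"],
        ["uv", "venv"],
        ["uv", "pip", "install", "-r", "requirements.txt"],
        ["uv", "run", "python", "-m", "compileall", "-q", "."],
    ]


def deploy(repo_dir: str = "~/apps/rag-sandbox",
           dry_run: bool = True) -> list[list[str]]:
    """Run (or just list) the deploy commands in order."""
    cmds = deploy_steps(repo_dir)
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)
    return cmds
```

Keeping the sequence as data means the README, the script, and my memory cannot drift apart.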
the part I wish I had done earlier: lockfiles and a single source of truth
The moment a repo matters, I want exactly one place to look for “what does this need”.
In practice that means pyproject.toml plus a lockfile generated by the tool.
I am not claiming it is perfect, but it is dramatically better than a pile of half-updated notes.
I also try to keep a short README section that answers three questions: how to create the environment, how to run the thing, and how to run the checks. That sounds obvious, but I used to skip it and then lose an hour on the next machine.
example: a tiny pyproject + uv workflow
```shell
# example: commands I keep in README
uv sync
uv run python -m app
uv run ruff check .
uv run ruff format .
```
When this works, it makes Python feel less like a haunted house. On a new VM I can usually go from “fresh install” to “running service” with a small, repeatable sequence.
what worked / what broke
what worked
- One repo per experiment: fewer global installs, less cross-contamination.
- Fast lint + format: I actually run the tools because they do not waste my time.
- Documented “run” commands: future me can reproduce a result.
what broke
- Mixing system Python with env Python: I still occasionally trip on this when I rush.
- Unpinned transitive deps: if you do not pin, the universe will pin for you, badly.
- Over-configuring lint: too many rules becomes noise and I stop listening.
closing thought
My goal is not to worship tools. My goal is to reduce the number of times I have to re-learn the same lesson. In the grid, reliability is a feature. uv and ruff helped me buy a little more of it.