Installation & Setup Guide
The definitive guide to getting RouteRTL running on your machine. Whether you're on Linux, Windows (WSL2), or macOS — this page covers everything from first install to a verified, working environment.
The entire setup takes about 5 minutes. At the end you'll have a working
`routertl` CLI and a verified development environment.
1. Install RouteRTL
```shell
pip install routertl
```
That's it. This installs the `routertl` CLI (and its short alias `rr`)
into your Python environment. You now have access to every SDK command.
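To confirm the install put the CLI on your `PATH`, here is a quick, RouteRTL-agnostic shell check (it uses only the standard `command -v` lookup, so it's safe to run even if the install failed):

```shell
# Look up both CLI names; print a hint if either is missing
for cmd in routertl rr; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "found: $cmd -> $(command -v "$cmd")"
  else
    echo "missing: $cmd (is your Python bin directory on PATH?)"
  fi
done
```

If both names come back missing, see the troubleshooting entry for `routertl: command not found` near the end of this page.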
What just happened?
`pip` downloaded the RouteRTL package from PyPI and installed it into your current Python environment. The package includes the CLI tools, the Cocotb simulation framework, protocol drivers (UART, SPI, I2C, AXI4-Lite), and the project scaffolding system.
If the term "pip" is unfamiliar: it's Python's package manager — the equivalent of `apt` on Ubuntu or `brew` on macOS, but for Python libraries and tools.
For SDK contributors
If you're contributing to the SDK itself (not just using it), clone the repository and install in editable mode:
```shell
git clone https://github.com/djmazure/routertl.git
cd routertl
pip install -e .
```
This links the CLI to your local source tree so changes take effect immediately.
2. Create Your First Project
```shell
mkdir axi_sensor_hub && cd axi_sensor_hub
git init
pip install routertl
routertl init
```
The routertl init wizard walks you through project setup interactively.
It will ask for your project name, FPGA vendor and part, HDL language,
source directory, simulation directory, constraints directory,
and CI/CD platform.
For non-interactive setup (e.g. in scripts), use a template:
```shell
routertl init --template blinky --name axi_sensor_hub
```
The source and simulation directory prompts are especially important:
they tell RouteRTL where to find your RTL files and testbenches. Whenever
you add new files, run routertl workspace update --src (or --sim,
or --all) and RouteRTL will auto-discover them and update project.yml
for you — no manual file-list editing required.
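As an illustration of what that auto-discovery produces, a populated `sources` block might look like this (the file names here are hypothetical, not part of any template):

```yaml
# Illustrative result of `routertl workspace update --all`
sources:
  syn:
    - src/axi_sensor_hub_top.vhd
    - src/pkg/sensor_pkg.vhd
  sim:
    - sim/cocotb/tests/test_axi_sensor_hub.py
  xdc:
    - xdc/axi_sensor_hub.xdc
```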
What routertl init creates
| File / Directory | Purpose |
|---|---|
| `project.yml` | The true entry point. Every RouteRTL command reads this file. It defines your top module, FPGA target, source paths, simulation config, hooks, and CI/CD settings. |
| `Makefile` | Includes the SDK's `Common.mk` build system. Provides targets like `make install`, `make linting`, `make bitstream`. |
| `.venv/` | A Python virtual environment with the RouteRTL CLI and all dependencies pre-installed. Keeps your system Python clean. |
| `requirements.txt` | Python packages for your project: `cocotb>=2.0.0`, `cocotb-test`, `pytest`, `pandas`, `PyYAML`. |
| `pyproject.toml` | Python project metadata with Ruff linter configuration for your testbenches. |
| `project_regbank.yml` | Starter register bank definition with system registers (Git hash, version, build date). |
| `VERSION` | Firmware version file (`system_major=0`, `system_minor=1`), used during synthesis for build tagging. |
| `.gitignore` | Pre-configured to ignore build artifacts, simulation outputs, and the `.venv/` directory. |
| `src/` | RTL source directory with `pkg/` (packages) and `regbank/` (register bank hex stubs) subdirectories. |
| `sim/cocotb/tests/` | Where your Cocotb testbenches live. |
| `xdc/` | Constraint files (XDC for Xilinx, SDC for Intel/Lattice). |
| `verif/` | Formal verification directory. |
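Put together, a freshly scaffolded project looks roughly like this (tree assembled from the table above; exact contents may vary by template):

```text
axi_sensor_hub/
├── project.yml
├── project_regbank.yml
├── Makefile
├── requirements.txt
├── pyproject.toml
├── VERSION
├── .gitignore
├── .venv/
├── src/
│   ├── pkg/
│   └── regbank/
├── sim/cocotb/tests/
├── xdc/
└── verif/
```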
Understanding project.yml
This is the file that drives everything. Here's the structure:
```yaml
# Project identity
project:
  name: axi_sensor_hub
  top_module: axi_sensor_hub_top
  top_sim: tb_axi_sensor_hub_top

# FPGA target
hardware:
  vendor: xilinx
  part: "xc7z020clg400-1"
  language: vhdl

# Simulation configuration
simulation:
  test_dir: sim/cocotb/tests

# Build tool versions
build_options:
  tool_version: "2024.1"

# Source file lists — auto-populated by `routertl workspace update`
# After adding new RTL or test files, run:
#   routertl workspace update --src   (synthesis sources)
#   routertl workspace update --sim   (testbenches)
#   routertl workspace update --all   (everything at once)
sources:
  syn: []
  sim: []
  xdc: []

# Pre-commit hooks — gate commits on lint + test passing
hooks:
  pre_commit:
    enabled: true
    lint: true
    check_yaml: true
    tests: []

# CI/CD (GitHub Actions, GitLab CI, Bitbucket Pipelines)
cicd:
  platform: github
  tests: []

# Custom directory paths (defaults: src, sim, xdc, verif)
# paths:
#   sources: src
#   simulation: sim
#   constraints: xdc
#   verification: verif

# Machine-specific environment (injected into .venv/bin/activate)
# environment:
#   LM_LICENSE_FILE: ~/Downloads/LR-291374_License.dat
#   QUARTUS_ROOTDIR: /opt/altera_std
```
For the complete field reference with types, defaults, and export mappings, see the project.yml Reference.
EDA tool paths and license files — add an `environment:` section to `project.yml` and RouteRTL will automatically inject `export` lines into `.venv/bin/activate`. This keeps machine-specific paths tied to the project instead of cluttering your shell profile. `rr ws install` and `rr ws update-config` both refresh this block.
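For instance, the commented `environment:` example earlier in this section would, per the injection behavior described here, translate into lines like these being appended to `.venv/bin/activate` (illustrative, not the literal output):

```shell
# Illustrative lines RouteRTL would append to .venv/bin/activate
export LM_LICENSE_FILE=~/Downloads/LR-291374_License.dat
export QUARTUS_ROOTDIR=/opt/altera_std
```

Sourcing the venv then makes those variables available to any EDA tool launched from that shell.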
Every routertl command reads project.yml to know your project
structure. When you run routertl sim, it knows where your testbenches
are. When you run routertl bitstream, it knows your FPGA part and
source files.
How RouteRTL finds your EDA tools
RouteRTL auto-detects installed EDA tools — you don't need to
configure anything in project.yml if the tools are already on your
PATH.
The discovery chain (in priority order):
1. `hardware.tool_path` in `project.yml` — explicit path override
2. System `PATH` — `which vivado`, `which quartus_sh`, etc.
3. Docker containers — `rr docker shell sim` provides all tools
Run routertl doctor to see what was detected. If Vivado, Quartus,
or another vendor tool shows green, you're ready to synthesize.
For multi-version setups (e.g. Vivado 2024.1 and 2024.2 both
installed), pin the version in project.yml:
```yaml
build_options:
  tool_version: "2024.1"     # Default for all tools
  vivado_version: "2024.2"   # Override for Vivado specifically
```
You do not need vendor tools to simulate, lint, or explore your design. The open-source toolchain (NVC + GHDL) is enough for the full verification workflow. Vendor tools are only needed when you're ready to synthesize a bitstream.
3. Activate and Verify
```shell
# Activate the virtual environment
source .venv/bin/activate

# Verify your setup
routertl doctor
```
routertl doctor scans your system and reports what's installed:
```text
[✅] Python 3.10.12
[✅] make (GNU Make 4.3)
[✅] git 2.34.1
[✅] nvc 1.18.2
[✅] ghdl 6.0.0
[⚠️] vivado — not found (optional)
[⚠️] quartus_sh — not found (optional)
[✅] project.yml is valid
```
Green ✅ means ready. Yellow ⚠️ means an optional tool is missing — you can still work, but some features won't be available (see What's Optional below).
If `routertl doctor` reports a required tool as missing (Python, Make, Git, or a simulator), install it using the platform instructions in Section 5 before continuing.
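If you want to spot-check the required tools by hand, here is a rough approximation of what `doctor` inspects (this only checks the core prerequisites, not simulators or vendor tools):

```shell
# Minimal PATH check for the core prerequisites
for tool in python3 make git; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "MISSING: $tool"
  fi
done
```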
4. Shell Completion
RouteRTL supports tab completion for bash, zsh, and fish.
Both routertl and the short alias rr are covered — a single
eval activates completion for both names.
Quick activation (current session)
```shell
# bash
eval "$(rr completion bash)"

# zsh
eval "$(rr completion zsh)"

# fish
rr completion fish | source
```
Persistent setup
Add the eval line to your shell profile so completion survives new
terminal sessions:
```shell
# bash — add to ~/.bashrc
echo 'eval "$(rr completion bash)"' >> ~/.bashrc

# zsh — add to ~/.zshrc
echo 'eval "$(rr completion zsh)"' >> ~/.zshrc

# fish — add to config
rr completion fish > ~/.config/fish/completions/rr.fish
```
If you used `rr init` or `rr ws install` to set up your project, completion is already hooked into `.venv/bin/activate` — it activates automatically when you `source .venv/bin/activate`. The persistent setup above is only needed for system-wide completion outside a project venv.
What you get
| Input | Result |
|---|---|
| `rr ⇥` | All top-level commands (`sim`, `synth`, `docker`, `doctor`…) |
| `rr synth ⇥` | Subcommands: `all`, `clear`, `discover`, `run`, `status` |
| `rr docker ⇥` | Subcommands: `add-device`, `install`, `shell`, `setup`… |
| `rr sim ⇥` | Discovered testbench names (dynamic completion) |
5. Platform-Specific Setup
These instructions cover installing the system-level prerequisites
that RouteRTL needs. Run routertl doctor after installing to confirm
everything is detected.
Linux (Ubuntu 22.04 / 24.04)
Core tools
```shell
sudo apt update
sudo apt install -y python3 python3-pip python3-venv git build-essential
```
NVC (VHDL simulator)
NVC is the recommended simulator for Cocotb 2.0+ testbenches.
Ubuntu 24.04+ — available in apt:
```shell
sudo apt install -y nvc
```
Ubuntu 22.04 — build from source (not in apt repos):
```shell
sudo apt install -y automake flex llvm-dev pkg-config zlib1g-dev libdw-dev libffi-dev
git clone https://github.com/nickg/nvc.git
cd nvc
git checkout r1.18.2
./autogen.sh
mkdir build && cd build
../configure
make -j$(nproc)
sudo make install
```
GHDL (VHDL analyzer for smart linting)
GHDL 6.0.0+ is required. Build from the development branch:
```shell
sudo apt install -y gnat llvm-dev
git clone https://github.com/ghdl/ghdl.git
cd ghdl
mkdir build && cd build
../configure --with-llvm-config
make -j$(nproc)
sudo make install
```
Windows (WSL2)
RouteRTL runs natively inside WSL2. If you don't have WSL2 yet:
1. Enable WSL2 (run in PowerShell as Administrator):
```powershell
wsl --install -d Ubuntu-24.04
```
Restart your machine, then open the Ubuntu terminal.
2. One-liner bootstrap (installs everything RouteRTL needs):
```shell
curl -fsSL https://raw.githubusercontent.com/djmazure/routertl/main/tools/scripts/wsl_bootstrap.sh | bash
```
Or if you already have the SDK cloned locally:
```shell
bash vendor/routertl/tools/scripts/wsl_bootstrap.sh
```
This installs Git, Git LFS, Python 3, NVC, GHDL, Verilator (24.04+),
Icarus Verilog, and the RouteRTL CLI — then runs routertl doctor to verify. The script
is idempotent and supports --dry-run to preview actions.
If you prefer to install dependencies manually, skip this step and follow the Linux instructions above.
3. Install Docker Desktop (optional, for containerized EDA tools):
- Download Docker Desktop for Windows
- In Settings → Resources → WSL Integration, enable your Ubuntu distro
4. Follow the Linux instructions above inside your WSL2 terminal.
macOS (Homebrew)
```shell
# Core tools
brew install python@3.12 git make

# NVC simulator
brew install nvc

# GHDL (from source — no Homebrew formula)
brew install gnat llvm
git clone https://github.com/ghdl/ghdl.git
cd ghdl && mkdir build && cd build
../configure --with-llvm-config=$(brew --prefix llvm)/bin/llvm-config
make -j$(sysctl -n hw.ncpu)
sudo make install
```
macOS ships with an older `make` (3.81). The Homebrew version installs as `gmake`. RouteRTL will work with either, but `gmake` is recommended for parallel builds.
6. What's Optional (and What You Lose)
| Tool | Required? | What it enables | Without it |
|---|---|---|---|
| Python 3.10+ | ✅ Yes | CLI, simulation, all tooling | Nothing works |
| Git | ✅ Yes | Version control, submodule management, pre-commit hooks | Cannot initialize projects |
| Make | ✅ Yes | Build system orchestration | Cannot use make targets |
| NVC | ✅ Yes | VHDL simulation with Cocotb 2.0+ | Cannot run simulations |
| GHDL | ✅ Yes | Smart linting (hierarchy-aware VHDL analysis) | Cannot lint VHDL designs |
| Vivado | Optional | Xilinx FPGA synthesis & implementation | Can simulate and lint, but cannot synthesize for Xilinx |
| Quartus | Optional | Intel/Altera FPGA synthesis & implementation | Can simulate and lint, but cannot synthesize for Intel |
| Radiant | Optional | Lattice FPGA synthesis & implementation | Can simulate and lint, but cannot synthesize for Lattice |
| Libero SoC | Optional | Microchip FPGA synthesis & implementation | Can simulate and lint, but cannot synthesize for Microchip |
| Verilator | Optional | Verilog/SystemVerilog simulation | Use NVC for VHDL; Verilator adds SV simulation support |
| Icarus Verilog | Optional | Verilog simulation | sudo apt install iverilog — zero-cost open-source alternative to Verilator |
| Docker | Optional | Deterministic EDA environments (routertl docker shell sim) | Must install tools natively |
| Yosys + netlistsvg | Optional | Schematic generation (routertl schematic) | Cannot generate SVG schematics |
7. Troubleshooting
routertl: command not found
Your shell can't find the CLI. Common causes:

- **Virtual environment not activated:** run `source .venv/bin/activate`
- **pip installed to a different Python:** make sure you're using the same `python3`/`pip` pair. Try `python3 -m pip install routertl`
No module named 'venv'
On Ubuntu/Debian, the venv module is a separate package:
```shell
sudo apt install python3-venv
```
routertl doctor shows NVC/GHDL not found
The tool is installed but not on your PATH. Verify:
```shell
which nvc    # Should print /usr/local/bin/nvc or similar
which ghdl   # Should print /usr/local/bin/ghdl or similar
```
If installed from source to /usr/local/bin/, ensure that directory is
in your PATH:
```shell
export PATH="/usr/local/bin:$PATH"
```
Add this line to your ~/.bashrc or ~/.zshrc to make it permanent.
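If you'd rather guard against adding the directory twice when the profile is sourced repeatedly, a defensive variant (plain POSIX shell, nothing RouteRTL-specific) prepends only when the entry is absent:

```shell
# Prepend /usr/local/bin only when it isn't already on PATH
case ":$PATH:" in
  *":/usr/local/bin:"*) ;;                    # already there, do nothing
  *) export PATH="/usr/local/bin:$PATH" ;;    # prepend once
esac
```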
pip install routertl fails with permission errors
Never use sudo pip install. Instead, use a virtual environment or
the --user flag:
```shell
pip install --user routertl
```
Or better, let routertl init manage the virtual environment for you.
Docker Quartus: "No device families detected"
If `rr docker shell quartus` shows Quartus as installed but no device families, devices were likely not installed. Common causes:

- **No `.qdz` file in installer path:** place the `.qdz` device package (from the Intel FPGA Download Center) in the same directory as your Quartus installer. Check with `ls $QUARTUS_INSTALLER_PATH/*.qdz`

- **Add devices to an existing installation:**

  ```shell
  rr docker add-device quartus   # Docker (named volume or bind mount)
  rr eda add-device quartus      # Native (no Docker needed)
  ```

- **Bind mount path precision:** when using `QUARTUS_INSTALL_DIR` in `.env`, it must point to the exact directory containing the `quartus/` subdirectory. For example, if Quartus is at `/mnt/d/eda_volumes/quartus_std/quartus/bin/`, set `QUARTUS_INSTALL_DIR=/mnt/d/eda_volumes/quartus_std` — not the parent.

- **Volume vs bind mount:** `rr docker status` shows both named Docker volumes and bind mount paths. If neither appears, run `rr docker setup` to configure your storage.
Next Steps
You're set up! Head to the First Steps Tutorial for a hands-on, 20-minute walkthrough that takes you from an empty directory to a passing simulation.