# Labgrid exporter systemd service
Each hw-node (bq, nuc, nemo, …) runs a `labgrid-exporter` process that
publishes its local hardware to the coordinator at `10.0.0.41:20408`.
We run it as a systemd service under a single host-wide convention so
every node looks the same.
## Convention

One exporter per host. All paths are fixed:
| Path | Purpose |
|---|---|
| `/etc/labgrid/exporter.yaml` | The exporter resource config. Owned by root, world-readable. Edit this file to change what the exporter publishes. |
| `/etc/default/labgrid-exporter` | Service knobs: coordinator address, instance name, yaml path, and `PATH`. |
| `/etc/systemd/system/labgrid-exporter.service` | The unit. Written by the installer from its template. |
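For orientation, the generated unit looks roughly like the sketch below. This is a hedged illustration, not the repo's actual template — in particular the `ExecStart` command line, the `User=` value, and the dependency ordering are assumptions:

```ini
[Unit]
Description=labgrid exporter
After=network-online.target

[Service]
# User= is baked in from $SUDO_USER at install time (value illustrative)
User=me
EnvironmentFile=/etc/default/labgrid-exporter
# Illustrative only — the real template defines the actual command line
ExecStart=/usr/bin/env labgrid-exporter ${LG_EXPORTER_YAML}
Restart=on-failure

[Install]
WantedBy=multi-user.target
```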
The unit is not templated — there is exactly one
`labgrid-exporter.service` per host. If you need to expose more
hardware, add it to `/etc/labgrid/exporter.yaml` rather than spinning
up a second instance.
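As a sketch of what "adding hardware" means in practice: an exporter config groups resources under place-like names, one group per exported target. Everything below (group name, match rule, power-switch details) is illustrative, not this lab's real config:

```yaml
# Hypothetical group: one board's serial console and power control.
zc706-a:
  USBSerialPort:
    match:
      ID_PATH: pci-0000:00:14.0-usb-0:2:1.0
  NetworkPowerPort:
    model: gude
    host: powerswitch-1
    index: 3
```

After editing the real file, `sudo systemctl restart labgrid-exporter` picks up the change.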
## Installation
There are two scripts in `scripts/labgrid-exporter/`:

- `install.sh` — the per-node installer. Generic; takes a yaml path plus options.
- `deploy.sh` — orchestrator that pushes the installer to one or more hosts and runs it on each. Two modes:
  - **Manifest mode** (default) — host list comes from a manifest file (`nodes.conf` next to the script, gitignored). Useful for the routine “deploy to my lab” workflow.
  - **Ad-hoc mode** — single host specified on the command line via `--node`/`--yaml`/`--bin`. Useful for one-off deploys before you’ve added the host to a manifest.
### Ad-hoc mode (no manifest needed)
```shell
./scripts/labgrid-exporter/deploy.sh \
    --node mini2 \
    --yaml /home/me/lg_mini2_exporter.yaml \
    --bin /home/me/.local/bin/labgrid-exporter
```
Anything after a `--` separator is forwarded to `install.sh` as
extra arguments — handy for a one-off `--coordinator` override:
```shell
./scripts/labgrid-exporter/deploy.sh \
    --node mini2 --yaml /home/me/exporter.yaml \
    -- --coordinator 10.0.0.41:20408
```
### Manifest mode — first-time setup
Copy the example manifest and edit it for your lab:

```shell
cd scripts/labgrid-exporter
cp nodes.conf.example nodes.conf
$EDITOR nodes.conf
```
Manifest format (pipe-separated, `#` comments allowed):

```
# host | yaml-path-on-host | extra-install.sh-args
bq   | /home/me/lg_bq_exporter.yaml   | --bin /home/me/.local/bin/labgrid-exporter
nuc  | /home/me/lg_nuc_exporter.yaml  |
nemo | /home/me/lg_nemo_exporter.yaml | --bin /home/me/venv/bin/labgrid-exporter
```
The third column forwards extra flags to `install.sh`. The most
common one is `--bin <path>`, for when `labgrid-exporter` isn’t on the
sudo login `PATH` (e.g. installed via `uv tool` to `~/.local/bin`
or inside a venv). `deploy.sh` always passes
`--stop-manual --force-yaml`.
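For reference, the manifest grammar above can be consumed with a few lines of shell. This is a standalone sketch, not `deploy.sh`’s actual implementation — the function name and output format are made up:

```shell
# Parse a pipe-separated manifest: host | yaml-path | extra-args.
# Comment and blank lines are skipped; fields are whitespace-trimmed.
parse_manifest() {
  grep -vE '^[[:space:]]*(#|$)' "$1" | while IFS='|' read -r host yaml extra; do
    host=$(echo "$host" | xargs)    # xargs trims surrounding whitespace
    yaml=$(echo "$yaml" | xargs)
    extra=$(echo "$extra" | xargs)
    echo "host=$host yaml=$yaml extra=$extra"
  done
}
```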
### Manifest mode — routine deploy
```shell
# From the repo root:
./scripts/labgrid-exporter/deploy.sh              # every host in the manifest
./scripts/labgrid-exporter/deploy.sh bq nemo      # subset (must be in manifest)
./scripts/labgrid-exporter/deploy.sh --dry-run    # print only

# Use a different manifest:
./scripts/labgrid-exporter/deploy.sh --manifest /path/to/other.conf
LG_DEPLOY_MANIFEST=/path/to/other.conf ./scripts/labgrid-exporter/deploy.sh
```
Each host prompts once for the sudo password. `deploy.sh` errors
out cleanly if a selected host isn’t in the manifest, so a typo in
the host name is caught instead of being a silent no-op.
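That guard can be sketched like this — an illustrative standalone version, with an invented function name; `deploy.sh`’s real check may differ in detail:

```shell
# Fail fast if any requested host is missing from the manifest.
check_hosts() {
  manifest=$1; shift
  # First pipe-separated field of each non-comment line, whitespace stripped.
  known=$(awk -F'|' '/^[[:space:]]*#/ {next} {gsub(/[[:space:]]/,"",$1); print $1}' "$manifest")
  for h in "$@"; do
    if ! printf '%s\n' "$known" | grep -qx "$h"; then
      echo "error: '$h' is not in $manifest" >&2
      return 1
    fi
  done
}
```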
One-off install on a fresh node (without going through `deploy.sh`):

```shell
# On your workstation — push the script + unit to the node:
rsync -av scripts/labgrid-exporter/ <node>:~/labgrid-exporter-install/

# Then run the installer on the node:
ssh -t <node> 'sudo ~/labgrid-exporter-install/install.sh \
    /path/to/your-exporter.yaml --stop-manual'
```
The installer:

1. Stops any leftover templated `labgrid-exporter@*` instances from the prior naming scheme (idempotent — does nothing if none exist).
2. With `--stop-manual`, kills any manually launched `labgrid-exporter` process running as the service user.
3. Copies the source yaml to `/etc/labgrid/exporter.yaml` (creating `/etc/labgrid/` first). If the file already exists with different content, the install fails unless you also pass `--force-yaml`, which keeps a timestamped `.bak` copy.
4. Writes `/etc/systemd/system/labgrid-exporter.service` from the template, baking in `User=` (defaults to `$SUDO_USER`).
5. Writes `/etc/default/labgrid-exporter` with `LG_COORDINATOR`, `LG_EXPORTER_NAME`, `LG_EXPORTER_YAML`, and `PATH`.
6. Runs `systemctl daemon-reload` and `enable --now` (skip with `--no-start`).
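The copy-with-backup behavior of `--force-yaml` can be illustrated in isolation. A sketch under assumptions — the function name is invented, the destination is parameterized so it doesn’t require root, and `install.sh`’s real backup naming may differ:

```shell
# Copy SRC to DST; refuse to clobber different content unless FORCE=1,
# in which case the old file is kept as a timestamped .bak.
install_yaml() {
  src=$1; dst=$2; force=${3:-0}
  if [ -e "$dst" ] && ! cmp -s "$src" "$dst"; then
    if [ "$force" = 1 ]; then
      cp "$dst" "$dst.$(date +%Y%m%d-%H%M%S).bak"
    else
      echo "refusing to overwrite $dst (pass --force-yaml)" >&2
      return 1
    fi
  fi
  install -D -m 0644 "$src" "$dst"   # -D creates the parent directory
}
```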
Concrete examples — the three current production nodes:

```shell
# bq
ssh -t bq 'sudo ~/labgrid-exporter-install/install.sh \
    /home/tcollins/dev/dt-fix/lg_adrv9371_zc706_tftp_exporter.yaml \
    --stop-manual'

# nuc
ssh -t nuc 'sudo ~/labgrid-exporter-install/install.sh \
    /home/tcollins/dev/lg-coordinator/lg_fmcdaq3_vcu118_exporter.yaml \
    --stop-manual'

# nemo
ssh -t nemo 'sudo ~/labgrid-exporter-install/install.sh \
    /home/tcollins/lg_adrv9009_zc706_tftp_exporter.yaml \
    --stop-manual'
```
## Installer options
| Option | Meaning |
|---|---|
| | Instance name registered with the coordinator. |
| `--coordinator <host:port>` | Coordinator address (default `10.0.0.41:20408`). |
| | Service runs as this user. Default `$SUDO_USER`. |
| `--bin <path>` | Path to the `labgrid-exporter` binary. |
| | Prepend DIR to the service `PATH`. |
| `--no-start` | Install files but don’t enable or start the service. |
| `--force-yaml` | Overwrite an existing `/etc/labgrid/exporter.yaml` (keeps a timestamped `.bak`). |
| `--stop-manual` | Kill any manually launched `labgrid-exporter` process running as the service user. |
## Day-to-day operation

```shell
# Edit the resource config and reload:
sudo $EDITOR /etc/labgrid/exporter.yaml
sudo systemctl restart labgrid-exporter

# Live logs:
journalctl -u labgrid-exporter -f

# Status:
systemctl status labgrid-exporter

# Verify the place(s) registered with the coordinator (from any host):
labgrid-client -x 10.0.0.41:20408 places
```
`Restart=on-failure` brings the exporter back after a crash, and the
unit’s `WantedBy=multi-user.target` makes it start automatically
after a reboot. Re-running `install.sh` is idempotent — safe to use
to pick up a new coordinator address, binary path, or yaml file
(pass `--force-yaml` if the yaml content has changed).
## Migrating from the older templated scheme
Earlier nodes used `labgrid-exporter@<place>.service` with the yaml
referenced in-place from a user home directory. The new installer
detects and disables any leftover templated instances automatically,
so the migration is a single `install.sh` invocation per node — no
manual cleanup of the old unit needed.