Labgrid exporter systemd service

Each hw-node (bq, nuc, nemo, …) runs a labgrid-exporter process that publishes its local hardware to the coordinator at 10.0.0.41:20408. We run it as a systemd service under a single host-wide convention so every node looks the same.

Convention

One exporter per host. All paths are fixed:

  /etc/labgrid/exporter.yaml
      The exporter resource config. Owned by root, world-readable. Edit this
      file to change what the exporter publishes.

  /etc/default/labgrid-exporter
      Service knobs: coordinator address, instance name, yaml path, PATH (for
      ser2net). Generated by the installer.

  /etc/systemd/system/labgrid-exporter.service
      The unit. The labgrid-exporter binary path is baked into ExecStart at
      install time (systemd does not expand environment variables in the
      executable position). The arguments come from the env file above.
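For orientation, the generated pair plausibly looks like the sketch below. This is illustrative, not the real installer output: the user, binary path, and exporter name are examples, and the exact exporter command-line flags depend on the labgrid version. The LG_* variable names match what install.sh writes (see step 5 under Installation).

```ini
# /etc/default/labgrid-exporter (illustrative; generated by install.sh)
LG_COORDINATOR=10.0.0.41:20408
LG_EXPORTER_NAME=bq
LG_EXPORTER_YAML=/etc/labgrid/exporter.yaml
# PATH is here so the unit can find ser2net (see --ser2net-path)
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

# /etc/systemd/system/labgrid-exporter.service (illustrative)
[Unit]
Description=labgrid exporter
After=network-online.target

[Service]
User=tcollins
EnvironmentFile=/etc/default/labgrid-exporter
# Binary path baked in at install time; systemd will not expand an
# environment variable in the executable position.
ExecStart=/home/tcollins/.local/bin/labgrid-exporter \
    --coordinator ${LG_COORDINATOR} --name ${LG_EXPORTER_NAME} ${LG_EXPORTER_YAML}
Restart=on-failure

[Install]
WantedBy=multi-user.target
```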

The unit is not templated — there is exactly one labgrid-exporter.service per host. If you need to expose more hardware, add it to /etc/labgrid/exporter.yaml rather than spinning up a second instance.
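For reference, a minimal exporter.yaml might look like the sketch below. The group name, udev match value, and address are made-up examples, not taken from the production configs; see the labgrid exporter documentation for the full resource catalogue.

```yaml
# Illustrative /etc/labgrid/exporter.yaml — names and values are examples.
zc706-board:
  USBSerialPort:
    match:
      ID_SERIAL_SHORT: 'AB12CD34'   # hypothetical serial adapter ID
  NetworkService:
    address: 192.168.1.50           # hypothetical DUT address
    username: root
```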

Installation

There are two scripts in scripts/labgrid-exporter/:

  • install.sh — the per-node installer. Generic; takes a yaml path plus options.

  • deploy.sh — orchestrator that pushes the installer to one or more hosts and runs it on each. Two modes:

    • Manifest mode (default) — host list comes from a manifest file (nodes.conf next to the script, gitignored). Useful for the routine “deploy to my lab” workflow.

    • Ad-hoc mode — single host specified on the command line via --node/--yaml/--bin. Useful for one-off deploys before you’ve added the host to a manifest.

Ad-hoc mode (no manifest needed)

./scripts/labgrid-exporter/deploy.sh \
    --node mini2 \
    --yaml /home/me/lg_mini2_exporter.yaml \
    --bin  /home/me/.local/bin/labgrid-exporter

Anything after a -- separator is forwarded to install.sh as extra arguments — handy for a one-off --coordinator override:

./scripts/labgrid-exporter/deploy.sh \
    --node mini2 --yaml /home/me/exporter.yaml \
    -- --coordinator 10.0.0.41:20408
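The forwarding works by the usual shell idiom of consuming known options until the `--` separator, then capturing everything after it verbatim. A minimal sketch of the pattern (illustrative, not the actual deploy.sh):

```shell
# Consume known options until `--`, then capture the rest verbatim so it
# can be passed straight through to install.sh as "${extra[@]}".
parse_args() {
    extra=()
    while [ $# -gt 0 ]; do
        case $1 in
            --) shift; extra=("$@"); return ;;  # forward the rest untouched
            --node) node=$2; shift 2 ;;
            --yaml) yaml=$2; shift 2 ;;
            *) shift ;;                         # ignore anything else here
        esac
    done
}
```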

Manifest mode — first-time setup

Copy the example manifest and edit it for your lab:

cd scripts/labgrid-exporter
cp nodes.conf.example nodes.conf
$EDITOR nodes.conf

Manifest format (pipe-separated, # comments allowed):

# host | yaml-path-on-host | extra-install.sh-args
bq   | /home/me/lg_bq_exporter.yaml   | --bin /home/me/.local/bin/labgrid-exporter
nuc  | /home/me/lg_nuc_exporter.yaml  |
nemo | /home/me/lg_nemo_exporter.yaml | --bin /home/me/venv/bin/labgrid-exporter

The third column forwards extra flags to install.sh. The most common one is --bin <path>, for when labgrid-exporter isn’t on the PATH of a sudo login shell (e.g. installed via uv tool to ~/.local/bin or inside a venv). deploy.sh always passes --stop-manual --force-yaml.
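Reading the manifest amounts to skipping comments and blanks, splitting on '|', and trimming whitespace around each field. A minimal sketch of that parsing (illustrative, not the actual deploy.sh; it relies on word splitting to trim, so fields must not contain glob characters):

```shell
# Print one tab-separated "host yaml extra" line per manifest entry.
read_manifest() {
    grep -Ev '^[[:space:]]*(#|$)' "$1" |      # drop comments and blank lines
    while IFS='|' read -r host yaml extra; do
        # unquoted $(echo ...) trims leading/trailing whitespace
        host=$(echo $host); yaml=$(echo $yaml); extra=$(echo $extra)
        printf '%s\t%s\t%s\n' "$host" "$yaml" "$extra"
    done
}
```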

Manifest mode — routine deploy

# From the repo root:
./scripts/labgrid-exporter/deploy.sh             # every host in the manifest
./scripts/labgrid-exporter/deploy.sh bq nemo     # subset (must be in manifest)
./scripts/labgrid-exporter/deploy.sh --dry-run   # print only

# Use a different manifest:
./scripts/labgrid-exporter/deploy.sh --manifest /path/to/other.conf
LG_DEPLOY_MANIFEST=/path/to/other.conf ./scripts/labgrid-exporter/deploy.sh

Each host prompts once for the sudo password. deploy.sh errors out cleanly if a selected host isn’t in the manifest, so a typo in the host name is caught instead of being a silent no-op.

One-off install on a fresh node (without going through deploy.sh):

# On your workstation — push the script + unit to the node:
rsync -av scripts/labgrid-exporter/ <node>:~/labgrid-exporter-install/

# On the node:
ssh -t <node> 'sudo ~/labgrid-exporter-install/install.sh \
    /path/to/your-exporter.yaml --stop-manual'

The installer:

  1. Stops any leftover templated labgrid-exporter@* instances from the prior naming scheme (idempotent — does nothing if none exist).

  2. With --stop-manual, kills any manually launched labgrid-exporter process running as the service user.

  3. Copies the source yaml to /etc/labgrid/exporter.yaml (creating /etc/labgrid/ first). If the file already exists with different content, the install fails unless you also pass --force-yaml, which keeps a timestamped .bak copy.

  4. Writes /etc/systemd/system/labgrid-exporter.service from the template, baking in User= (defaults to $SUDO_USER).

  5. Writes /etc/default/labgrid-exporter with LG_COORDINATOR, LG_EXPORTER_NAME, LG_EXPORTER_YAML, and PATH.

  6. Runs systemctl daemon-reload and enable --now (skip with --no-start).
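Step 3's overwrite protection can be sketched as follows (illustrative, not the actual install.sh; the function name and argument convention are invented for the example):

```shell
# Copy the source yaml into place, refusing to clobber different existing
# content unless forced, in which case a timestamped .bak is kept first.
install_yaml() {
    local src=$1 dst=$2 force=${3:-no}
    mkdir -p "$(dirname "$dst")"
    if [ -f "$dst" ] && ! cmp -s "$src" "$dst"; then
        if [ "$force" != force ]; then
            echo "refusing to overwrite $dst (pass --force-yaml)" >&2
            return 1
        fi
        cp "$dst" "$dst.$(date +%Y%m%d%H%M%S).bak"   # keep a backup
    fi
    cp "$src" "$dst"
}
```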

Concrete examples — the three current production nodes:

# bq
ssh -t bq 'sudo ~/labgrid-exporter-install/install.sh \
    /home/tcollins/dev/dt-fix/lg_adrv9371_zc706_tftp_exporter.yaml \
    --stop-manual'

# nuc
ssh -t nuc 'sudo ~/labgrid-exporter-install/install.sh \
    /home/tcollins/dev/lg-coordinator/lg_fmcdaq3_vcu118_exporter.yaml \
    --stop-manual'

# nemo
ssh -t nemo 'sudo ~/labgrid-exporter-install/install.sh \
    /home/tcollins/lg_adrv9009_zc706_tftp_exporter.yaml \
    --stop-manual'

Installer options

  --name NAME          Instance name registered with the coordinator.
                       Default: hostname -s.

  --coordinator ADDR   host:port of the coordinator. Default: 10.0.0.41:20408.

  --user USER          Service runs as this user. Default: $SUDO_USER.

  --bin PATH           Path to labgrid-exporter. Default: auto-detect on the
                       service user’s PATH (picks up
                       ~/.local/share/uv/tools/labgrid/bin/labgrid-exporter
                       from uv tool install labgrid).

  --ser2net-path DIR   Prepend DIR to the service PATH so the unit finds a
                       custom ser2net (e.g. $HOME/opt/ser2net-4.6.1/sbin).

  --no-start           Install files but don’t enable --now.

  --force-yaml         Overwrite an existing /etc/labgrid/exporter.yaml if it
                       differs from the source. A timestamped .bak is kept.

  --stop-manual        Kill any manually launched labgrid-exporter running as
                       the service user before installing. Use this on first
                       conversion from a manual setup.

Day-to-day operation

# Edit the resource config and reload:
sudo $EDITOR /etc/labgrid/exporter.yaml
sudo systemctl restart labgrid-exporter

# Live logs:
journalctl -u labgrid-exporter -f

# Status:
systemctl status labgrid-exporter

# Verify the resources this node exports to the coordinator (from any host):
labgrid-client -x 10.0.0.41:20408 resources

Restart=on-failure brings the exporter back after a crash, and the unit’s WantedBy=multi-user.target makes it start automatically after a reboot. Re-running install.sh is idempotent — safe to use to pick up a new coordinator address, binary path, or yaml file (--force-yaml if the yaml content has changed).

Migrating from the older templated scheme

Earlier nodes used labgrid-exporter@<place>.service with the yaml referenced in-place from a user home directory. The new installer detects and disables any leftover templated instances automatically, so the migration is a single install.sh invocation per node — no manual cleanup of the old unit needed.