Packaging pipeline
Doctools has a continuous integration and deployment pipeline that works as follows:
                   ┌──────────────────┐
                ┌─►│Build Doc Latest  ├─┐
                │  └──────────────────┘ │
                │                       │
┌─────────────┐ │  ┌────────────────┐   │  ┌──────────────┐
│Build Package├─┼─►│Build Doc on Min├───┼─►│Deploy Package│
└─────────────┘ │  └────────────────┘   │  └──────────────┘
                │                       │
                │  ┌──────────┐         │
                ├─►│Custom Doc├─────────┤
                │  └──────────┘         │
                │                       │
                │  ┌─────┐              │
                └─►│Tests├──────────────┘
                   └─────┘
The Build Package step “compiles” JavaScript and SASS, fetches third-party assets and licenses, and generates the Python package.
Then, in the middle stage, four parallel runs are launched:
Build Doc Latest: uses the latest stable dependency releases to generate this documentation, and stores it as an artifact.
Build Doc on Min: uses the minimum required dependency versions to generate this documentation, but the output is discarded.
Custom Doc: calls the Custom Doc CLI command to check if it succeeds in generating a full custom PDF document.
Tests: runs tests using pytest, in particular methods that are not called during the Build Doc * pipelines.
Both Build Doc runs are set to fail-on-warning during documentation generation.
Finally, the Deploy Package step:
Grabs the version and checks if the tag version already exists:
If so, it is set to update only the symbolic pre-release release.
If not, it is set to update the symbolic latest and pre-release releases.
Additionally, if it is a new version:
Create the git tag and push to origin.
Create the tagged release.
Upload the artifact to the tagged release.
Upload the artifact to the symbolic releases (pre-release, latest).
Finally, the Build Doc Latest artifact is downloaded and deployed to the gh-pages branch.
By design, the live page on github.io follows the pre-release/latest commit-ish; properly versioned live documentation should be managed by an external system that watches the git tags (e.g. readthedocs).
This approach allows having a single defined version in adi_doctools/__init__.py,
and having tags created and releases created/updated without much fuss.
The philosophy is to have latest updated on a tag increment's first
successful run, and pre-release updated on successful runs without a tag change.
These releases exist to provide a pointer to the latest/pre-release packages, e.g.
releases/download/latest/adi-doctools.tar.gz.
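For example, assuming the analogdevicesinc/doctools repository, the latest package can be installed straight from the symbolic release:
~$
pip install https://github.com/analogdevicesinc/doctools/releases/download/latest/adi-doctools.tar.gz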
Mitigations for corner cases that are not handled:
The pre-release and latest releases must exist prior to the first run.
The gh-pages branch must exist with at least one commit.
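A sketch of how these could be bootstrapped, assuming the gh CLI is authenticated against the repository:
~/doctools$
gh release create latest --title latest --notes "symbolic release"
~/doctools$
gh release create pre-release --prerelease --title pre-release --notes "symbolic release"
~/doctools$
git switch --orphan gh-pages ; git commit --allow-empty -m "bootstrap gh-pages" ; git push origin gh-pages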
Configure podman
Below are suggested instructions for setting up podman on a Linux environment.
Adjust to your preference as needed, and skip the WSL2-specific steps if not using WSL2.
Install podman from your package manager.
Ensure cgroup v2 in WSL2's .wslconfig:
[wsl2]
kernelCommandLine = cgroup_no_v1=all systemd.unified_cgroup_hierarchy=1
Restart WSL2.
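To confirm cgroup v2 is active after the restart (this file only exists on the unified hierarchy):
~$
ls /sys/fs/cgroup/cgroup.controllers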
Enable the podman service for your user:
~$
systemctl enable --now --user podman.socket
~$
systemctl start --user podman.socket
Set the DOCKER_HOST variable in your ~/.bashrc:
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
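To verify the socket is serving the Docker-compatible API, a quick check (assuming curl is installed):
~$
curl -s --unix-socket $XDG_RUNTIME_DIR/podman/podman.sock http://localhost/_ping
OK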
Build the container image
To build the container image, use your favorite container engine:
~$
cd ~/doctools
~/doctools$
podman build --tag adi/doctools:v1 ci
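Afterwards, confirm the image is listed:
~/doctools$
podman images adi/doctools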
Interactive run
At their core, the workflows are straightforward; roughly, they do the following:
The Tests step:
~$
cd tests ; pytest
Build Doc *:
~$
cd docs ; make html
These run at both the minimum and the maximum supported environment versions.
Custom Doc:
~$
mkdir /tmp/test-pdf ; cd $_
/tmp/test-pdf$
adoc custom-doc ; adoc custom-doc
Doing the relevant step on the host covers most issues that the CI would catch.
You can use the container image with a suggested bash helper to log in interactively to an image, mounting the provided path, and run the steps inside the container.
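A minimal sketch of such a pdr helper, assuming it should open an interactive shell in the image with the provided path bind-mounted as the working directory (the actual helper may differ):
pdr () {
    # $1: image, $2: path to mount (defaults to the current directory).
    local p
    p=$(realpath "${2:-.}")
    # Mount the path at the same location inside the container
    # and start the shell there.
    podman run -it --rm -v "$p":"$p" -w "$p" "$1" bash
}
With such a helper in place, an example session follows: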
~/doctools$
pdr adi/doctools:v1 .
~/doctools$
python3.13 -m venv venv
~/doctools$
source venv/bin/activate ; \
pip3.13 install -e . ; \
pip3.13 install pytest
~/doctools$
cd tests ; pytest
~/doctools/tests$
exit
Full local run
To have a full continuous integration mock-run, act can be used.
act is a CLI written in Go that allows running GitHub Actions locally.
This assumes you have the necessary tools already installed (a general guide
is provided here) and have already built the image.
Install the act binary into an executable path:
~$
cd ~/.local
~/.local$
curl --proto '=https' --tlsv1.2 -sSf \
https://raw.githubusercontent.com/nektos/act/master/install.sh | \
sudo bash
~/.local$
act --version
act version 0.2.74
Now, run your continuous integration:
~/doctools$
act --remote-name private
INFO[0000] Using docker host 'unix:///run/user/1000//podman/podman.sock',
and daemon socket 'unix:///run/user/1000//podman/podman.sock'
INFO[0000] Start server on http://10.44.3.54:34567
[build/build-kernel.yml/build] ⭐ Run Set up job
[...]
Update private with your preferred remote name (this does nothing beyond suppressing warnings).
Caution
Even with the pull_request event type, no rebasing is done on the mock run.
Rebase on your side before running act.
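For example, assuming the upstream is the origin remote's main branch (adjust to your setup):
~/doctools$
git fetch origin ; git rebase origin/main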
Additional arguments are read from the .actrc file on invocation.
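A hypothetical .actrc sketch (the repository's actual file may differ), pinning the job platform to the locally built image and loading the event file described further below:
-P ubuntu-latest=adi/doctools:v1
-e ci/act-event.json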
To run a specific workflow, use -W, e.g.:
~/doctools$
act pull_request --remote-name public \
-W .github/workflows/build-kernel.yml
By default, it will run the checks on the top 5 commits. The snippet below explicitly sets the base and head of the desired commit range:
~$
base=@~15 ; head=@ ; \
jq -n --arg base "$base" --arg head "$head" \
'{"act": true,
"pull_request": { "base": { "sha": $base }, "head": { "sha": $head } }}' \
| tee ci/act-event.json
~$
act --remote-name public
In the example, it takes the 15 commits from the current HEAD down to, but not including, @~15. Please note that this does not change the checkout, just the commit range the checkers run on. It is useful for filtering out “wip” commits, for example.
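To inspect which commits fall in that range:
~/doctools$
git log --oneline @~15..@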
Self-hosted runner
To host your GitHub Actions Runner, set up your secrets:
~$
# e.g. analogdevicesinc/doctools
~$
printf ORG_REPOSITORY | podman secret create adi_doctools_org_repository -
~$
# e.g. MyVerYSecRunnerToken
~$
printf RUNNER_TOKEN | podman secret create adi_doctools_runner_token -
Attention
If github_token from Self-hosted cluster is set, the runner_token
is ignored and a new one is requested.
~/doctools$
podman run \
--secret adi_doctools_org_repository,target=/run/secrets/org_repository,uid=1 \
--secret adi_doctools_runner_token,target=/run/secrets/runner_token,uid=1 \
adi/doctools:v1
Self-hosted cluster
To host a cluster of self-hosted runners, the recommended approach is to use systemd services instead of, for example, podman-compose.
Below is a suggested systemd service at ~/.config/systemd/user/podman-doctools@.service.
[Unit]
Description=Podman adi_doctools ci %i
Wants=network-online.target
After=network-online.target

[Service]
Restart=on-failure
ExecStartPre=/usr/bin/rm -f /%t/%n-pid /%t/%n-cid
ExecStart=/usr/bin/podman run \
--secret adi_doctools_org_repository,target=/run/secrets/org_repository,uid=1 \
--secret adi_doctools_runner_token,target=/run/secrets/runner_token,uid=1 \
--conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid -d adi/doctools:v1 top
ExecStop=/usr/bin/sh -c "/usr/bin/podman rm -f `cat /%t/%n-cid`"
KillMode=none
Type=forking
PIDFile=/%t/%n-pid

[Install]
WantedBy=default.target
Note
Instead of passing runner_token, you can also pass a github_token to generate the runner_token on demand.
~$
# e.g. MyVerYSecRetToken
~$
printf GITHUB_TOKEN | podman secret create adi_doctools_github_token -
Then update the systemd service accordingly.
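For example, the secret flags in ExecStart would become (a sketch, assuming the container reads the token from /run/secrets/github_token):
--secret adi_doctools_org_repository,target=/run/secrets/org_repository,uid=1 \
--secret adi_doctools_github_token,target=/run/secrets/github_token,uid=1 \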
Enable and start the service:
~$
systemctl --user enable podman-doctools@0.service
~$
systemctl --user start podman-doctools@0.service
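Since the unit is templated on %i, multiple runners can be enabled by instantiating it more than once, for example:
~$
systemctl --user enable --now podman-doctools@{0..3}.service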
Attention
User services are terminated on logout, unless you first enable lingering with
loginctl enable-linger <your-user>.