<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>thein3rovert - Posts &amp; Notes</title><description>An opinionated starter theme for Astro</description><link>https://blog.thein3rovert.dev/</link><language>en-GB</language><item><title>From 20 Minutes to 1 Minute: Optimizing NixOS Builds</title><link>https://blog.thein3rovert.dev/posts/nixos/from-20-minute-to-1-minute/</link><guid isPermaLink="true">https://blog.thein3rovert.dev/posts/nixos/from-20-minute-to-1-minute/</guid><description>How I achieved a 20x speedup in NixOS build times by moving from GitHub&apos;s hosted runners to a self-hosted setup</description><pubDate>Sun, 15 Mar 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;./images/nixos-build.png&quot; alt=&quot;alt text&quot;&gt;&lt;/p&gt;
&lt;p&gt;I recently wrote about &lt;a href=&quot;link-to-previous-post&quot;&gt;building a self-hosted Terraform CI pipeline&lt;/a&gt;. The main aim there was to deploy my Terraform infrastructure changes automatically on every push to the default branch, which works well and saves me a lot of time. But that was never the whole goal. What I really wanted was to fix the painfully slow feedback loop in my NixOS builds. On GitHub&amp;#39;s default &lt;code&gt;ubuntu-latest&lt;/code&gt; hosted runner, each build took over twenty minutes, and since I build more than one host, the total time was simply unacceptable. Today I&amp;#39;m running the same builds in under a minute.&lt;/p&gt;
&lt;h2&gt;GitHub&amp;#39;s Hosted Runners&lt;/h2&gt;
&lt;p&gt;My NixOS configurations were building on GitHub&amp;#39;s hosted runners and taking forever. Not just slow, but &amp;quot;go make coffee and hope it finishes&amp;quot; slow. Each build would take around 20 minutes, sometimes stretching past 27 minutes depending on what GitHub decided to give me that day. When you&amp;#39;re iterating on infrastructure changes, waiting half an hour to discover a typo means you&amp;#39;ve lost your flow state twice over.&lt;/p&gt;
&lt;p&gt;The slowness came from a few places. GitHub&amp;#39;s hosted runners start fresh every time, so there&amp;#39;s no persistent Nix store between builds: every derivation gets built or downloaded from scratch on each run. Then there&amp;#39;s the network overhead of pulling from Cachix over the public internet. And finally, the runners themselves aren&amp;#39;t particularly fast machines.&lt;/p&gt;
&lt;p&gt;I already had a self-hosted runner set up from my previous post; it&amp;#39;s the same runner I use for my Terraform deployments, sitting on my Proxmox cluster with direct access to my local network. It was time to put it to work on the real problem.&lt;/p&gt;
&lt;h2&gt;Moving NixOS Builds to Self-Hosted&lt;/h2&gt;
&lt;p&gt;The existing runner was already configured with most of what I needed: the GitHub Actions agent, Tailscale for network access, and basic tooling. All my hosts run inside my tailnet, which helps with security hardening; as we all know, self-hosted runners aren&amp;#39;t particularly safe when exposed to a public repository. The next step was &lt;code&gt;Nix&lt;/code&gt; itself, since I didn&amp;#39;t want it downloaded at runtime on every run. I installed it with the Determinate Systems installer, which is what I&amp;#39;d recommend for anyone and is also widely used in CI workflows:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;curl --proto &amp;#39;=https&amp;#39; --tlsv1.2 -sSf -L https://install.determinate.systems/nix | sh -s -- install
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This installer is better for CI systems than the official one. It sets up multi-user mode properly and integrates with systemd without requiring manual tweaking.&lt;/p&gt;
&lt;p&gt;The next issue I ran into was permissions. &lt;code&gt;Cachix&lt;/code&gt; needs to configure binary caches, which requires the runner user to be in Nix&amp;#39;s trusted-users list. The Determinate installer creates a clean config structure with a separate file for user modifications:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;echo &amp;quot;trusted-users = root runner&amp;quot; &amp;gt;&amp;gt; /etc/nix/nix.custom.conf
systemctl restart nix-daemon
systemctl restart actions.runner.thein3rovert-nixos-config.github-runner.service
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After that, I updated my build workflow to use the self-hosted runner. The changes were minimal since I already had the validation steps in place:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;build-nixos:
  runs-on: [self-hosted, terraform]
  needs: [validate-ssh, validate-cachix]
  steps:
    - name: Setup SSH
      uses: webfactory/ssh-agent@v0.9.0
      with:
        ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}
    - name: Checkout
      uses: actions/checkout@main
      with:
        fetch-depth: 1
    - name: Install Nix
      uses: DeterminateSystems/nix-installer-action@main
    - name: Cachix
      uses: cachix/cachix-action@master
      with:
        authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
        name: thein3rovert
    - name: Build nixos
      run: nix build --accept-flake-config --print-out-paths .#nixosConfigurations.nixos.config.system.build.toplevel
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The only change is &lt;code&gt;runs-on: [self-hosted, terraform]&lt;/code&gt; instead of &lt;code&gt;runs-on: ubuntu-latest&lt;/code&gt;. One thing I also learned is that a single self-hosted runner can serve more than one workflow, with labels used to route jobs to it. Everything else stayed the same. I kept the Nix installer action in the workflow even though Nix was already on the runner, since it handles some additional setup and doesn&amp;#39;t hurt to run.&lt;/p&gt;
&lt;h2&gt;Disk Space Issues&lt;/h2&gt;
&lt;p&gt;The first run of the updated workflow failed while building the third host, &lt;code&gt;nixos&lt;/code&gt;; the other two hosts, &lt;code&gt;bellamy&lt;/code&gt; and &lt;code&gt;marcus&lt;/code&gt;, were fine. The cause was disk space: I had provisioned the runner with a 30GB disk, thinking that would be enough. That sounds reasonable, but &lt;code&gt;nixos&lt;/code&gt; is a large management server that manages every other machine, so it pulls in far more packages and dependencies, and the build ran out of space halfway through.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;error: writing to file: No space left on device
error: Cannot build &amp;#39;/nix/store/24rbhg4di3sb33h9phpf0mzb7kzcjr80-nixos-system-nixos-26.05.20260304.80bdc1e.drv&amp;#39;.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;NixOS builds accumulate a lot of intermediate derivations. Each configuration shares some packages, but they each have their own closure that needs to be built or downloaded. Thirty gigabytes wasn&amp;#39;t enough.&lt;/p&gt;
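&lt;p&gt;If you&amp;#39;d rather keep the disk small than grow it, the store can be inspected and garbage-collected between runs. A minimal sketch, assuming Nix is installed on the runner (the 7-day retention window is an arbitrary choice, not part of my workflow):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# See how much space the Nix store is currently using
du -sh /nix/store

# Delete store paths not referenced by any GC root
# and older than 7 days (keeps recent builds cached)
nix-collect-garbage --delete-older-than 7d
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The tradeoff is obvious: the more you collect, the more the next build has to re-fetch from Cachix, so on a CI runner a bigger disk is usually the better deal.&lt;/p&gt;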
&lt;p&gt;I already had the runner defined in Terraform, so fixing this was just updating the disk size:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-hcl&quot;&gt;module &amp;quot;github_runner&amp;quot; {
  source = &amp;quot;../../modules/lxc&amp;quot;

  # ... other config ...
  disk_size = &amp;quot;80G&amp;quot;  # was 30G
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A quick &lt;code&gt;terraform apply&lt;/code&gt; later and the runner had enough space to handle all three configurations with room to spare.&lt;/p&gt;
&lt;h2&gt;The Finale..&lt;/h2&gt;
&lt;p&gt;First build after the disk resize: 6 minutes. Not the 1 minute I was hoping for, but a massive improvement over 20+ minutes; I assume this was because it still needed to cache new derivations after the previous failed builds. The Nix store was also being populated from Cachix over my local network instead of the public internet, which helped. And since the runner has persistent storage, anything built stays around for subsequent runs.&lt;/p&gt;
&lt;p&gt;Second build: 1 minute and 30 seconds.&lt;/p&gt;
&lt;p&gt;Third build: 1 minute flat.&lt;/p&gt;
&lt;p&gt;That&amp;#39;s a 20x speedup compared to GitHub&amp;#39;s hosted runners. The first build still takes a few minutes because it&amp;#39;s populating the local Nix store, but every build after that is essentially instant. The persistent Nix store means most derivations are already present. When something does need to be fetched, it&amp;#39;s coming from Cachix over a low-latency local network connection instead of traversing the public internet.&lt;/p&gt;
&lt;p&gt;The difference in iteration speed is dramatic. Before, I&amp;#39;d push a change, go do something else, and come back to check if it worked. Now I push and immediately see results.&lt;/p&gt;
&lt;p&gt;A little more on why this works: the speedup comes from a few factors working together. The persistent Nix store is the biggest win. As mentioned above, GitHub&amp;#39;s hosted runners start fresh every time, so every package has to be downloaded or built; a self-hosted runner keeps everything around between builds, so only changed derivations need work.&lt;/p&gt;
&lt;p&gt;Local network access matters more than I expected. Cachix is fast, but there&amp;#39;s still latency involved in pulling packages over the internet... in this case my runner sits on the same network as my infrastructure, so when it does need to fetch something, it&amp;#39;s happening at LAN speeds.&lt;/p&gt;
&lt;p&gt;Running on my own hardware means I control the resources. The LXC container has 2 cores and 4GB of RAM, which isn&amp;#39;t a lot, but it&amp;#39;s consistently available. GitHub&amp;#39;s runners vary in performance depending on what you get assigned. Consistency helps with predictable build times.&lt;/p&gt;
&lt;h2&gt;What&amp;#39;s Actually Running&lt;/h2&gt;
&lt;p&gt;I&amp;#39;m building three NixOS configurations in parallel: my main desktop (nixos), a server (marcus), and another machine (bellamy). Each build job runs independently on the same runner:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;strategy:
  matrix:
    config:
      - nixos
      - marcus
      - bellamy
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This matrix strategy means all three builds kick off simultaneously. Since they share most of their dependencies through the Nix store, the second and third builds benefit from whatever the first one already fetched. The whole workflow completes in whatever time the longest individual build takes, which is consistently around 1 minute after the initial cache population.&lt;/p&gt;
&lt;h2&gt;The Cost of Self-Hosting My Own Runner&lt;/h2&gt;
&lt;p&gt;I&amp;#39;ve covered the benefits, but I personally believe every good thing has a downside or a cost, and there are tradeoffs to running your own infrastructure. The runner needs maintenance, and when something breaks, I&amp;#39;m the one who has to fix it. Compare that to GitHub&amp;#39;s hosted runners, which just work, even if they&amp;#39;re slow.&lt;/p&gt;
&lt;p&gt;Storage is another. NixOS builds are large, and you need to plan for that. Eighty gigabytes feels like overkill for a CI runner, but it&amp;#39;s necessary when you&amp;#39;re building multiple system configurations, and I can see myself adding more hosts later on.&lt;/p&gt;
&lt;p&gt;Security is the other concern. The runner has network access to my infrastructure and credentials to push to Cachix. It runs on my network behind Tailscale, which helps, but it&amp;#39;s still a potential attack surface. GitHub&amp;#39;s hosted runners, by contrast, are ephemeral and isolated, which is safer by default.&lt;/p&gt;
&lt;p&gt;For my use case, though, the tradeoffs are worth it, and I plan to spend more time hardening the setup. The speed improvement is substantial, and I was already running this infrastructure anyway. Adding a CI runner to an existing Proxmox cluster doesn&amp;#39;t add much operational overhead.&lt;/p&gt;
&lt;h2&gt;Future Improvements&lt;/h2&gt;
&lt;p&gt;Right now the runner just builds NixOS configurations. The next step is actually deploying them. I could have the runner push builds directly to machines instead of building locally on each host. That would let me deploy to multiple machines simultaneously without each one needing to build independently.&lt;/p&gt;
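&lt;p&gt;A sketch of what that deployment step could look like, assuming the runner has SSH access to the target and using &lt;code&gt;nixos-rebuild&lt;/code&gt;&amp;#39;s built-in remote deployment (the host name here is one of mine; adapt it to yours):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Build on the runner, then copy the closure to the target
# host and activate it there; the host builds nothing itself
nixos-rebuild switch \
  --flake .#marcus \
  --target-host root@marcus
&lt;/code&gt;&lt;/pre&gt;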
&lt;p&gt;I&amp;#39;m also thinking about adding build artifact caching beyond what Cachix provides. The runner could maintain a local binary cache that&amp;#39;s even faster than pulling from Cachix. For frequently rebuilt derivations, that might shave off a few more seconds.&lt;/p&gt;
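&lt;p&gt;One way to do that, as a sketch: serve the runner&amp;#39;s own store with &lt;code&gt;nix-serve&lt;/code&gt; and point the hosts at it as an extra substituter. The port and hostname below are assumptions, not my current setup:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# On the runner: expose the local /nix/store as a binary cache
nix run nixpkgs#nix-serve -- --port 5000

# On a client: add it as a substituter ahead of Cachix
echo &amp;quot;extra-substituters = http://github-runner:5000&amp;quot; &amp;gt;&amp;gt; /etc/nix/nix.custom.conf
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A substituter also needs a trusted signing key (or an entry in &lt;code&gt;trusted-substituters&lt;/code&gt;), so there&amp;#39;s a bit more to wire up than these two lines.&lt;/p&gt;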
&lt;p&gt;The current setup is good enough though. One minute builds are fast enough. When I push a change, I know immediately if it worked. That&amp;#39;s what I was after.&lt;/p&gt;
&lt;p&gt;If you&amp;#39;re running NixOS and getting frustrated with slow CI builds, setting up a self-hosted runner might be worth the effort. The initial setup takes some time, but the productivity gain is real. Twenty minutes to one minute is &lt;code&gt;GOAT&lt;/code&gt;...&lt;/p&gt;
&lt;p&gt;Thanks for reading...happy learning.&lt;/p&gt;
</content:encoded></item><item><title>Building a Self-Hosted GitHub Runner</title><link>https://blog.thein3rovert.dev/posts/nixos/building-a-self-hosted-github-runner/</link><guid isPermaLink="true">https://blog.thein3rovert.dev/posts/nixos/building-a-self-hosted-github-runner/</guid><description>Setting up a self-hosted GitHub Actions runner on Proxmox to speed up Terraform and NixOS builds</description><pubDate>Sat, 14 Mar 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;The problem with most of my infrastructure projects is the feedback loop: I make a change, push it, wait 20 minutes for the build, and then discover I&amp;#39;ve made a typo. Repeat that a few times and I&amp;#39;ve burned an entire afternoon. Someone at work introduced me to self-hosted GitHub runners, and I decided it was time to build my own.&lt;/p&gt;
&lt;p&gt;What I&amp;#39;m really after is building and deploying my NixOS infrastructure with faster iteration cycles. But I have to start somewhere. First, I&amp;#39;ll use the runner for my Terraform infra workflow: fmt, validate, plan, and apply. After that, I want to use it for the NixOS builds of each of my servers, which as of today take over 27 minutes each.&lt;/p&gt;
&lt;h2&gt;The Runner Problem&lt;/h2&gt;
&lt;p&gt;As it&amp;#39;s my first time building a custom runner, I wasn&amp;#39;t too familiar with what specs I&amp;#39;d need or what would be enough, so I went with the following:&lt;br&gt;Image: Ubuntu 22.04&lt;br&gt;Cores: 2&lt;br&gt;Memory: 4GB&lt;br&gt;Disk size: 30GB&lt;br&gt;Network: Tailscale&lt;/p&gt;
&lt;p&gt;I&amp;#39;m using Tailscale because I want to keep the runner inside my tailnet, where all my servers already live; exposing a self-hosted runner over the internet is risky, especially for a public repository. There are also services my workflows need that run inside the tailnet, like &lt;code&gt;HashiCorp Vault&lt;/code&gt;, which I use to store secrets for Terraform and other things.&lt;/p&gt;
&lt;p&gt;I already had Terraform managing my LXC containers on Proxmox, so spinning up a dedicated GitHub Actions runner was straightforward. I added a new module to my dev environment:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-hcl&quot;&gt;# GitHub Actions Runner (Ubuntu 22.04)
module &amp;quot;github_runner&amp;quot; {
  source = &amp;quot;../../modules/lxc&amp;quot;

  target_node = var.target_node
  password    = var.root_password
  hostname    = &amp;quot;github-runner&amp;quot;
  vmid        = 120
  ostemplate  = &amp;quot;local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst&amp;quot;
  cores       = 2
  memory      = 4096
  swap        = 1024
  disk_size   = &amp;quot;30G&amp;quot;
  storage     = var.rootfs_storage
  ssh_keys    = file(var.ssh_public_key_path)

  gateway         = var.gateway
  cidr_suffix     = var.cidr_suffix
  ip_base         = var.ip_base
  bridge          = var.bridge
  container_id    = 120
  proxmox_host_ip = var.proxmox_host_ip
  os_type         = &amp;quot;ubuntu&amp;quot;
  extra_tags      = [&amp;quot;github-runner&amp;quot;, &amp;quot;ci&amp;quot;]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After applying that, I SSH&amp;#39;d into the container and set it up as a GitHub Actions runner. GitHub provides the download links and token when you add a new runner in your repository settings. The setup process looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Create a runner user and switch to it
useradd -m -s /bin/bash runner
su - runner

# Download and extract the runner
mkdir actions-runner &amp;amp;&amp;amp; cd actions-runner
curl -o actions-runner-linux-x64-2.332.0.tar.gz -L \
  https://github.com/actions/runner/releases/download/v2.332.0/actions-runner-linux-x64-2.332.0.tar.gz
tar xzf ./actions-runner-linux-x64-2.332.0.tar.gz

# Configure the runner
./config.sh --url https://github.com/YOUR_USER/nixos-config --token YOUR_TOKEN

# Exit back to root and install as a service
exit
cd /home/runner/actions-runner
sudo ./svc.sh install runner
sudo ./svc.sh start
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The important part is installing it as a systemd service so it survives reboots and runs automatically.&lt;/p&gt;
&lt;p&gt;Next I installed Tailscale on the runner, along with a few other dependencies. Installing Tailscale is just &lt;code&gt;curl -fsSL https://tailscale.com/install.sh | sh&lt;/code&gt; followed by &lt;code&gt;tailscale up&lt;/code&gt;. For the other tools, I installed Node.js 20 (the GitHub runner needs this), Terraform, the Vault CLI, unzip, and the usual basics like curl and git.&lt;/p&gt;
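&lt;p&gt;For reference, a sketch of that tool installation on Ubuntu 22.04. Repo URLs and versions may have moved since I set this up, so treat it as an outline rather than copy-paste:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Basics
apt-get update &amp;amp;&amp;amp; apt-get install -y curl git unzip

# Node.js 20 (required by the GitHub Actions runner)
curl -fsSL https://deb.nodesource.com/setup_20.x | bash -
apt-get install -y nodejs

# Terraform and the Vault CLI from HashiCorp&amp;#39;s apt repo
curl -fsSL https://apt.releases.hashicorp.com/gpg | gpg --dearmor \
  -o /usr/share/keyrings/hashicorp.gpg
echo &amp;quot;deb [signed-by=/usr/share/keyrings/hashicorp.gpg] https://apt.releases.hashicorp.com jammy main&amp;quot; \
  &amp;gt; /etc/apt/sources.list.d/hashicorp.list
apt-get update &amp;amp;&amp;amp; apt-get install -y terraform vault
&lt;/code&gt;&lt;/pre&gt;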
&lt;p&gt;I didn&amp;#39;t want to store the secrets directly in GitHub. That works, but it means duplicating secrets across different systems, and I already had everything in Vault: Proxmox credentials, S3 credentials for Terraform state, SSH keys. Duplicating them just wasn&amp;#39;t necessary. The workflow starts by pulling everything from Vault:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;- name: Get S3 credentials from Vault
  id: vault
  run: |
    export VAULT_ADDR=&amp;quot;${{ secrets.VAULT_ADDR }}&amp;quot;
    export VAULT_TOKEN=&amp;quot;${{ secrets.VAULT_TOKEN }}&amp;quot;

    # Fetch S3 credentials from Vault
    S3_ACCESS_KEY=$(vault kv get -field=access_key_id secret/s3)
    S3_SECRET_KEY=$(vault kv get -field=secret_access_key secret/s3)

    # Fetch SSH public key from Vault
    SSH_PUB_KEY=$(vault kv get -field=ssh_public_keys secret/ssh)

    echo &amp;quot;::add-mask::$S3_ACCESS_KEY&amp;quot;
    echo &amp;quot;::add-mask::$S3_SECRET_KEY&amp;quot;
    echo &amp;quot;S3_ACCESS_KEY_ID=$S3_ACCESS_KEY&amp;quot; &amp;gt;&amp;gt; $GITHUB_ENV
    echo &amp;quot;S3_SECRET_ACCESS_KEY=$S3_SECRET_KEY&amp;quot; &amp;gt;&amp;gt; $GITHUB_ENV
    echo &amp;quot;SSH_PUBLIC_KEY=$SSH_PUB_KEY&amp;quot; &amp;gt;&amp;gt; $GITHUB_ENV
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The only secrets stored in GitHub are &lt;code&gt;VAULT_ADDR&lt;/code&gt; and &lt;code&gt;VAULT_TOKEN&lt;/code&gt;. Everything else comes from Vault at runtime. The &lt;code&gt;::add-mask::&lt;/code&gt; lines ensure those values don&amp;#39;t leak into logs.&lt;/p&gt;
&lt;h2&gt;The Full Pipeline&lt;/h2&gt;
&lt;p&gt;The complete workflow runs on pull requests and manual triggers. I specifically excluded pushes to main since I didn&amp;#39;t want plans running on production commits. The workflow runs in parallel for both dev and prod environments using a matrix strategy:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;jobs:
  terraform-validate:
    runs-on: self-hosted
    strategy:
      matrix:
        environment:
          - dev
          - prod
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Each environment goes through the same steps: format checking, initialization with the S3 backend, validation, and finally plan. The format check is set to &lt;code&gt;continue-on-error: true&lt;/code&gt; so it doesn&amp;#39;t block the rest of the pipeline, but the workflow fails at the end if formatting is off:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;- name: Terraform fmt check
  id: fmt
  run: terraform fmt -check -recursive
  working-directory: terraform/
  continue-on-error: true

# ... other steps ...

- name: Fail on fmt error
  if: steps.fmt.outcome == &amp;#39;failure&amp;#39;
  run: |
    echo &amp;quot;Terraform fmt check failed. Run &amp;#39;terraform fmt -recursive&amp;#39; to fix.&amp;quot;
    exit 1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The init step needs both S3 credentials for the backend and Vault credentials for the provider. Terraform reads S3 credentials from &lt;code&gt;AWS_ACCESS_KEY_ID&lt;/code&gt; and &lt;code&gt;AWS_SECRET_ACCESS_KEY&lt;/code&gt; environment variables, while Vault credentials are passed as Terraform variables:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;- name: Terraform init (${{ matrix.environment }})
  id: init
  run: terraform init -reconfigure
  working-directory: terraform/envs/${{ matrix.environment }}
  env:
    AWS_ACCESS_KEY_ID: ${{ env.S3_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ env.S3_SECRET_ACCESS_KEY }}

- name: Terraform plan (${{ matrix.environment }})
  id: plan
  run: terraform plan -no-color
  working-directory: terraform/envs/${{ matrix.environment }}
  env:
    AWS_ACCESS_KEY_ID: ${{ env.S3_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ env.S3_SECRET_ACCESS_KEY }}
    TF_VAR_vault_address: ${{ secrets.VAULT_ADDR }}
    TF_VAR_vault_token: ${{ secrets.VAULT_TOKEN }}
  continue-on-error: true
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Right now this runs fmt, validate, and plan for Terraform changes and shows what would happen if I applied them. Great, but the real goal for me is speeding up my NixOS builds, which take around 20 minutes; I want to get that under 5 minutes using this same self-hosted setup.&lt;/p&gt;
&lt;p&gt;For now though, I have a working Terraform CI pipeline that doesn&amp;#39;t leak secrets all over GitHub and runs on hardware I control. That&amp;#39;s a solid foundation to build on.&lt;/p&gt;
&lt;p&gt;Looking forward to what&amp;#39;s next....Happy Learning.&lt;br&gt;Have a blessssssssssssssssed day.. :)&lt;/p&gt;
</content:encoded></item><item><title>NixOS LXC on Incus</title><link>https://blog.thein3rovert.dev/posts/nixos/nixos-lxc-on-incus/</link><guid isPermaLink="true">https://blog.thein3rovert.dev/posts/nixos/nixos-lxc-on-incus/</guid><description>Building a custom CI/CD runner with NixOS and Incus PART 1</description><pubDate>Sat, 21 Feb 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;./images/incus-image.png&quot; alt=&quot;alt text&quot;&gt;&lt;/p&gt;
&lt;p&gt;A few days ago I set up my own Git instance, Forgejo. As a certified tinkerer, it was about time I explored that space, but mainly because I wanted to set up my own custom runner for my workflows, and also because I thought it would be fun to figure out. Beyond that, running my own runner means I have full control over the build environment without relying on third-party services.&lt;/p&gt;
&lt;p&gt;I&amp;#39;ve also been experimenting with Incus as my virtualization method, and I thought... hmmm, it would be cool to run my custom runner on a minimal Incus LXC container. That&amp;#39;s when I decided to create a minimal NixOS LXC image that I could spin up quickly whenever I needed a new runner. I didn&amp;#39;t want the bloat that comes with minimal ISOs, so I decided to build my own custom image with exactly what I needed from the start.&lt;/p&gt;
&lt;h2&gt;Creating the Flake Structure&lt;/h2&gt;
&lt;p&gt;Before we build an image, we need to think about a few things..why we want the image, what it&amp;#39;ll be used for, how small it should be. In my case, I wanted an image to serve as a custom runner for my Forgejo instance.&lt;/p&gt;
&lt;p&gt;The entry point for managing a NixOS system is the &lt;code&gt;flake.nix&lt;/code&gt;, so we need a few inputs to get started.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;inputs = {
    nixpkgs.url = &amp;quot;github:nixos/nixpkgs?ref=nixos-unstable&amp;quot;;
    colmena.url = &amp;quot;github:zhaofengli/colmena&amp;quot;;
    disko = {
      url = &amp;quot;github:nix-community/disko&amp;quot;;
      inputs.nixpkgs.follows = &amp;quot;nixpkgs&amp;quot;;
    };
    nixos-generators = {
      url = &amp;quot;github:nix-community/nixos-generators&amp;quot;;
      inputs.nixpkgs.follows = &amp;quot;nixpkgs&amp;quot;;
    };
  };
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;nixpkgs&lt;/code&gt; input is the main NixOS package repository containing all packages and NixOS modules. &lt;code&gt;disko&lt;/code&gt; handles declarative disk partitioning and formatting right from the Nix config. &lt;code&gt;nixos-generators&lt;/code&gt; lets us generate different image formats like LXC, ISO, or VM images from a NixOS configuration. &lt;code&gt;colmena&lt;/code&gt; is what I&amp;#39;ll use later to deploy NixOS configs to remote servers.&lt;/p&gt;
&lt;p&gt;Then we need to add the NixOS configuration that takes in the configuration to be applied to the image. This configuration lives in &lt;code&gt;configuration.nix&lt;/code&gt;. I won&amp;#39;t include the full config here since it&amp;#39;s quite long, but there&amp;#39;s a link to the repo at the end if you want to make your own.&lt;/p&gt;
&lt;p&gt;The NixOS configuration uses nixos-generators to build a bootable LXC container image (rootfs plus metadata) from your &lt;code&gt;configuration.nix&lt;/code&gt;, which can then be imported into Incus.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;  packages.x86_64-linux = {
	lxc = nixos-generators.nixosGenerate {
	  system = &amp;quot;x86_64-linux&amp;quot;;
	  pkgs = nixpkgs.legacyPackages.x86_64-linux;
	  modules = [
		./configuration.nix
	  ];
	  format = &amp;quot;lxc&amp;quot;;
	};
	lxc-meta = nixos-generators.nixosGenerate {
	  system = &amp;quot;x86_64-linux&amp;quot;;
	  pkgs = nixpkgs.legacyPackages.x86_64-linux;
	  modules = [ ./configuration.nix ];
	  format = &amp;quot;lxc-metadata&amp;quot;;
	};
  };
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The config is reusable, so make sure to check the repo...just copy the &lt;code&gt;flake.nix&lt;/code&gt; and &lt;code&gt;configuration.nix&lt;/code&gt;, and make sure to change the &lt;code&gt;hostname&lt;/code&gt; and &lt;code&gt;username&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;  nixosConfigurations.&amp;lt;HOSTNAME:CHANGE ME&amp;gt; = nixpkgs.lib.nixosSystem {
	system = &amp;quot;x86_64-linux&amp;quot;;
	modules = [
	  disko.nixosModules.disko
	  ./configuration.nix
	];
  };
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;  users.users.&amp;lt;USERNAME:CHANGE ME&amp;gt; = {
    isNormalUser = true;
    extraGroups = [
      &amp;quot;networkmanager&amp;quot;
      &amp;quot;wheel&amp;quot;
    ];
    openssh.authorizedKeys.keys = [ &amp;quot;ENTER-SSH-KEY&amp;quot; ];
  };
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Building the Image&lt;/h2&gt;
&lt;p&gt;Now we build the image with &lt;code&gt;nix build .#lxc&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;After that, we send the image over to the server and run the incus command to add it to the Incus image list. Incus didn&amp;#39;t accept the nixos-generators image directly, which was frustrating to figure out.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ incus image import nixos-image-lxc-proxmox-26.05.20251205.f61125a-x86_64-linux.tar.xz \
    nixos-image-lxc-26.05.20251205.f61125a-x86_64-linux.tar.xz --alias nixos-runner
Error: Metadata tarball is missing metadata.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I think I figured out the issue. I was using the nixos-generator &lt;code&gt;proxmox-lxc&lt;/code&gt; format instead of just the plain &lt;code&gt;lxc&lt;/code&gt; format, so the format was wrong. The issue I likely hit before was using &lt;code&gt;proxmox-lxc&lt;/code&gt; format alone that generates a Proxmox-specific tarball without the &lt;code&gt;metadata.yaml&lt;/code&gt; that Incus requires. The &lt;code&gt;lxc-metadata&lt;/code&gt; format generates exactly that metadata tarball.&lt;/p&gt;
&lt;p&gt;We need to create separate folders for both the metadata and the LXC image since they have the same name.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ ls
result-meta
$ mkdir result-lxc
$ ls
result-lxc  result-meta
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For the Incus import command, the metadata tarball comes first, then the image:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;incus image import \
  result-meta/*.tar.xz \
  result-lxc/*.tar.xz \
  --alias nixos-runner
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Done, the image has been imported.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ incus image import \
    result-meta/*.tar.xz \
    result-lxc/*.tar.xz \
    --alias nixos-runner
Image imported with fingerprint: ed79d35efadf77f2fdf046ed4b76a9692cf160c3efc12a2b9494e2d2689e55a6
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now let me verify with &lt;code&gt;incus image list&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Below is a pretty minimal image that will serve my needs, and many more to come.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;incus image list
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;ALIAS         FINGERPRINT    PUBLIC  DESCRIPTION                                     ARCHITECTURE  TYPE       SIZE      UPLOAD DATE
nixos-runner  ed79d35efadf  no      NixOS Yarara lxc-26.05.20251205.f61125a       x86_64        CONTAINER  213.74MiB  2026/02/21
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now let&amp;#39;s launch a new LXC with the added image.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;incus launch nixos-runner nixos-runner
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And there we go...it works like it&amp;#39;s supposed to. Now my aim is to use Nix as my own custom CI/CD runner.&lt;/p&gt;
&lt;h2&gt;Managing Deployment with Colmena&lt;/h2&gt;
&lt;p&gt;I had a bit of a problem after creation. When I entered the new nixos-runner container, I couldn&amp;#39;t connect to it or ping it from the main management server I planned to manage it from. Here&amp;#39;s the layout: my main server (NixOS) and another server, Marcus (also NixOS), sit on the same 10.10.10.x network, and on Marcus, Incus runs the nixos-lxc container. But the container wasn&amp;#39;t getting a 10.10.10.x address from Marcus; instead it got one on a different network that only Marcus could reach, not my main server.&lt;/p&gt;
&lt;p&gt;To solve this, I used SSH &lt;code&gt;ProxyJump&lt;/code&gt;. ProxyJump means SSHing through an intermediate server to reach a destination that isn&amp;#39;t directly accessible. My main server can&amp;#39;t reach 10.127.42.183 directly (it&amp;#39;s a private network on Marcus), so it SSHs into Marcus first, and Marcus connects to the container on its behalf. To me, it feels like one direct connection.&lt;/p&gt;
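&lt;p&gt;In practice that&amp;#39;s one flag, or a couple of lines in &lt;code&gt;~/.ssh/config&lt;/code&gt; so both plain &lt;code&gt;ssh&lt;/code&gt; and deployment tools pick it up. A sketch using the addresses from my setup above; yours will differ:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# One-off: jump through Marcus to reach the container
ssh -J thein3rovert@marcus thein3rovert@10.127.42.183

# Persistent: append a Host entry to ~/.ssh/config
cat &amp;gt;&amp;gt; ~/.ssh/config &amp;lt;&amp;lt;&amp;#39;EOF&amp;#39;
Host nixos-runner
  HostName 10.127.42.183
  User thein3rovert
  ProxyJump thein3rovert@marcus
EOF
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With the &lt;code&gt;Host nixos-runner&lt;/code&gt; alias in place, anything that shells out to SSH can use that name directly.&lt;/p&gt;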
&lt;p&gt;One issue with the image I built is that the hostname I set for the image when I built it didn&amp;#39;t happen to apply on the image after it was built, so I guess I&amp;#39;d have to figure out why that is. Other settings applied, but only the hostname didn&amp;#39;t.&lt;/p&gt;
&lt;p&gt;Apart from that, everything else looks fine. The next step is to apply a new config using Colmena. In my &lt;code&gt;flake.nix&lt;/code&gt;, I added a new Colmena deployment.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;	# ---- Node: Runner [ lxc 02 ] ----
	nixos-runner = {
	  deployment = {
		targetHost = &amp;quot;nixos-runner&amp;quot;;
		targetPort = 22;
		targetUser = &amp;quot;thein3rovert&amp;quot;;
		buildOnTarget = true;
		tags = [
		  &amp;quot;prod&amp;quot;
		  &amp;quot;runner&amp;quot;
		];
	  };
	  nixpkgs.system = &amp;quot;x86_64-linux&amp;quot;;
	  imports = [
		./hosts/runner
		agenix.nixosModules.default
		self.nixosModules.users
		self.nixosModules.containers
		self.nixosModules.nixosOs
		self.nixosModules.snippets
	  ];
	};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Upon applying my new config, I ran into a sandboxing issue. I think I&amp;#39;d hit this before, a while back, while building an image for Proxmox.&lt;/p&gt;
&lt;p&gt;Sandboxing isolates each build in its own restricted environment so it can&amp;#39;t access the internet or files outside its declared dependencies, ensuring reproducible builds. LXC containers don&amp;#39;t have the kernel namespaces required to create that isolation, hence the error.&lt;/p&gt;
&lt;p&gt;To disable that, I added &lt;code&gt;nix.settings.sandbox = false&lt;/code&gt; to my config. But my build was still failing, so I needed to do it manually. Since &lt;code&gt;/etc/nix/nix.conf&lt;/code&gt; is read-only on NixOS, I edited the Nix config under root&amp;#39;s home directory instead, after which my build applied successfully.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;mkdir -p /root/.config/nix
echo &amp;quot;sandbox = false&amp;quot; &amp;gt;&amp;gt; /root/.config/nix/nix.conf
&lt;/code&gt;&lt;/pre&gt;
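&lt;p&gt;For reference, the declarative equivalent can live in the host config so the setting survives rebuilds (a sketch, assuming the standard &lt;code&gt;nix.settings&lt;/code&gt; option):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;# hosts/runner: LXC lacks the kernel namespaces Nix sandboxing needs
{
  nix.settings.sandbox = false;
}
&lt;/code&gt;&lt;/pre&gt;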
&lt;p&gt;Yay...welcome, nixos-runner!&lt;/p&gt;
</content:encoded></item><item><title>Agent as Code</title><link>https://blog.thein3rovert.dev/posts/nixos/agent-as-code/</link><guid isPermaLink="true">https://blog.thein3rovert.dev/posts/nixos/agent-as-code/</guid><description>Centralizing My AI Agent Workflow</description><pubDate>Wed, 18 Feb 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;./images/agent-as-code.png&quot; alt=&quot;alt text&quot;&gt;&lt;/p&gt;
&lt;p&gt;I used to not be a big fan of AI. I used to also not be a fan of &amp;quot;agentic workflows&amp;quot; as a concept. But I&amp;#39;d be lying if I said AI hasn&amp;#39;t been genuinely helpful in ways I didn&amp;#39;t expect.&lt;/p&gt;
&lt;p&gt;Here&amp;#39;s the thing about me: I lose focus when I have to repeat the same process over and over. Every. Single. Time. I&amp;#39;ll be working on a project, hit a repetitive task, do it manually once or twice, and then just... quit. I don&amp;#39;t want to do it again. So the project dies.&lt;/p&gt;
&lt;p&gt;This has happened more times than I can count, but I&amp;#39;ve been noticing a significant improvement over time since I started using AI agents for some of my daily tasks.&lt;/p&gt;
&lt;h2&gt;How AI Found Me&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;First: the boring stuff.&lt;/strong&gt; I don&amp;#39;t want to automate everything...I mean...I actually enjoy certain manual tasks. But the repetitive stuff that drains me? The &amp;quot;I know how to do this, I just don&amp;#39;t want to do it again&amp;quot; stuff? AI handles that now. It&amp;#39;s not glamorous, but it&amp;#39;s real.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Second: my journals.&lt;/strong&gt; I write every day. Seven days a week, 2000-3000 words on what went wrong, what went right, ideas I had, things I learned, weaknesses I noticed, personal projects I worked on. Writing daily has become a habit I take seriously.&lt;/p&gt;
&lt;p&gt;But here&amp;#39;s the problem: what good are all those notes if they just sit there? I needed them to actually improve my habits, my work, my life. So I started using AI to process all that raw journal material into actionable insights. That alone has been worth it. I won&amp;#39;t go into details on how I created an automated workflow that acts as my personal coach and mentor in this post, but I will share how I&amp;#39;ve been able to manage my agentic workflow.&lt;/p&gt;
&lt;h2&gt;The Real Problem: Too Many Tools&lt;/h2&gt;
&lt;p&gt;There are so many AI tools now. Claude Code. Gemini CLI. Opencode. Copilot. Every week something new drops. And each one has its own context folder..&lt;code&gt;.claude/&lt;/code&gt;, &lt;code&gt;.copilot/&lt;/code&gt;, &lt;code&gt;.gemini/&lt;/code&gt;, etc. Rules for this tool, references for that one, commands scattered everywhere.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rwxr-xr-x    - thein3rovert 19 Feb 21:59  .claude
.rw------- 2.8k thein3rovert 19 Feb 21:59  .claude.json
.rw-------  92k thein3rovert  8 Feb 19:20  .claude.json.backup
.rw------- 2.8k thein3rovert 19 Feb 21:59  .claude.json.backup.1771538342266
.rw------- 2.8k thein3rovert 19 Feb 21:59  .claude.json.backup.1771538342270
.rw------- 2.8k thein3rovert 19 Feb 21:59  .claude.json.backup.1771538342475
.rw------- 2.8k thein3rovert 19 Feb 21:59  .claude.json.backup.1771538342941
.rw------- 2.8k thein3rovert 19 Feb 21:59  .claude.json.backup.1771538343914
drwxr-xr-x    - thein3rovert 10 Dec  2025  .copilot
drwxr-xr-x    - thein3rovert 10 Dec  2025  .opencode
rwxr-xr-x    - thein3rovert 19 Feb 21:59  .gemini-cli
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I&amp;#39;d be working on a task with Claude Code, build up a nice set of rules and references for it, then switch to try another tool because it has a free model or whatever. And suddenly I had to recreate all those documents. My agent configurations were scattered across my system. No version control. No single source of truth. Just chaos.&lt;/p&gt;
&lt;p&gt;That&amp;#39;s where the productivity gain disappears. Not from using AI...but from managing all the noise around it.&lt;/p&gt;
&lt;p&gt;Not just that...what if you want to manage permissions? Each AI agent CLI has an agent.json or some sort of JSON file where you can set the default model for a specific agent, its name, permissions on what the agent can access, the commands it can run, and more. Having to cd into each &amp;quot;.agentname&amp;quot; directory is a productivity killer, and it makes us forget to restrict our agents to certain permissions and commands...until one day an agent hallucinates and runs &lt;code&gt;rm -rf /&lt;/code&gt; on us. What I&amp;#39;m driving at is that all of this can be managed from one centralized project/dir, regardless of where each agent&amp;#39;s main dir lives, with the help of Nix and home-manager. You might be thinking: I don&amp;#39;t know Nix, what even is Nix? Well, you can also do this with bash, which I&amp;#39;m sure every engineer is familiar with. I won&amp;#39;t be covering the bash approach here, but reading how I did it with Nix should give you ideas on how to do it with bash.&lt;/p&gt;
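&lt;p&gt;To make that concrete, here&amp;#39;s a hypothetical sketch of the kind of per-agent JSON I mean... the field names are illustrative, not any specific tool&amp;#39;s schema, but most agent CLIs expose something shaped like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
  &amp;quot;model&amp;quot;: &amp;quot;default-model-name&amp;quot;,
  &amp;quot;permissions&amp;quot;: {
    &amp;quot;edit&amp;quot;: &amp;quot;ask&amp;quot;,
    &amp;quot;bash&amp;quot;: {
      &amp;quot;rm -rf *&amp;quot;: &amp;quot;deny&amp;quot;,
      &amp;quot;git push&amp;quot;: &amp;quot;ask&amp;quot;
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Files like this, one per tool, are exactly what home-manager can generate and version for you.&lt;/p&gt;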
&lt;h2&gt;Why Nix? Why Home-Manager?&lt;/h2&gt;
&lt;p&gt;You might wonder: why not just use a shared folder and symlinks? Why bring Nix into this?&lt;/p&gt;
&lt;p&gt;Fair question. Here&amp;#39;s my answer:&lt;/p&gt;
&lt;p&gt;I already manage my entire dotfiles and system configuration with Nix. My &lt;a href=&quot;https://github.com/thein3rovert/nixos-config&quot;&gt;nixos-config&lt;/a&gt; handles everything..my terminals, my editors, my development environments, my dotfiles. When I rebuild, my entire setup is reproducible from a single &lt;code&gt;git clone &amp;amp;&amp;amp; nixos-rebuild switch&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;So when I thought about managing my AI agent configs, the answer was obvious: do it the same way I do everything else. Home-manager already handles XDG config directories. My agent resources are just more files. Why would I maintain them separately?&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;xdg.configFile = {
  &amp;quot;opencode/commands&amp;quot; = { source = &amp;quot;${inputs.polis}/commands&amp;quot;; recursive = true; };
  &amp;quot;opencode/context&amp;quot; = { source = &amp;quot;${inputs.polis}/context&amp;quot;; recursive = true; };
  &amp;quot;opencode/prompts&amp;quot; = { source = &amp;quot;${inputs.polis}/prompts&amp;quot;; recursive = true; };
  &amp;quot;opencode/skills&amp;quot; = { source = &amp;quot;${inputs.polis}/skills&amp;quot;; recursive = true; };
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;One place. One repo. One rebuild. Everything stays in sync.&lt;/p&gt;
&lt;p&gt;And when I get a new machine? The flake pulls down, I run &lt;code&gt;home-manager switch&lt;/code&gt;, and all my agent configs, skills, and context are there. No manual setup. No &amp;quot;I forgot to copy that folder&amp;quot; moments.&lt;/p&gt;
&lt;h2&gt;The Agents: Meet Arkadia&lt;/h2&gt;
&lt;p&gt;I&amp;#39;ve built out agents with distinct personalities. The main ones:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Arkadia&lt;/strong&gt;: Named after the sanctuary built from the Alpha Station in &lt;em&gt;The 100&lt;/em&gt; (yes, I&amp;#39;m a fan). It&amp;#39;s my personal assistant in &amp;quot;Plan Mode.&amp;quot; Read-only analysis, planning, guidance. It knows my context: software engineer, PARA methodology, early mornings for deep work, evening daily reviews. It routes requests to the right skills and stays out of the way.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Arkadia-Forge&lt;/strong&gt;: Same assistant but in &amp;quot;Worker Mode.&amp;quot; Full write access, but with safety prompts for destructive operations. This is the one I use when I actually want to get things done, not just plan them.&lt;/p&gt;
&lt;p&gt;Then there are others..Prometheus for orchestration, Hephaestus for building, Sisyphus for running commands (heavily restricted for safety), Librarian and Explore for research. These are built-in agents that came with the opencode installation, but I barely use them because they aren&amp;#39;t really tailored to my personal workflow and needs.&lt;/p&gt;
&lt;p&gt;I would always suggest creating agents that suit your needs. You alone know what you do on a daily basis to accomplish a task and how you achieve it to the best of your ability, so why not train your agent to do the same and refine it as much as you want until you reach the desired state?&lt;/p&gt;
&lt;h2&gt;The Skill System&lt;/h2&gt;
&lt;p&gt;The skills are what make this actually useful...don&amp;#39;t confuse skills with agents. Each skill is a small, focused module that teaches my agent how to handle specific types of work. The task-management skill knows my PARA methodology and Obsidian setup, understanding where notes should go and how to format them. The reflection skill processes my daily journal entries into actual insights I can use. The communications skill helps with drafting emails and follow-ups. The research skill handles investigation workflows, and the knowledge-management skill keeps my notes organized and searchable.&lt;/p&gt;
&lt;p&gt;Skills to me are what I use to train my agent on how to go about doing a specific task. Say, for example, I&amp;#39;ve just finished a planning session with my agent &lt;code&gt;Arkadia&lt;/code&gt; on how to go about building a CI/CD pipeline.&lt;br&gt;&lt;img src=&quot;./images/agent-as-code-arkadia.png&quot; alt=&quot;alt text&quot;&gt;&lt;br&gt;The conversation goes on and on, ideas spill out alongside my coffee, and we finally arrive at a destination. But then I don&amp;#39;t want to just close the session or tell it to implement. I want a summary of our discussion: what I suggested, what we decided not to do, problems we had, solutions we suggested, and what we finally went with, followed by a list of todos I can follow step by step. Later, when I need &lt;code&gt;Arkadia Forge&lt;/code&gt; for the implementation and execution, I can just point it straight at this note, saving me the time of prompting again, and it gets the full context of what I want to do.&lt;/p&gt;
&lt;p&gt;Arkadia will use the task-management skill to perform the actions, and that skill holds all my note structure: it knows where to save the note, how to format it, where references and similar notes live, and more.&lt;br&gt;&lt;img src=&quot;./images/aac-quick-chat.png&quot; alt=&quot;alt text&quot;&gt;&lt;br&gt;Each skill has a SKILL.md as the entry point, with optional scripts/, references/, and assets/ directories.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;skills/
├── agent-development/
├── brainstorming/
├── doc-translator/
├── excalidraw/
│   ├── references/
│   └── SKILL.md
├── frontend-design/
│   └── SKILL.md
├── mem0-memory/
├── memory/
│   ├── references/
│   └── SKILL.md
├── obsidian/
│   └── SKILL.md
├── outline/
├── outlook/
├── pdf/
└── reflection/
&lt;/code&gt;&lt;/pre&gt;
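&lt;p&gt;A minimal &lt;code&gt;SKILL.md&lt;/code&gt; might look something like this (the layout is illustrative of my setup, not a fixed schema):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;---
name: task-management
description: File notes and todos following my PARA structure in Obsidian
---

## When to use
Summaries, todos, and follow-ups at the end of a planning session.

## How
1. Summarise decisions made, ideas rejected, and open problems.
2. Save the note to the matching PARA folder (see references/).
3. Append a step-by-step todo list the worker agent can follow.
&lt;/code&gt;&lt;/pre&gt;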
&lt;p&gt;When Arkadia receives a request, it routes to the appropriate skill based on intent.&lt;/p&gt;
&lt;h2&gt;What This Gives Me&lt;/h2&gt;
&lt;p&gt;This setup gives me a single source of truth for everything. When I edit a skill, every agent can use it immediately without copying. Changes are tracked in git, so I can review what changed, revert if needed, and see my thinking over time. When I get a new machine, one flake update pulls down all my agent configs, skills, and context. The knowledge is decoupled from any specific tool, so I&amp;#39;m not locked in.&lt;/p&gt;
&lt;h2&gt;It&amp;#39;s Not Perfect!!! Runnnnn!!!!&lt;/h2&gt;
&lt;p&gt;I&amp;#39;ll be honest...this setup isn&amp;#39;t perfect.&lt;/p&gt;
&lt;p&gt;Agent configurations are embedded into opencode&amp;#39;s config.json at Nix evaluation time. That means when I change an agent definition, I need to rebuild for it to take effect. Skills and commands are symlinked, so changes appear immediately, but it&amp;#39;s still a compromise.&lt;/p&gt;
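&lt;p&gt;The embedding looks roughly like this in my home-manager config (a sketch... the agent attributes are illustrative, not my full definitions):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;# Serialized to JSON at Nix evaluation time, so edits need a rebuild
xdg.configFile.&amp;quot;opencode/config.json&amp;quot;.text = builtins.toJSON {
  agent = {
    arkadia = { mode = &amp;quot;plan&amp;quot;; };
    arkadia-forge = { mode = &amp;quot;build&amp;quot;; };
  };
};
&lt;/code&gt;&lt;/pre&gt;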
&lt;p&gt;But here&amp;#39;s what I&amp;#39;ve learned: treating my AI workflows as code..modular, versioned, declarative..has made them more maintainable. When I improve a skill or write a new command, I do it once and every agent immediately has access, so I don&amp;#39;t have to copy prompts between tools or wonder where I put that reference.&lt;/p&gt;
&lt;h2&gt;The Honest Take&lt;/h2&gt;
&lt;p&gt;I&amp;#39;m not sold on the AI hype. I don&amp;#39;t think AI is going to take over the world or whatever. But I&amp;#39;m practical about what helps.&lt;/p&gt;
&lt;p&gt;This helps.&lt;/p&gt;
&lt;p&gt;Not because it&amp;#39;s AI, but because it solves a real problem I had: too many tools, too much scattered context, too much friction. And at the end of the day, that&amp;#39;s what good systems do: they reduce friction so you can focus on what actually matters.&lt;/p&gt;
&lt;p&gt;May we meet again.&lt;/p&gt;
</content:encoded></item><item><title>LTSP network on Incus</title><link>https://blog.thein3rovert.dev/posts/nixos/setting-up-an-ltsp-network-with-lxd/</link><guid isPermaLink="true">https://blog.thein3rovert.dev/posts/nixos/setting-up-an-ltsp-network-with-lxd/</guid><description>Setting up ltsp network on incus lxd journey</description><pubDate>Sat, 10 Jan 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Today I woke up quite a bit earlier than usual and couldn&amp;#39;t go back to sleep, so I decided to work on something with the little time I had. I&amp;#39;ve been putting off this project for a while now, pretending I don&amp;#39;t have the time, haha.&lt;/p&gt;
&lt;p&gt;The project involves emulating a complete LTSP network. LTSP stands for Linux Terminal Server Project... it makes maintaining tens or hundreds of diskless clients as easy as maintaining a single PC. The terminals (client machines) boot over the network without needing any permanent storage attached.&lt;/p&gt;
&lt;p&gt;In this setup, both the ltsp_server and ltsp_client run as virtual machines on a physical machine with LXD installed. The main architecture is just two computers... one for the server and one client. Later we could add any number of clients easily.&lt;/p&gt;
&lt;p&gt;The setup aims to do two things. First, it roughly matches the physical network topology where the server has two connections... one to the internet and one to a switch. Second, it helps us get around an issue that we might run into when getting an LTSP network running with LXD.&lt;/p&gt;
&lt;p&gt;For this setup, you need Incus or LXD installed on your system. I went with LXD because it gives you the ability to manage both virtual machines and system containers. We could run a single application in a container with something like Docker, but we&amp;#39;re not doing that because we need to run an entire Linux operating system... which is possible through the kernel&amp;#39;s LXC container system.&lt;/p&gt;
&lt;p&gt;I already had Incus installed on my system since I use it to spin up LXC containers for some of my services like Garage. I wanted that isolation, so I didn&amp;#39;t need to worry about installing anything new on my server.&lt;/p&gt;
&lt;p&gt;The setup works like this... the LTSP client is set up as a virtual machine booting via iPXE, and the LTSP server itself runs as an LXC container. iPXE is basically a network boot firmware that lets machines boot from the network instead of a local disk.&lt;/p&gt;
&lt;p&gt;The big benefit of this setup is that it lets you have your own personal development machine without having to install something like Proxmox on bare metal or set up a dedicated server machine just for personal use.&lt;/p&gt;
&lt;h2&gt;The Virtual Network&lt;/h2&gt;
&lt;p&gt;For the virtual network, we need two bridge networks. The first one is called lxdbr0 and the second is lxdbr1. I already had the first bridge network since it came by default when I installed Incus LXD, so I only needed to create the second one.&lt;/p&gt;
&lt;p&gt;You can think of these bridge networks as physical switches... they basically allow containers to communicate with anything that&amp;#39;s attached to the bridge. The client and the server will communicate through the second bridge, lxdbr1.&lt;/p&gt;
&lt;p&gt;One really nice thing about LXD is that it provides containers with IP addresses using DHCP and gives them internet access through NAT. This means our first network lxdbr0 acts as the internet router in the physical network topology.&lt;/p&gt;
&lt;p&gt;Since I already had the first bridge network, I just had to create the second one. This bridge network is what we&amp;#39;ll use to connect all the instances we create. lxdbr1 uses the standard LTSP subnet, 192.168.67.0/24, with the bridge itself at 192.168.67.1. I disabled NAT, DHCP, and IPv6 because LTSP will be providing a PXE-enabled DHCP server for the client to network boot from.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;incus network create lxdbr1 \
  ipv4.address=192.168.67.1/24 \
  ipv4.nat=false \
  ipv4.dhcp=false \
  ipv6.address=none
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After creating lxdbr1, I confirmed the network was created using this command:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;incus network list

RESULT
+----------+----------+---------+-------------------+---------------------------+---------------------------------+---------+---------+
| lxdbr1   | bridge   | YES     | 192.168.67.1/24   | none                      | Custom: lxd tutorial bridge     | 2       | CREATED |
+----------+----------+---------+-------------------+---------------------------+---------------------------------+---------+---------+
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Perfect... now I have both bridge networks I need.&lt;/p&gt;
&lt;p&gt;The next step was to create the server container. For the image, I went with Linux Mint since it&amp;#39;s lightweight.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;incus init images:mint/xia ltsp-server
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you want to view the list of available container images, you can use this command:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;incus image list images: | grep mint
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Security Configuration&lt;/h2&gt;
&lt;p&gt;In order to get LTSP working in a server container, there are some LXD security settings we need to relax. This isn&amp;#39;t ideal for production, but since this is for a development environment, it&amp;#39;s fine. We need to enable security nesting and security privileged mode. The reason is that LTSP wasn&amp;#39;t designed as a confined workload... it assumes root-level access. Without relaxing these settings, it won&amp;#39;t function properly.&lt;/p&gt;
&lt;p&gt;When we set security.nesting to true, this allows the container to create nested namespaces and perform operations that look like container-inside-container behavior. Setting security.privileged to true removes the user namespace mapping, giving the container near-host privileges. Without this, mounting filesystems, exporting NFS roots, or accessing kernel features needed for PXE and client boot often fail in subtle ways.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Enable nesting and privileged mode
incus config set ltsp-server security.nesting true
incus config set ltsp-server security.privileged true
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;LTSP also needs access to loop devices for mounting images. This required three steps:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Set cgroup permissions for loop devices
incus config set ltsp-server raw.lxc &amp;quot;lxc.cgroup2.devices.allow = b 7:* rwm
lxc.cgroup2.devices.allow = c 10:237 rwm&amp;quot;

# Add loop-control device
incus config device add ltsp-server loop-control unix-char \
  path=/dev/loop-control source=/dev/loop-control

# Count and add all loop devices from host
HOST_LOOP_COUNT=$(ls /dev/loop[0-9]* 2&amp;gt;/dev/null | wc -l)
echo &amp;quot;Found $HOST_LOOP_COUNT loop devices on host&amp;quot;

for i in $(seq 0 $((HOST_LOOP_COUNT - 1))); do
  if [ -e /dev/loop$i ]; then
    incus config device add ltsp-server loop$i unix-block \
      path=/dev/loop$i source=/dev/loop$i
  fi
done
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Optionally, you can bump up the resources allocated to the server. I did this so that later, when building a compressed image of the server&amp;#39;s filesystem, it runs quickly. Adjust these as you see fit:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;incus config set ltsp-server limits.cpu=5
incus config set ltsp-server limits.memory=3GiB
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Server Network Configuration&lt;/h2&gt;
&lt;p&gt;Now for the server network configuration, we need to have two virtual network interface cards. One will be attached to lxdbr0, which will take an IP address from DHCP since it&amp;#39;s enabled by default. The second interface card will be attached to lxdbr1, which we already disabled DHCP for. We&amp;#39;ll also need to attach a static IP address to the second interface card. Since LTSP handles things differently, we need to use the distribution&amp;#39;s standard method to assign an IP address from within the container itself.&lt;/p&gt;
&lt;p&gt;The first interface card is already connected to lxdbr0 by default, so I only needed to configure the second one manually.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;incus config device add ltsp-server eth1 nic \
  network=lxdbr1 \
  name=eth1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After connecting the interface, we need to start the server and confirm that the server has an IP address from DHCP on eth0 and no IPv4 on eth1.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;incus start ltsp-server

incus exec ltsp-server -- ip --brief address
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And as you can see below, we have an IP address on eth0 and nothing on eth1, exactly as expected.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;lo     UNKNOWN  127.0.0.1/8
eth0   UP       10.135.108.178/24 ... &amp;lt;----- comes from LXD&amp;#39;s DHCP
eth1   UP        ... &amp;lt;--- no IPv4 address
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To wrap up the network configuration, we need to create a netplan file for eth1 with a static IP so it persists across reboots.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;incus exec ltsp-server -- bash -c &amp;#39;cat &amp;gt; /etc/netplan/60-ltsp-static.yaml &amp;lt;&amp;lt; EOF
network:
  version: 2
  ethernets:
    eth1:
      addresses:
        - 192.168.67.2/24
EOF&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then fix the permissions so netplan doesn&amp;#39;t complain:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;incus exec ltsp-server -- sh -c &amp;#39;chmod 600 /etc/netplan/*.yaml&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And finally, apply the changes:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;incus exec ltsp-server -- netplan apply
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I verified again that the changes took effect:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;incus exec ltsp-server -- ip --br -4 a
lo      UNKNOWN  127.0.0.1/8
eth0    UP       10.135.108.178/24
eth1    UP       192.168.67.2/24 # &amp;lt;-- now set
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;So that&amp;#39;s all about setting up the LTSP server. I&amp;#39;ll make a new post on setting up the client. Hope you enjoyed it.&lt;/p&gt;
</content:encoded></item><item><title>Backups the hard way</title><link>https://blog.thein3rovert.dev/posts/nixos/backups-the-hard-way/</link><guid isPermaLink="true">https://blog.thein3rovert.dev/posts/nixos/backups-the-hard-way/</guid><description>My journey to taking backup seriously.</description><pubDate>Thu, 08 Jan 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;./images/backup-the-hard-way.png&quot; alt=&quot;alt text&quot;&gt;&lt;/p&gt;
&lt;h1&gt;Learning About Backups the Hard Way&lt;/h1&gt;
&lt;p&gt;Backups were never really something that I took seriously. I don&amp;#39;t know why... maybe it&amp;#39;s because I&amp;#39;d never really experienced a system failure before, or maybe because when I did, I eventually didn&amp;#39;t end up losing that much data. I just relied on my system as long as it looked clean and things were running pretty well. I assumed that no unfortunate event would cause my laptop to get destroyed or something. That mindset changed pretty quickly.&lt;/p&gt;
&lt;p&gt;About six months ago, I had an unfortunate event with my homelab. I got this new mini PC that was just about seven months old at the time. This PC was what I used as my management node... I used it to manage all my other machines and nodes on my servers in my homelab. I never foresaw the PC having an NVMe failure because it was still so new. I had a lot of data, a lot of important documents, and a lot of important notes on this PC. I even had my second brain on this PC. My second brain is like my notes folder where I write down things I learn and other things.&lt;/p&gt;
&lt;p&gt;On this day, I had just finished updating my system to the latest version, so I decided to go downstairs to get a cup of tea after the long updating and patching process. I didn&amp;#39;t know that the latest version of the operating system I was running shipped a package that wasn&amp;#39;t being maintained. The package consumed a lot of RAM and overloaded my system. It was running at one hundred percent while I was busy making my tea downstairs. When I got back, I realized the system had stopped working. I couldn&amp;#39;t move my mouse. I could not do anything on it. I tried to reboot and found out that my drive had been burnt out by the overload.&lt;/p&gt;
&lt;p&gt;I lost everything. When I say everything, I mean everything. I used that management system to save most of what I know... my documents, my everything. It was devastating. Eventually I got a new NVMe after about two months when I could save up and buy a new one. At that time, I started thinking about backups. Even though I didn&amp;#39;t really take backups seriously, I tried as much as possible to back up the most important things to my Google Drive, and to take it seriously enough to enable the backups that I was running on my servers and save them into an S3 bucket.&lt;/p&gt;
&lt;p&gt;You&amp;#39;d think I would have learned my lesson by then, right? But I didn&amp;#39;t.&lt;/p&gt;
&lt;p&gt;Not so long after that, I had another unfortunate problem with my work laptop, which I had just gotten. I figured that because it was a new laptop I hadn&amp;#39;t used for long, it was safe, and I forgot to back up my notes to Google Drive. I had been taking notes since the start of my placement... I logged everything I did daily. I had a lot of notes and presentations from meetings I&amp;#39;d been to, conferences, conversations, and other important stuff that I kept for work. Unfortunately, I lost all of it when the system crashed. This put me in a really sad place for about a week. Since then, I&amp;#39;ve taken backups a lot more seriously, and I&amp;#39;d say I&amp;#39;m now an evangelist when it comes to backing things up. I always remind my friends... have you backed up your system? Have you backed up your phone? Have you backed up your backup? Have you backed up yourself? And stuff like that. So, in order not to ramble on, I&amp;#39;m just going to walk through how I currently back up the most important things on my system. I only use two tools for this. The first one is Ansible, and the second one is Restic.&lt;/p&gt;
&lt;p&gt;Ansible is a widely used infrastructure-as-code (IaC) tool. It&amp;#39;s mainly useful for managing multiple servers at once and can also be used for infrastructure automation, deployment, and other cool stuff. However, in this post we won&amp;#39;t be talking about my Ansible backups but about my Zerobyte backups, because Zerobyte is easy to self-host and easy to use. It isn&amp;#39;t meant for an actual production setting; it&amp;#39;s best suited to a homelab use case.&lt;/p&gt;
&lt;p&gt;In order to run Zerobyte, you need to have Docker and Docker Compose installed on your server. Then, you can use the provided docker-compose.yml file to start the application.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;services:
  zerobyte:
    image: ghcr.io/nicotsx/zerobyte:v0.21
    container_name: zerobyte
    restart: unless-stopped
    cap_add:
      - SYS_ADMIN
    ports:
      - &amp;quot;4096:4096&amp;quot;
    devices:
      - /dev/fuse:/dev/fuse
    environment:
      - TZ=Europe/Paris # Set your timezone here
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/lib/zerobyte:/var/lib/zerobyte
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Do not run Zerobyte on a server that is accessible from the Internet. If you really want to, make sure it&amp;#39;s behind a VPN, or change the port to something non-standard and use a secure tunnel... something like Cloudflare or Tailscale.&lt;/p&gt;
&lt;p&gt;Also, do not point this specific volume at a network share... if you do, you&amp;#39;ll face permission issues and severe performance degradation.&lt;/p&gt;
&lt;p&gt;After running the command to start Zerobyte, you can visit the web UI on the specified port and you should get a minimal interface like this:&lt;br&gt;&lt;img src=&quot;./images/backup-blog/image.png&quot; alt=&quot;alt text&quot;&gt;&lt;/p&gt;
&lt;p&gt;As you can see, the interface contains five main sections... the volume section, the repository section, the backup section, the notification section, and the settings section.&lt;/p&gt;
&lt;p&gt;The volume section represents the source data that you want to backup. The repository represents where you want your encrypted backup to be stored. And the backup section is where you create your actual backup jobs once you already have your volume and repository configured. The backup is the main important part of the application.&lt;br&gt;&lt;img src=&quot;./images/backup-blog/image-1.png&quot; alt=&quot;alt text&quot;&gt;&lt;br&gt;Zerobyte also offers a few backends that we can use for our repository... S3 bucket, Cloudflare R2 (which is what I use for my permanent backup), Google Cloud Storage, Azure Blob Storage, REST Server, SFTP, and local storage.&lt;br&gt;&lt;img src=&quot;./images/backup-blog/image-4.png&quot; alt=&quot;alt text&quot;&gt;&lt;br&gt;One very interesting thing I love about Zerobyte is the ability to create snapshots. Zerobyte automatically helps you create snapshots of your backup. So in case you lose your data or if you need to restore your data from two weeks ago, you can basically just restore from the snapshots.&lt;br&gt;&lt;img src=&quot;./images/backup-blog/image-3.png&quot; alt=&quot;alt text&quot;&gt;&lt;br&gt;To add a volume, navigate to the Volumes section in the web interface and click Create volume. You&amp;#39;ll need to fill in the volume name, volume type (SMB, NFS, WebDAV, Directory, etc.), and connection settings like credentials and paths. For local directories, you&amp;#39;ll need to mount them into the ZeroByte container first by adding a volume mapping in your docker-compose.yml, then select Directory as the volume type in the web interface.&lt;/p&gt;
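&lt;p&gt;For example, mounting a local directory into the container is just an extra volume mapping in &lt;code&gt;docker-compose.yml&lt;/code&gt; (the host path here is illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/lib/zerobyte:/var/lib/zerobyte
      - /home/me/documents:/mnt/documents:ro # then pick &amp;quot;Directory&amp;quot; in the UI
&lt;/code&gt;&lt;/pre&gt;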
&lt;p&gt;For creating repositories, navigate to the Repositories section, click Create repository, select the backend type (Local, S3, REST, rclone, etc.), and configure your connection settings.&lt;br&gt;&lt;img src=&quot;./images/backup-blog/image-5.png&quot; alt=&quot;alt text&quot;&gt;&lt;br&gt;Once you have your volumes and repositories set up, you can create backup jobs.&lt;br&gt;&lt;img src=&quot;./images/backup-blog/image-6.png&quot; alt=&quot;alt text&quot;&gt;&lt;br&gt;Navigate to the Backups section and click Create backup job. You&amp;#39;ll configure which volume to back up, which repository to store it in, set your schedule (daily, weekly, etc.), specify which files or directories to include, and set your retention policy... like keeping the last 10 snapshots, 5 dailies, 3 weeklies, and 2 monthlies. You can also set exclude patterns for specific files you don&amp;#39;t want backed up.&lt;br&gt;&lt;img src=&quot;./images/backup-blog/image-7.png&quot; alt=&quot;alt text&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;./images/backup-blog/image-8.png&quot; alt=&quot;alt text&quot;&gt;&lt;/p&gt;
&lt;p&gt;Backup jobs will run automatically according to your schedule, but you can also trigger manual backups using the Backup Now button.&lt;/p&gt;
&lt;p&gt;When you need to restore data, just navigate to the Backups section, select the backup job, choose a snapshot, browse the files and select what to restore, choose your restore location (original location or custom), and configure any restore options like overwrite mode.&lt;br&gt;&lt;img src=&quot;./images/backup-blog/image-9.png&quot; alt=&quot;alt text&quot;&gt;&lt;br&gt;ZeroByte makes it easy to restore individual files or entire directories from any snapshot.&lt;/p&gt;
&lt;p&gt;So that&amp;#39;s a simple walkthrough on how to back up your files and data. If you&amp;#39;ve been postponing this, now is the time to back up your life.&lt;/p&gt;
&lt;p&gt;Zerobyte is very easy to use because of its simple minimalist web interface and structure. One last thing worth mentioning... Zerobyte also includes notifications.&lt;/p&gt;
&lt;p&gt;Notifications are useful but optional. You can use them to confirm whether a backup completed or failed... Zerobyte sends an alert to your Discord or Slack channel. I haven&amp;#39;t set up notifications on mine yet, but I plan to do that this week.&lt;br&gt;&lt;img src=&quot;./images/backup-blog/image-10.png&quot; alt=&quot;alt text&quot;&gt;&lt;br&gt;So that&amp;#39;s all for this blog post. If you enjoyed it, thank you so much for reading. Have a good day and don&amp;#39;t forget to back up.&lt;/p&gt;
</content:encoded></item><item><title>Information consumption and Management</title><link>https://blog.thein3rovert.dev/posts/nixos/consuming-information-2026/</link><guid isPermaLink="true">https://blog.thein3rovert.dev/posts/nixos/consuming-information-2026/</guid><description>My plan for consuming and managing information this year.</description><pubDate>Thu, 01 Jan 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I started my morning creating my calendar for the weeks ahead this year. I used to avoid calendars, but they&amp;#39;ve been helping me keep my focus lately. I always had this fear of managing two calendars since I also use one for work that contains mainly work-related stuff. I can put personal events on the work calendar, but I want to give having separate ones a shot. It shouldn&amp;#39;t be that draining to manage two calendars, and this personal one connects to my phone so I&amp;#39;ll get reminders before any task occurs. This year I&amp;#39;m giving calendar notifications a real try to see how useful they can be for me.&lt;/p&gt;
&lt;p&gt;Here&amp;#39;s how I plan to consume and manage information this year. I have a now.txt.md file, which is what I&amp;#39;m currently using to write this. Its main purpose is capturing what I just worked on or what I&amp;#39;m currently working on... anything happening or that happened in the moment. In short, it&amp;#39;s like a daily work journal, but I don&amp;#39;t want to call it that since entries can end up being really long and sometimes contain code snippets from my debugging process when I don&amp;#39;t want to create new notes for debugging.&lt;/p&gt;
&lt;p&gt;I do have a daily journal but I mainly use this for work. I write basically everything going on at work, my todos and other things. Later I&amp;#39;ll write about how I organize my daily journal for work and how I use it.&lt;/p&gt;
&lt;p&gt;I also have a now-work.txt.md file. Don&amp;#39;t mind the name... it&amp;#39;s just a way to trick my brain to just write whatever right now. I use this just like my personal now.txt.md. I write whatever is going on at the moment at work. You might be wondering why I don&amp;#39;t just write it in my work journal. I did try that, but I always want to make sure my work journal is short and detailed so I can easily read up on it when I have stand-ups. I want to make sure it doesn&amp;#39;t contain unnecessary information that should be in the now-work.txt.md like my debugging process, plans for work-related projects, workflows, code snippets and more. All these go into the now-work.txt.md.&lt;/p&gt;
&lt;p&gt;You might be thinking that&amp;#39;s too much, but it&amp;#39;s really not once you get used to it. I realized that work-related stuff can sometimes conflict with personal things and I needed a better way to create that separation, especially when you&amp;#39;re keeping work and personal notes on the same laptop. Having two different laptops for writing moments of your day is just draining.&lt;/p&gt;
&lt;p&gt;In general, here&amp;#39;s the breakdown. Now.txt for personal stuff, now-work.txt for work stuff, and a daily journal specifically for work.&lt;/p&gt;
&lt;p&gt;Now.txt is a really powerful technique to trick your brain into writing a lot more and keep your thoughts flowing. You don&amp;#39;t have to keep creating new notes and folders each time you get an idea or want to quickly jot down something. Once you get used to it, your brain knows that for everything, it should remind you to use now.txt. Then later when you have time, you can separate them into new notes.&lt;/p&gt;
&lt;p&gt;Now.txt is also useful when you want to know what you did the previous day or the day before or maybe weeks before. You don&amp;#39;t have to worry about opening a lot of folders and finding the exact notes. Just open your now.txt and navigate to the date. I&amp;#39;ll write later on how I organize my now.txt, which I think is the best way since it helps a lot with information retrieval.&lt;/p&gt;
&lt;p&gt;So yeah, that&amp;#39;s still my plan for consuming and managing information this year.&lt;/p&gt;
</content:encoded></item><item><title>Optimizing todo for AI workflow</title><link>https://blog.thein3rovert.dev/posts/nixos/todo-extraction-for-ai-workflow/</link><guid isPermaLink="true">https://blog.thein3rovert.dev/posts/nixos/todo-extraction-for-ai-workflow/</guid><description>Optimizing my ticket creating workflow</description><pubDate>Sat, 27 Dec 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I maintain a &lt;code&gt;now.txt.md&lt;/code&gt; note in Obsidian where I capture all my daily ideas and tasks. I built an n8n workflow that reads this note and uses GitHub Copilot CLI to automatically create tickets from my TODOs. The problem was that I was sending the entire note every time, which sometimes contains hundreds of words of context that wasn&amp;#39;t needed. This approach meant higher API costs and, worse, creating duplicate tickets when Copilot encountered the same tasks it had already processed.&lt;/p&gt;
&lt;p&gt;You might be wondering why I even want to automate this process. I think automating ticket creation from notes removes the friction between having an idea and getting it tracked in my project management system. But I had some concerns. Sending large notes repeatedly to AI services was getting expensive. Without tracking what had been processed, I kept getting duplicate tickets. And sometimes I wanted to reprocess old dates or extract specific days, but my workflow wasn&amp;#39;t flexible enough for that.&lt;/p&gt;
&lt;h2&gt;My Initial Solution&lt;/h2&gt;
&lt;p&gt;I needed a bash script that could extract just the &lt;code&gt;## Tasks&lt;/code&gt; section from my note instead of passing everything. Simple enough... but the real challenge was preventing duplicates while maintaining flexibility.&lt;/p&gt;
&lt;h2&gt;What I Built&lt;/h2&gt;
&lt;p&gt;The solution evolved into a smart extraction script with state tracking. Here&amp;#39;s how it works:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-zsh&quot;&gt;#!/usr/bin/env zsh

# Extract specific date
./extract_todos.sh now.txt.md --date 2025-12-27

# Extract only new dates (default)
./extract_todos.sh now.txt.md

# Extract everything
./extract_todos.sh now.txt.md --all

# Reset state
./extract_todos.sh now.txt.md --reset
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The script identifies dates in &lt;code&gt;YYYY-MM-DD&lt;/code&gt; format and extracts everything under each date until the next date appears. So if I have a note structured like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-markdown&quot;&gt;## Tasks

2025-12-11

- [x] Completed task
- [ ] Incomplete task

2025-12-22

- [ ] Another task
  &amp;gt; Additional context
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The script tracks which dates have been processed in a &lt;code&gt;.processed&lt;/code&gt; file to avoid duplicates. The first time I run it, it processes all dates. The second time, it only processes new dates and won&amp;#39;t reprocess the dates from December 11th and 22nd:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-zsh&quot;&gt;# First run: processes all dates
./extract_todos.sh now.txt.md | gh copilot suggest

# Second run: only processes new dates
./extract_todos.sh now.txt.md | gh copilot suggest
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If I need to reprocess an old date, I can use the &lt;code&gt;--date&lt;/code&gt; flag:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-zsh&quot;&gt;./extract_todos.sh now.txt.md --date 2025-12-11
&lt;/code&gt;&lt;/pre&gt;
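&lt;p&gt;Under the hood, the duplicate prevention can be as simple as a &lt;code&gt;grep&lt;/code&gt; against the &lt;code&gt;.processed&lt;/code&gt; file. Here&amp;#39;s a rough sketch of the idea (not the exact script):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-zsh&quot;&gt;state_file=&amp;quot;.processed&amp;quot;
touch &amp;quot;$state_file&amp;quot;

# Process a date section only if it hasn&amp;#39;t been recorded yet
date=&amp;quot;2025-12-11&amp;quot;
if ! grep -qxF &amp;quot;$date&amp;quot; &amp;quot;$state_file&amp;quot;; then
  echo &amp;quot;processing $date&amp;quot;        # extraction happens here
  echo &amp;quot;$date&amp;quot; &amp;gt;&amp;gt; &amp;quot;$state_file&amp;quot;  # remember it for the next run
fi
&lt;/code&gt;&lt;/pre&gt;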
&lt;h2&gt;What Didn&amp;#39;t Work&lt;/h2&gt;
&lt;p&gt;My first attempt didn&amp;#39;t go too well. I only extracted incomplete tasks and filtered out completed ones along with their notes. I quickly realized this wasn&amp;#39;t giving me the full picture. I wanted the complete context of each day, including what I&amp;#39;d accomplished and any notes I&amp;#39;d attached to tasks. The incomplete tasks by themselves didn&amp;#39;t tell the whole story.&lt;/p&gt;
&lt;h2&gt;What Did Work&lt;/h2&gt;
&lt;p&gt;The final approach extracts entire date sections while maintaining a simple state file. The script uses AWK to parse the markdown structure:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-zsh&quot;&gt;awk &amp;#39;
/^## Tasks/ { in_tasks=1; next }
/^## / &amp;amp;&amp;amp; in_tasks { exit }
in_tasks &amp;amp;&amp;amp; /^[0-9]{4}-[0-9]{2}-[0-9]{2}/ {
    if (current_date &amp;amp;&amp;amp; content) {
        print current_date
        print content
    }
    current_date=$0
    content=&amp;quot;&amp;quot;
    next
}
in_tasks &amp;amp;&amp;amp; current_date {
    if (content) content = content &amp;quot;\n&amp;quot; $0
    else content = $0
}
END {
    # Flush the final date section, which the main loop never prints
    if (current_date &amp;amp;&amp;amp; content) {
        print current_date
        print content
    }
}
&amp;#39; now.txt.md
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now my workflow feels much more natural. I jot down ideas in &lt;code&gt;now.txt.md&lt;/code&gt; throughout the day, run the extraction script when I&amp;#39;m ready to create tickets, and only the new date sections get sent to GitHub Copilot CLI. No more duplicate tickets, much lower API costs, and I can still go back and reprocess specific dates when I need to. The friction between capturing an idea and having it tracked properly is almost gone.&lt;/p&gt;
&lt;p&gt;The script respects my note format (which I didn&amp;#39;t have to change), tracks state automatically, and gives me control when I need to reprocess specific dates. Simple, effective, and cost-efficient.&lt;/p&gt;
</content:encoded></item><item><title>jnix</title><link>https://blog.thein3rovert.dev/posts/nixos/jnix/</link><guid isPermaLink="true">https://blog.thein3rovert.dev/posts/nixos/jnix/</guid><description>A Just-based CLI Tool for NixOS</description><pubDate>Fri, 07 Nov 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;A few nights ago, I was going through a git repository when I came across a CLI program called &lt;code&gt;njust&lt;/code&gt;. Like &lt;code&gt;jnix&lt;/code&gt;, it&amp;#39;s a command-line tool built using &lt;code&gt;just&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Just is a command-line runner similar to &lt;code&gt;make&lt;/code&gt;. Make is a command-line tool similar to &lt;code&gt;just&lt;/code&gt; 🤣. Just kidding, it never ends...haha.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Make&lt;/strong&gt; is a &lt;a href=&quot;https://en.wikipedia.org/wiki/Command-line_interface&quot;&gt;command-line interface&lt;/a&gt; &lt;a href=&quot;https://en.wikipedia.org/wiki/Software_tool&quot;&gt;software tool&lt;/a&gt; that performs actions ordered by configured &lt;a href=&quot;https://en.wikipedia.org/wiki/Dependence_analysis&quot;&gt;dependencies&lt;/a&gt; as defined in a &lt;a href=&quot;https://en.wikipedia.org/wiki/Configuration_file&quot;&gt;configuration file&lt;/a&gt; called a &lt;em&gt;makefile&lt;/em&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Commands run using Just are called &lt;code&gt;recipes&lt;/code&gt; and are usually stored in a file called &lt;code&gt;Justfile&lt;/code&gt; that has a similar syntax to &lt;code&gt;make&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;In order to run a command, we can simply run:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;just &amp;lt;RECIPE&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
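&lt;p&gt;For instance, a minimal Justfile (purely illustrative) could define a recipe like this, which you&amp;#39;d then run with &lt;code&gt;just greet&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-just&quot;&gt;# Recipes can take parameters with defaults
greet name=&amp;quot;world&amp;quot;:
    @echo &amp;quot;Hello, {{name}}!&amp;quot;
&lt;/code&gt;&lt;/pre&gt;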
&lt;p&gt;There are plenty of use cases for this tool, but in my case I just wanted to experiment with it and use it to run some common commands I use daily on my NixOS homelab, for example:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Rebuilding my NixOS config&lt;/li&gt;
&lt;li&gt;Deploying config changes to remote hosts&lt;/li&gt;
&lt;li&gt;Sending files over to remote hosts&lt;/li&gt;
&lt;li&gt;Garbage collecting my system&lt;/li&gt;
&lt;li&gt;and more.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Obviously there are commands that do these things already, but I just think it&amp;#39;d be cool to have my own CLI tool that acts as a wrapper around these commands.&lt;/p&gt;
&lt;h2&gt;The Nix Way&lt;/h2&gt;
&lt;p&gt;The simpler way to do this is to create a Justfile anywhere inside a folder, but I decided to do it the Nix way... meaning it&amp;#39;s created as a module that can be shared between all my hosts.&lt;/p&gt;
&lt;p&gt;Also, Nix offers a function called &lt;code&gt;writeShellApplication&lt;/code&gt;, a &lt;strong&gt;higher-level helper&lt;/strong&gt; for creating a &lt;strong&gt;shell script or CLI program&lt;/strong&gt; from a bash script. We can also specify the dependencies needed by the script. Nix fetches those dependencies from nixpkgs at build time and creates a derivation that puts them on the script&amp;#39;s PATH alongside the script itself... which in this case wraps our Justfile.&lt;/p&gt;
&lt;h3&gt;Merging Recipes&lt;/h3&gt;
&lt;p&gt;First, I created a Nix variable called &lt;code&gt;mergeContentIntoJustFile&lt;/code&gt;, responsible for concatenating all the recipes (Just commands) that I will be creating and merging them into a Justfile:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;mergeContentIntoJustFile = &amp;#39;&amp;#39;
  _default:
    @printf &amp;#39;\033[1;36mjnix\033[0m\n&amp;#39;
    @printf &amp;#39;Just-based recipe runner for NixOS.\n\n&amp;#39;
    @printf &amp;#39;\033[1;33mUsage:\033[0m jnix &amp;lt;recipe&amp;gt; [args...]\n\n&amp;#39;
    @jnix --list --list-heading $&amp;#39;Available recipes:\n\n&amp;#39;

  ${lib.concatStringsSep &amp;quot;\n&amp;quot; (lib.attrValues cfg.recipes)}
&amp;#39;&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then comes a new variable for validating the Justfile syntax. This validation is done at build time so I can catch errors early... it saves me from having to debug after the build.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;validatedJustfile =
  pkgs.runCommand &amp;quot;jnix-justfile-validated&amp;quot;
    {
      nativeBuildInputs = [ pkgs.just ];
      preferLocalBuild = true;
    }
    &amp;#39;&amp;#39;
      # Write the merged justfile content to a temporary file
      echo ${lib.escapeShellArg mergeContentIntoJustFile} &amp;gt; justfile

      # Validate justfile syntax using &amp;#39;just --summary&amp;#39;
      echo &amp;quot;Validating jnix cli justfile syntax...&amp;quot;
      just --justfile justfile --summary &amp;gt;/dev/null || {
          echo &amp;quot;ERROR: jnix justfile has syntax errors!&amp;quot;
          echo &amp;quot;justfile content:&amp;quot;
          cat justfile
          exit 1
        }
      # Copy validated justfile to the nix store output path
      cp justfile $out
      echo &amp;quot;jnix justfile validation successful&amp;quot;
    &amp;#39;&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Nix also offers a function called &lt;code&gt;pkgs.runCommand&lt;/code&gt; which is a low-level Nix function for creating a derivation by running arbitrary shell commands. You give it:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A name for the derivation&lt;/li&gt;
&lt;li&gt;A set of environment variables&lt;/li&gt;
&lt;li&gt;A shell script to run&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So we pass the function the &lt;code&gt;just&lt;/code&gt; package as a dependency, which it uses to validate the syntax at build time.&lt;/p&gt;
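&lt;p&gt;As a minimal illustration (not from my config), a &lt;code&gt;runCommand&lt;/code&gt; derivation that just writes its output file might look like:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;pkgs.runCommand &amp;quot;hello-file&amp;quot;
  {
    # Build-time dependencies go into the environment
    nativeBuildInputs = [ pkgs.hello ];
  }
  &amp;#39;&amp;#39;
    # Whatever the script writes to $out becomes the derivation output
    hello &amp;gt; $out
  &amp;#39;&amp;#39;
&lt;/code&gt;&lt;/pre&gt;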
&lt;h3&gt;Creating the CLI Application&lt;/h3&gt;
&lt;p&gt;Using the &lt;code&gt;pkgs.writeShellApplication&lt;/code&gt; function, I created a shell application used with the merged Justfile containing all the gathered recipes:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;jnixScript = pkgs.writeShellApplication {
  name = &amp;quot;jnix&amp;quot;;
  runtimeInputs = [
    pkgs.jq
    pkgs.just
  ];
  text = &amp;#39;&amp;#39;
    # Execute &amp;#39;just&amp;#39; with the merged justfile, preserving current directory
    exec just --working-directory &amp;quot;$PWD&amp;quot; --justfile ${validatedJustfile} &amp;quot;$@&amp;quot;
  &amp;#39;&amp;#39;;
};
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Example Recipe&lt;/h2&gt;
&lt;p&gt;Here is an example of a recipe I have in my created Justfile:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;system = &amp;#39;&amp;#39;
  # Show system info
  [group(&amp;#39;system&amp;#39;)]
  info:
    @echo &amp;quot;Hostname: $(hostname)&amp;quot;
    @echo &amp;quot;Nixos Version: $(nixos-version)&amp;quot;
    @echo &amp;quot;Kernel: $(uname -r)&amp;quot;
    @echo &amp;quot;Generation: $(sudo nix-env --list-generations -p /nix/var/nix/profiles/system | tail -1 | awk &amp;#39;{print $1}&amp;#39;)&amp;quot;
    @echo &amp;quot;Revision: $(nixos-version --json | jq -r &amp;#39;.configurationRevision // &amp;quot;unknown&amp;quot;&amp;#39;)&amp;quot;
&amp;#39;&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It&amp;#39;s used to check for Nix system information when I run the command &lt;code&gt;jnix info&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt; jnix info
Hostname: nixos
Nixos Version: 25.11.20250924.e643668 (Xantusia)
Kernel: 6.12.48
Generation: 74
Revision: unknown
&lt;/code&gt;&lt;/pre&gt;
</content:encoded></item><item><title>Taking back my attention</title><link>https://blog.thein3rovert.dev/posts/nixos/taking-back-my-attention/</link><guid isPermaLink="true">https://blog.thein3rovert.dev/posts/nixos/taking-back-my-attention/</guid><description>How RSS Helped Me Detox from Social Media</description><pubDate>Sat, 01 Nov 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;A few months ago, I set up an RSS feed while going through a period of &lt;em&gt;intentional consumption&lt;/em&gt; and a &lt;em&gt;social media detox&lt;/em&gt;.&lt;br&gt;I don’t even remember exactly what drove me to it, but I’m sure I was simply fed up.. tired of how drained and sad I felt after just 30 minutes of scrolling through social media.&lt;/p&gt;
&lt;p&gt;Before I talk about how RSS changed my habits, here’s a quick glimpse of what my mornings used to look like.&lt;/p&gt;
&lt;h2&gt;Before the Detox&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;6:00 a.m.&lt;/strong&gt;&lt;br&gt;Wake up. Turn off my alarm. Navigate to the clock app to silence the next few alarms..because they love to scream while I’m still in the bathroom, soaked in bubbles and soap 😂.&lt;/p&gt;
&lt;p&gt;And right there, shining on my lock screen: a notification.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“Someone just liked your photo.”&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Of course, I tap it, just to see which post it was. Somehow, ten minutes later, I&amp;#39;ve watched 20 reels. I can&amp;#39;t even remember the last 15, but I know I laughed, cried, and felt sad, all in that short time.&lt;/p&gt;
&lt;p&gt;Ten minutes, and I’ve gone through a rollercoaster of emotions.&lt;br&gt;That’s not normal. It’s like my brain’s being controlled by a remote...and the remote is the Reels algorithm.&lt;/p&gt;
&lt;h2&gt;Discovering RSS&lt;/h2&gt;
&lt;p&gt;I wanted to take back control of my attention, you know.. to &lt;em&gt;be my own remote&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;So I started researching healthier ways to consume content. That’s when I came across &lt;strong&gt;RSS&lt;/strong&gt;.&lt;br&gt;It’s been around for decades, though not many people use it anymore. Still, it was exactly what I needed.&lt;/p&gt;
&lt;p&gt;I gathered blogs, newsletters, and creators I genuinely enjoy, added them to my feed, and started reading them whenever I felt the urge to scroll.&lt;br&gt;The result? I began to &lt;em&gt;enjoy&lt;/em&gt; reading again, without the noise.&lt;br&gt;Over time, I noticed real changes in myself:&lt;br&gt;My attention span improved.&lt;br&gt;I became more patient.&lt;br&gt;My days were a bit more boring, but a lot less sad.&lt;br&gt;I felt… &lt;strong&gt;human&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Maybe I’ll write a separate post on the full benefits of a social media detox later, but for now, I want to share how I organized my RSS feed...and why you should too.&lt;/p&gt;
&lt;h2&gt;Setting Up My RSS Feed&lt;/h2&gt;
&lt;p&gt;I started by identifying my main interests: &lt;strong&gt;technology&lt;/strong&gt;, &lt;strong&gt;philosophy&lt;/strong&gt;, &lt;strong&gt;academia&lt;/strong&gt;, &lt;strong&gt;photography&lt;/strong&gt;, &lt;strong&gt;podcasts&lt;/strong&gt;, and a few others.&lt;/p&gt;
&lt;p&gt;One thing I quickly learned is not to add &lt;em&gt;too many&lt;/em&gt; sources.&lt;br&gt;Even though RSS gives you full control, too much content can make you indecisive again.&lt;br&gt;So I kept my feed simple and intentional.&lt;/p&gt;
&lt;p&gt;Here’s how I categorized it:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Technology&lt;/strong&gt; → YouTube, Reddit, Medium, tech blogs&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Philosophy&lt;/strong&gt; → Email newsletters, personal blogs&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Photography&lt;/strong&gt; → Favorite creators’ sites and photo journals&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Podcasts&lt;/strong&gt; → RSS-enabled audio feeds&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Under &lt;strong&gt;Technology&lt;/strong&gt;, for example, I looked through my YouTube history and wrote down the creators I truly enjoy. I tagged each one so I could easily organize them later.&lt;/p&gt;
&lt;p&gt;For blogs, many of my favorites already had RSS feeds... so instead of saving them in my &lt;code&gt;linkding&lt;/code&gt; bookmarks, I added them directly to RSS. That way, I’d know the moment they published something new.&lt;/p&gt;
&lt;p&gt;I’m also working on a separate post where I’ll share the blogs, creators, and newsletters I follow..so if you find any interesting, you can add them to your feed too.&lt;/p&gt;
&lt;p&gt;Most newsletters go straight to email... which means they either get lost in spam or buried under other messages.&lt;br&gt;And since I’ve turned off email notifications (they can be &lt;em&gt;very&lt;/em&gt; distracting), I use a tool called &lt;a href=&quot;https://kill-the-newsletter.com/&quot;&gt;&lt;strong&gt;Kill the Newsletter&lt;/strong&gt;&lt;/a&gt;.&lt;br&gt;It gives you a unique email address to subscribe to newsletters. Every email sent there is converted into an Atom feed that you can add to your RSS reader.&lt;br&gt;It’s simple, and it keeps your inbox clean.&lt;/p&gt;
&lt;p&gt;Within my categories, I use labels for my interests: &lt;strong&gt;infrastructure&lt;/strong&gt;, &lt;strong&gt;homelab&lt;/strong&gt;, &lt;strong&gt;Nix&lt;/strong&gt;, &lt;strong&gt;NixOS&lt;/strong&gt;, &lt;strong&gt;web development&lt;/strong&gt;, and &lt;strong&gt;networking&lt;/strong&gt;.&lt;br&gt;That way, every item, whether it’s a video, blog, podcast, or newsletter, can be filtered by topic whenever I want.&lt;/p&gt;
&lt;p&gt;It’s like having a calm, curated version of the internet that &lt;em&gt;I&lt;/em&gt; control.&lt;/p&gt;
&lt;h2&gt;Final Thoughts&lt;/h2&gt;
&lt;p&gt;After setting everything up, I can confidently say:&lt;br&gt;The remote to my attention and consumption is finally in my hands.. for now.&lt;/p&gt;
&lt;p&gt;If something new comes along, I’ll adapt and share the journey here.&lt;/p&gt;
&lt;p&gt;Until then, stay intentional.&lt;br&gt;&lt;strong&gt;Bye for now!&lt;/strong&gt;&lt;/p&gt;
</content:encoded></item><item><title>Clan</title><link>https://blog.thein3rovert.dev/posts/nixos/integrating-clan-into-my-nixos-homelab---a-tinkering-journey/</link><guid isPermaLink="true">https://blog.thein3rovert.dev/posts/nixos/integrating-clan-into-my-nixos-homelab---a-tinkering-journey/</guid><description>Discovering Clan at NixCon 2025</description><pubDate>Mon, 20 Oct 2025 00:00:00 GMT</pubDate><content:encoded>&lt;h2&gt;Background: Discovering Clan at NixCon 2025&lt;/h2&gt;
&lt;p&gt;Last month, I had the opportunity to attend NixCon 2025 online (unfortunately couldn&amp;#39;t make it in person). It was an incredible experience learning about the latest developments in the Nix ecosystem. One of the talks that particularly caught my attention was about &lt;strong&gt;Clan&lt;/strong&gt; - a tool for managing NixOS machines declaratively - presented by Qubae and Kenji Berthold.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;./images/clan.png&quot; alt=&quot;Clan infrastructure management tool&quot;&gt;&lt;/p&gt;
&lt;p&gt;You can watch their presentation here: &lt;a href=&quot;https://youtu.be/wwOEKMB0HQk?si=H4vFHmxs6ysETMMy&quot;&gt;NixCon 2025 - Clan Talk&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The talk really resonated with me. I&amp;#39;ve been managing my homelab infrastructure with NixOS for a while now, using Colmena for deployments, but I was intrigued by Clan&amp;#39;s approach to machine management. While I wasn&amp;#39;t planning to immediately migrate my entire infrastructure, the idea of being able to explore new tools and potentially improve my setup was too tempting to resist.&lt;/p&gt;
&lt;p&gt;Today, I decided to take Clan for a spin - just for tinkering purposes, really. This blog post documents that journey, including the bumps along the way.&lt;/p&gt;
&lt;h2&gt;The Original Plan: Testing on Demo&lt;/h2&gt;
&lt;p&gt;My initial plan was simple: integrate Clan into my &lt;code&gt;demo&lt;/code&gt; host as a test case. This seemed like the perfect candidate - it&amp;#39;s a non-critical machine in my homelab specifically set up for experimentation.&lt;/p&gt;
&lt;p&gt;However, reality had other plans. When I went to check on the demo host, I discovered it had been down for months due to an SSD failure I&amp;#39;d forgotten about. That old Kingston drive finally gave up the ghost sometime back, and I hadn&amp;#39;t gotten around to replacing it.&lt;/p&gt;
&lt;p&gt;So, pivot time.&lt;/p&gt;
&lt;h2&gt;Plan B: Meet Octavia&lt;/h2&gt;
&lt;p&gt;Enter &lt;strong&gt;Octavia&lt;/strong&gt; - a new node I recently spun up as a test server. Fresh installation, working hardware, and perfect for experimentation. Octavia would be my guinea pig for Clan integration.&lt;/p&gt;
&lt;h2&gt;My Existing Flake Structure&lt;/h2&gt;
&lt;p&gt;Before diving into Clan, let me show you what my infrastructure looked like. I&amp;#39;m managing 5 NixOS hosts using a flake-based setup with flake-parts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;nixos&lt;/code&gt; - My main workstation&lt;/li&gt;
&lt;li&gt;&lt;code&gt;demo&lt;/code&gt; - Test server (currently deceased)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;vps-het-1&lt;/code&gt; - Production VPS&lt;/li&gt;
&lt;li&gt;&lt;code&gt;wellsjaha&lt;/code&gt; - Prod environment&lt;/li&gt;
&lt;li&gt;&lt;code&gt;octavia&lt;/code&gt; - New Prod server (our hero today)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Here&amp;#39;s the relevant part of my original &lt;code&gt;flake.nix&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;{
  description = &amp;#39;&amp;#39;
    This is a configuration for managing multiple nixos machines
  &amp;#39;&amp;#39;;

  inputs = {
    home-manager = {
      url = &amp;quot;github:nix-community/home-manager&amp;quot;;
      inputs.nixpkgs.follows = &amp;quot;nixpkgs&amp;quot;;
    };
  };

  outputs = { self, nixpkgs, flake-parts, ... }@inputs:
    flake-parts.lib.mkFlake { inherit inputs; } {
      systems = [ &amp;quot;aarch64-linux&amp;quot; &amp;quot;x86_64-linux&amp;quot; &amp;quot;aarch64-darwin&amp;quot; &amp;quot;x86_64-darwin&amp;quot; ];

      imports = [ ./modules/flake ];

      flake = let
        forAllLinuxHosts = self.inputs.nixpkgs.lib.genAttrs [
          &amp;quot;nixos&amp;quot; &amp;quot;demo&amp;quot; &amp;quot;vps-het-1&amp;quot; &amp;quot;wellsjaha&amp;quot; &amp;quot;octavia&amp;quot;
        ];
      in {
        nixosConfigurations = forAllLinuxHosts (
          host: self.inputs.nixpkgs.lib.nixosSystem {
            specialArgs = { inherit self inputs; };
            modules = [
              ./hosts/${host}
              # ... various modules
            ];
          }
        );

        # Colmena deployment configuration
        colmenaHive = colmena.lib.makeHive {
          # ... deployment configs for all hosts
        };
      };
    };
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This setup worked well. I could deploy using Colmena, rebuild individual machines with &lt;code&gt;nixos-rebuild&lt;/code&gt;, and everything was nicely organized. But I wanted to see what Clan could bring to the table.&lt;/p&gt;
&lt;h2&gt;Adding Clan: The Integration Process&lt;/h2&gt;
&lt;h3&gt;Step 1: Adding the Clan Input&lt;/h3&gt;
&lt;p&gt;First, I needed to add &lt;code&gt;clan-core&lt;/code&gt; as a flake input:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;inputs = {
  # ... existing inputs ...

  clan-core = {
    url = &amp;quot;https://git.clan.lol/clan/clan-core/archive/main.tar.gz&amp;quot;;
    inputs.nixpkgs.follows = &amp;quot;nixpkgs&amp;quot;;
  };
};
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 2: Creating the Clan Configuration&lt;/h3&gt;
&lt;p&gt;The key insight here was that I didn&amp;#39;t want to convert all my hosts to Clan immediately - just Octavia for testing. So I created a Clan configuration that only included one machine:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;clan = clan-core.lib.clan {
  self = self;
  specialArgs = {
    inherit self inputs nix-colors colmena nixpkgs-unstable-small;
  };
  meta.name = &amp;quot;octavia-clan-test&amp;quot;;

  machines = {
    octavia = {
      nixpkgs.hostPlatform = &amp;quot;x86_64-linux&amp;quot;;
      imports = [
        ./hosts/octavia
        # Clan networking configuration
        {
          clan.core.networking.targetHost = &amp;quot;10.20.0.2&amp;quot;;
        }
      ];
    };
  };
};
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Step 3: Merging Configurations&lt;/h3&gt;
&lt;p&gt;Here&amp;#39;s the clever bit: I kept my existing &lt;code&gt;nixosConfigurations&lt;/code&gt; for all hosts, then &lt;strong&gt;overrode&lt;/strong&gt; just Octavia with the Clan version:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;nixosConfigurations =
  (forAllLinuxHosts (
    host: self.inputs.nixpkgs.lib.nixosSystem {
      # ... regular configuration ...
    }
  ))
  // {
    # Override octavia with Clan configuration
    octavia = clan.config.nixosConfigurations.octavia;
  };

# Expose Clan outputs
inherit (clan.config) clanInternals;
clan = clan.config;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This approach meant:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;My other 4 hosts remained completely unchanged&lt;/li&gt;
&lt;li&gt;Octavia got the Clan treatment&lt;/li&gt;
&lt;li&gt;I could still use Colmena for the non-Clan hosts&lt;/li&gt;
&lt;li&gt;Everything coexisted peacefully&lt;/li&gt;
&lt;/ul&gt;
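&lt;p&gt;The merge works because Nix&amp;#39;s &lt;code&gt;//&lt;/code&gt; update operator is right-biased: when both attribute sets define the same key, the right-hand value wins. A quick illustration with made-up values, shown as a &lt;code&gt;nix repl&lt;/code&gt; session:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;# `//` keeps the right-hand attribute on key collisions,
# so the Clan-built octavia replaces the regular one:
nix-repl&gt; { octavia = &amp;quot;regular&amp;quot;; demo = &amp;quot;regular&amp;quot;; } // { octavia = &amp;quot;clan&amp;quot;; }
{ demo = &amp;quot;regular&amp;quot;; octavia = &amp;quot;clan&amp;quot;; }
&lt;/code&gt;&lt;/pre&gt;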
&lt;h3&gt;Step 4: Adding the Clan CLI&lt;/h3&gt;
&lt;p&gt;To interact with Clan, I updated my development shell to include the CLI:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;devShells = forAllSystems ({ pkgs }: {
  default = pkgs.mkShell {
    packages = [
      # ... existing tools ...
      clan-core.packages.${pkgs.system}.clan-cli
    ];
  };
});
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;The Bumps in the Road&lt;/h2&gt;
&lt;p&gt;Of course, it wasn&amp;#39;t all smooth sailing. I hit a couple of issues that needed fixing.&lt;/p&gt;
&lt;h3&gt;Issue 1: Duplicate Disko Imports&lt;/h3&gt;
&lt;p&gt;My first attempt included all module imports in the Clan configuration:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;imports = [
  ./hosts/octavia
  agenix.nixosModules.default
  inputs.disko.nixosModules.disko  # ← This was the problem!
  self.nixosModules.users
  # ... etc
];
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This caused an error:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;error: The option `_module.args.diskoLib&amp;#39; is defined multiple times
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The issue? My &lt;code&gt;./hosts/octavia&lt;/code&gt; configuration already imported disko! I was importing it twice, causing a conflict. The fix was simple - just import the host configuration and let it handle the rest:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;imports = [
  ./hosts/octavia
  {
    clan.core.networking.targetHost = &amp;quot;10.20.0.2&amp;quot;;
  }
];
&lt;/code&gt;&lt;/pre&gt;
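&lt;p&gt;For context on why the conflict appeared at all: my host directory is itself a module that already pulls in disko. A sketch of what &lt;code&gt;hosts/octavia/default.nix&lt;/code&gt; roughly looks like (the exact file layout here is an assumption):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;# hosts/octavia/default.nix (sketch, not my exact file)
{ inputs, ... }:
{
  imports = [
    inputs.disko.nixosModules.disko  # disko already imported here
    ./disko.nix
    ./hardware-configuration.nix
  ];
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Importing disko along two different evaluation paths can produce two distinct module instances, and their &lt;code&gt;_module.args.diskoLib&lt;/code&gt; definitions then collide, hence the error above.&lt;/p&gt;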
&lt;h3&gt;Issue 2: Missing Network Interface&lt;/h3&gt;
&lt;p&gt;The second issue was more subtle:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;error: Failed assertions:
- networking.defaultGateway.interface is not optional when using networkd.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In my Octavia configuration, I had:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;networking.defaultGateway = &amp;quot;10.20.0.254&amp;quot;;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;But when using systemd-networkd (which Clan expects), you need to specify the interface:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;networking.defaultGateway = {
  address = &amp;quot;10.20.0.254&amp;quot;;
  interface = &amp;quot;enp1s0&amp;quot;;
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I added this as an override in the Clan configuration for now, but the proper fix is to update the host configuration directly.&lt;/p&gt;
&lt;h2&gt;The Final Result&lt;/h2&gt;
&lt;p&gt;After working through these issues, here&amp;#39;s my complete integrated flake:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;{
  description = &amp;#39;&amp;#39;
    This is a configuration for managing multiple nixos machines
  &amp;#39;&amp;#39;;

  inputs = {
    home-manager = {
      url = &amp;quot;github:nix-community/home-manager&amp;quot;;
      inputs.nixpkgs.follows = &amp;quot;nixpkgs&amp;quot;;
    };
    clan-core = {
      url = &amp;quot;https://git.clan.lol/clan/clan-core/archive/main.tar.gz&amp;quot;;
      inputs.nixpkgs.follows = &amp;quot;nixpkgs&amp;quot;;
    };
  };

  outputs = {
    self,
    home-manager,
    nixpkgs,
    flake-parts,
    nixpkgs-unstable-small,
    clan-core,
    ...
  }@inputs:
    flake-parts.lib.mkFlake { inherit inputs; } {
      systems = [
        &amp;quot;aarch64-linux&amp;quot;
        &amp;quot;x86_64-linux&amp;quot;
        &amp;quot;aarch64-darwin&amp;quot;
        &amp;quot;x86_64-darwin&amp;quot;
      ];

      imports = [ ./modules/flake ];

      flake = let
        inherit (self) outputs;

        allSystems = [
          &amp;quot;aarch64-linux&amp;quot;
          &amp;quot;x86_64-linux&amp;quot;
          &amp;quot;aarch64-darwin&amp;quot;
          &amp;quot;x86_64-darwin&amp;quot;
        ];

        forAllSystems = f:
          self.inputs.nixpkgs.lib.genAttrs allSystems (
            system: f {
              pkgs = import self.inputs.nixpkgs {
                inherit system;
                config.allowUnfree = true;
              };
            }
          );

        forAllLinuxHosts = self.inputs.nixpkgs.lib.genAttrs [
          &amp;quot;nixos&amp;quot;
          &amp;quot;demo&amp;quot;
          &amp;quot;vps-het-1&amp;quot;
          &amp;quot;wellsjaha&amp;quot;
          &amp;quot;octavia&amp;quot;
        ];

        # Clan configuration for Octavia only
        clan = clan-core.lib.clan {
          self = self;
          specialArgs = {
            inherit self inputs nix-colors colmena nixpkgs-unstable-small;
          };
          meta.name = &amp;quot;octavia-clan-test&amp;quot;;

          machines = {
            octavia = {
              nixpkgs.hostPlatform = &amp;quot;x86_64-linux&amp;quot;;
              imports = [
                ./hosts/octavia
                {
                  clan.core.networking.targetHost = &amp;quot;10.20.0.2&amp;quot;;
                }
                {
                  networking.defaultGateway = {
                    address = &amp;quot;10.20.0.254&amp;quot;;
                    interface = &amp;quot;enp1s0&amp;quot;;
                  };
                }
              ];
            };
          };
        };
      in {
        # NixOS Configurations - regular for most, Clan for Octavia
        nixosConfigurations =
          (forAllLinuxHosts (
            host: self.inputs.nixpkgs.lib.nixosSystem {
              specialArgs = {
                inherit self inputs nix-colors colmena nixpkgs-unstable-small;
              };
              modules = [
                ./hosts/${host}
                self.inputs.home-manager.nixosModules.home-manager
                agenix.nixosModules.default
                inputs.disko.nixosModules.disko
                self.nixosModules.users
                self.nixosModules.nixosOs
                self.nixosModules.hardware
                self.nixosModules.core
                self.nixosModules.containers
                {
                  nixpkgs.overlays = [ self.overlays.default ];
                }
                {
                  environment.systemPackages = [
                    ghostty.packages.x86_64-linux.default
                  ];
                }
                {
                  home-manager = {
                    backupFileExtension = &amp;quot;backup&amp;quot;;
                    extraSpecialArgs = { inherit self; };
                    useGlobalPkgs = true;
                    useUserPackages = true;
                  };
                  nixpkgs.config.allowUnfree = true;
                }
              ];
            }
          ))
          // {
            # Override octavia with Clan configuration
            octavia = clan.config.nixosConfigurations.octavia;
          };

        # Expose Clan outputs
        inherit (clan.config) clanInternals;
        clan = clan.config;

        # Development shell with Clan CLI
        devShells = forAllSystems ({ pkgs }: {
          default = pkgs.mkShell {
            packages = (with pkgs; [
              alejandra
              nixd
              nil
              bash-language-server
              shellcheck
              shfmt
              nix-update
              git
              ripgrep
              sd
              fd
              pv
              fzf
              bat
              nmap
              python3
              python3Packages.wcwidth
            ]) ++ [
              self.inputs.agenix.packages.${pkgs.system}.default
              clan-core.packages.${pkgs.system}.clan-cli
            ];
          };
        });

        # NixOS Modules
        nixosModules = {
          users = ./modules/nixos/users;
          nixosOs = ./modules/nixos/os;
          locale-en-uk = ./modules/nixos/locale/en-uk;
          hardware = ./modules/hardware;
          core = ./modules/core;
          containers = ./modules/nixos/containers;
        };
      };
    };
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Testing It Out&lt;/h2&gt;
&lt;p&gt;With everything configured, it was time to test:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Update flake inputs
nix flake update

# Verify the flake structure
nix flake show

# Enter development shell
nix develop

# List Clan machines
clan machines list
# Output: octavia

# Deploy to Octavia using Clan
clan machines update octavia
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And it worked! Octavia was now being managed through Clan while my other hosts continued to work normally with their existing setup.&lt;/p&gt;
&lt;h2&gt;Reflections and Next Steps&lt;/h2&gt;
&lt;p&gt;This experiment was exactly what I hoped for - a chance to explore Clan without disrupting my existing infrastructure. Here are my key takeaways:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Incremental adoption is possible&lt;/strong&gt; - You can integrate Clan alongside existing tools&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Configuration coexistence&lt;/strong&gt; - Clan and traditional NixOS configs can live together&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Good documentation&lt;/strong&gt; - The Clan docs were helpful for troubleshooting&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Resources&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.clan.lol/&quot;&gt;Clan Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://youtu.be/wwOEKMB0HQk?si=H4vFHmxs6ysETMMy&quot;&gt;NixCon 2025 Clan Talk&lt;/a&gt; by Qubae and Kenji Berthold&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://git.clan.lol/clan/clan-core&quot;&gt;Clan Core GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.clan.lol/getting-started/convert/&quot;&gt;Converting Existing NixOS Configurations Guide&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</content:encoded></item><item><title>From Simple to Modular</title><link>https://blog.thein3rovert.dev/posts/nixos/from-simple-to-modular---nginx/</link><guid isPermaLink="true">https://blog.thein3rovert.dev/posts/nixos/from-simple-to-modular---nginx/</guid><description>Learning nix through nginx configuration</description><pubDate>Sat, 18 Oct 2025 00:00:00 GMT</pubDate><content:encoded>&lt;h1&gt;My NixOS Nginx Configuration Journey&lt;/h1&gt;
&lt;p&gt;When I first started configuring Nginx on NixOS, I did what most people do. I followed the &lt;a href=&quot;https://nixos.wiki/wiki/Nginx&quot;&gt;NixOS Wiki&lt;/a&gt; and created a straightforward, working configuration. It served my needs perfectly... until I realized I had multiple hosts to manage, and copying the same configuration with minor tweaks wasn&amp;#39;t just tedious—it felt wrong in the declarative world of Nix.&lt;/p&gt;
&lt;p&gt;This is the story of how I transformed a simple Nginx configuration into a reusable, modular system. It&amp;#39;s a journey that taught me more about Nix than any tutorial could, and if you&amp;#39;re learning NixOS, I hope it helps you too.&lt;/p&gt;
&lt;h2&gt;The Beginning: A Simple Nginx Setup&lt;/h2&gt;
&lt;p&gt;My initial configuration was straightforward and looked something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;{ config, lib, ... }:
let
  inherit (lib) mkIf mkEnableOption;
  if-nginx-enable = mkIf config.nixosSetup.services.nginx.enable;
in
{
  options.nixosSetup.services.nginx = {
    enable = mkEnableOption &amp;quot;Nginx Server&amp;quot;;
  };

  config = if-nginx-enable {
    services.nginx = {
      enable = true;
      virtualHosts.&amp;quot;localhost&amp;quot; = {
        root = &amp;quot;/var/www/localhost&amp;quot;;
        listen = [
          {
            addr = &amp;quot;0.0.0.0&amp;quot;;
            port = 80;
          }
        ];
        locations.&amp;quot;/&amp;quot; = {
          index = &amp;quot;index.html&amp;quot;;
        };
      };
    };

    systemd.tmpfiles.rules = [
      &amp;quot;d /var/www/localhost 0755 root root -&amp;quot;
      &amp;quot;L+ /var/www/localhost/index.html - - - - ${builtins.toFile &amp;quot;index.html&amp;quot; &amp;#39;&amp;#39;
        &amp;lt;!DOCTYPE html&amp;gt;
        &amp;lt;html&amp;gt;
          &amp;lt;head&amp;gt;&amp;lt;title&amp;gt;Hello from thein3rovert&amp;lt;/title&amp;gt;&amp;lt;/head&amp;gt;
          &amp;lt;body&amp;gt;
            &amp;lt;h1&amp;gt;Hello from the in3rovert Nginx on Nixos!&amp;lt;/h1&amp;gt;
            &amp;lt;p&amp;gt;Served from a nixos declarative config 😎&amp;lt;/p&amp;gt;
          &amp;lt;/body&amp;gt;
        &amp;lt;/html&amp;gt;
      &amp;#39;&amp;#39;}&amp;quot;
    ];
  };
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This worked beautifully! I had a basic enable option, a hardcoded virtual host, and a simple HTML page served from the Nix store. Everything was declarative, and I felt pretty good about it.&lt;/p&gt;
&lt;h3&gt;The Nix Store Symlink Discovery&lt;/h3&gt;
&lt;p&gt;One interesting challenge I encountered early on was getting the HTML content to actually appear in &lt;code&gt;/var/www/localhost/index.html&lt;/code&gt;. Initially, I tried using &lt;code&gt;C!&lt;/code&gt; (force copy) with systemd-tmpfiles, but I kept getting a file that just contained the Nix store path instead of the actual HTML content.&lt;/p&gt;
&lt;p&gt;The solution? Use &lt;code&gt;L+&lt;/code&gt; instead, which creates a symlink to the Nix store. This is actually more &amp;quot;Nix-like&amp;quot; anyway—instead of copying files around, we point to immutable content in the store. It&amp;#39;s elegant and exactly how NixOS is meant to work.&lt;/p&gt;
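&lt;p&gt;For reference, here is how I now think about the relevant tmpfiles rule types (worth double-checking against the &lt;code&gt;tmpfiles.d&lt;/code&gt; man page):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;systemd.tmpfiles.rules = [
  # d  : create the directory if it is missing
  &amp;quot;d /var/www/localhost 0755 root root -&amp;quot;
  # L+ : remove whatever is at the target, then (re)create a symlink
  #      pointing at the immutable file in the Nix store
  &amp;quot;L+ /var/www/localhost/index.html - - - - ${page}&amp;quot;
];
# `page` is a placeholder for a store path, e.g. builtins.toFile &amp;quot;index.html&amp;quot; html
&lt;/code&gt;&lt;/pre&gt;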
&lt;h2&gt;The &amp;quot;Wait, I Need This Everywhere&amp;quot; Moment&lt;/h2&gt;
&lt;p&gt;Then came the moment every NixOS user experiences: I needed to configure Nginx on another host. And then another. Each time, I found myself copying the configuration and manually changing:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The server name&lt;/li&gt;
&lt;li&gt;The IP addresses&lt;/li&gt;
&lt;li&gt;The ports&lt;/li&gt;
&lt;li&gt;The HTML content&lt;/li&gt;
&lt;li&gt;The root directory&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It felt... wrong. I was repeating myself, and if there&amp;#39;s one thing Nix teaches you, it&amp;#39;s that repetition is a code smell. I could already feel the maintenance burden building up. What if I wanted to add SSL to all hosts? What if I needed to change the default port? I&amp;#39;d be hunting through multiple files making the same change over and over.&lt;/p&gt;
&lt;p&gt;Plus, let&amp;#39;s be honest—sometimes I can just be a bit crazy about optimization and making things &amp;quot;proper.&amp;quot; 😄&lt;/p&gt;
&lt;h2&gt;The Transformation: Building a Reusable Module&lt;/h2&gt;
&lt;p&gt;I decided to take the plunge and create a proper, reusable module. This wasn&amp;#39;t just about solving my immediate problem—it was about learning Nix more deeply. Here&amp;#39;s what I wanted to achieve:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Multiple virtual hosts&lt;/strong&gt;: Support any number of sites per server&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Flexible configuration&lt;/strong&gt;: Make everything configurable with sensible defaults&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Type safety&lt;/strong&gt;: Leverage Nix&amp;#39;s type system to catch errors early&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Reusability&lt;/strong&gt;: Share the module across all my hosts with minimal duplication&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Understanding the Building Blocks&lt;/h3&gt;
&lt;p&gt;First, I needed to understand the NixOS module system better. Here are the key concepts I learned:&lt;/p&gt;
&lt;h4&gt;Options Define the Interface&lt;/h4&gt;
&lt;p&gt;Options are how you expose configuration to users (including future you). Each option needs:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;type&lt;/strong&gt; - What kind of data it accepts&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;default&lt;/strong&gt; - A sensible fallback value&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;description&lt;/strong&gt; - Documentation for what it does&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;Submodules for Complex Structures&lt;/h4&gt;
&lt;p&gt;When you need nested configuration (like listen addresses within a virtual host), you use submodules. They&amp;#39;re like mini-modules within your module.&lt;/p&gt;
&lt;h4&gt;mapAttrs for Transformation&lt;/h4&gt;
&lt;p&gt;The &lt;code&gt;mapAttrs&lt;/code&gt; function is your friend for transforming your custom options into the format that &lt;code&gt;services.nginx&lt;/code&gt; expects.&lt;/p&gt;
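&lt;p&gt;If you haven&amp;#39;t used it before, &lt;code&gt;mapAttrs&lt;/code&gt; calls your function once per attribute, passing the name and value, and returns a new attribute set. A quick &lt;code&gt;nix repl&lt;/code&gt; illustration with made-up values:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;nix-repl&gt; builtins.mapAttrs (name: value: value * 2) { a = 1; b = 2; }
{ a = 2; b = 4; }
&lt;/code&gt;&lt;/pre&gt;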
&lt;h3&gt;The Implementation&lt;/h3&gt;
&lt;p&gt;Here&amp;#39;s the modular configuration I built:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;{ config, lib, ... }:
let
  inherit (lib)
    mkIf
    mkEnableOption
    types
    mkOption
    ;

  createOption = mkOption;
  mapAttribute = lib.mapAttrs;
  if-nginx-enable = mkIf config.nixosSetup.services.nginx.enable;
  cfg = config.nixosSetup.services.nginx;

  # Type aliases for readability
  attributeSetOf = types.attrsOf;
  subModule = types.submodule;
  string = types.str;
  list = types.listOf;
  boolean = types.bool;
  port = types.port;

  # Configurable defaults
  serverName = &amp;quot;localhost&amp;quot;;
  baseListenAddress = &amp;quot;0.0.0.0&amp;quot;;
  basePort = 80;
in
{
  options.nixosSetup.services.nginx = {
    enable = mkEnableOption &amp;quot;Nginx Server&amp;quot;;

    virtualHosts = mkOption {
      type = attributeSetOf (subModule {
        options = {
          root = createOption {
            type = string;
            default = &amp;quot;/var/www/${config.networking.hostName}&amp;quot;;
            description = &amp;quot;Root directory for virtual host&amp;quot;;
          };

          serverName = createOption {
            type = string;
            default = &amp;quot;${serverName}&amp;quot;;
            description = &amp;quot;Server name from virtual host&amp;quot;;
          };

          listenAddresses = createOption {
            type = list (subModule {
              options = {
                addr = createOption {
                  type = string;
                  default = &amp;quot;${baseListenAddress}&amp;quot;;
                  description = &amp;quot;IP address to listen on&amp;quot;;
                };

                port = createOption {
                  type = port;
                  default = basePort;
                  description = &amp;quot;Port to listen on&amp;quot;;
                };

                ssl = mkOption {
                  type = boolean;
                  default = false;
                  description = &amp;quot;Enable SSL for this listener&amp;quot;;
                };
              };
            });

            default = [
              {
                addr = &amp;quot;${baseListenAddress}&amp;quot;;
                port = basePort;
                ssl = false;
              }
            ];
            description = &amp;quot;List of address and ports to listen on&amp;quot;;
          };

          webPage = createOption {
            type = string;
            default = &amp;quot;index.html&amp;quot;;
            description = &amp;quot;Simple Webpage&amp;quot;;
          };

          webPageContent = createOption {
            type = string;
            default = &amp;#39;&amp;#39;
              &amp;lt;!DOCTYPE html&amp;gt;
              &amp;lt;html&amp;gt;
                &amp;lt;head&amp;gt;&amp;lt;title&amp;gt;Welcome to ${config.networking.hostName}&amp;lt;/title&amp;gt;&amp;lt;/head&amp;gt;
                &amp;lt;body&amp;gt;
                  &amp;lt;h1&amp;gt;Hello from ${config.networking.hostName}!&amp;lt;/h1&amp;gt;
                  &amp;lt;p&amp;gt;Served from Nixos declarative config&amp;lt;/p&amp;gt;
                &amp;lt;/body&amp;gt;
              &amp;lt;/html&amp;gt;
            &amp;#39;&amp;#39;;
            description = &amp;quot;My Simple HomePage&amp;quot;;
          };
        };
      });
      default = { };
      description = &amp;quot;Virtual Host Configuration&amp;quot;;
    };
  };

  config = if-nginx-enable {
    services.nginx = {
      enable = true;

      virtualHosts = mapAttribute (name: vhostName: {
        serverName = vhostName.serverName;
        root = vhostName.root;
        listen = vhostName.listenAddresses;
        locations.&amp;quot;/&amp;quot; = {
          index = vhostName.webPage;
        };
      }) cfg.virtualHosts;
    };

    # Create directories and symlink HTML files for each virtual host
    systemd.tmpfiles.rules = lib.flatten (
      lib.mapAttrsToList (name: vhost: [
        &amp;quot;d ${vhost.root} 0755 root root -&amp;quot;
        &amp;quot;L+ ${vhost.root}/${vhost.webPage} - - - - ${builtins.toFile &amp;quot;${name}-${vhost.webPage}&amp;quot; vhost.webPageContent}&amp;quot;
      ]) cfg.virtualHosts
    );
  };
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Key Design Decisions&lt;/h3&gt;
&lt;p&gt;Let me break down some of the choices I made:&lt;/p&gt;
&lt;h4&gt;1. Type Aliases for Readability&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;attributeSetOf = types.attrsOf;
subModule = types.submodule;
string = types.str;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I created these aliases to make the code more readable. Yes, they&amp;#39;re just wrappers, but &lt;code&gt;attributeSetOf&lt;/code&gt; is more self-documenting than &lt;code&gt;types.attrsOf&lt;/code&gt;.&lt;/p&gt;
&lt;h4&gt;2. Smart Defaults&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;default = &amp;quot;/var/www/${config.networking.hostName}&amp;quot;;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The default root directory uses the hostname, which means each host automatically gets a unique path without manual configuration. Small touches like this reduce the cognitive load when setting up new hosts.&lt;/p&gt;
&lt;h4&gt;3. Nested Submodules for Listen Addresses&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;listenAddresses = createOption {
  type = list (subModule {
    options = {
      addr = createOption { ... };
      port = createOption { ... };
      ssl = mkOption { ... };
    };
  });
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This allows for multiple listen addresses per virtual host, each with its own IP, port, and SSL settings. It&amp;#39;s flexible without being complicated.&lt;/p&gt;
&lt;h4&gt;4. Transformation with mapAttrs&lt;/h4&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;virtualHosts = mapAttribute (name: vhostName: {
  serverName = vhostName.serverName;
  root = vhostName.root;
  listen = vhostName.listenAddresses;
  locations.&amp;quot;/&amp;quot; = {
    index = vhostName.webPage;
  };
}) cfg.virtualHosts;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is where the magic happens. I take my custom options and transform them into the format that &lt;code&gt;services.nginx.virtualHosts&lt;/code&gt; expects. It&amp;#39;s a clean separation between the interface I want to provide and the underlying NixOS options.&lt;/p&gt;
&lt;h2&gt;Using the Module Across Hosts&lt;/h2&gt;
&lt;p&gt;Now here&amp;#39;s where it all pays off. On any host, I can configure Nginx with minimal code:&lt;/p&gt;
&lt;h3&gt;Host &lt;code&gt;WellsJaha&lt;/code&gt;: Simple Setup&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;{
  nixosSetup.services.nginx = {
    enable = true;
    virtualHosts.default = {
      serverName = &amp;quot;localhost&amp;quot;;
    };
  };
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That&amp;#39;s it! Everything else uses smart defaults.&lt;/p&gt;
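&lt;p&gt;Under the hood, that tiny declaration expands to roughly the following &lt;code&gt;services.nginx&lt;/code&gt; configuration once all the defaults kick in (a sketch, assuming the machine&amp;#39;s hostname is &lt;code&gt;wellsjaha&lt;/code&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;services.nginx.virtualHosts.default = {
  serverName = &amp;quot;localhost&amp;quot;;
  root = &amp;quot;/var/www/wellsjaha&amp;quot;;  # default derived from config.networking.hostName
  listen = [ { addr = &amp;quot;0.0.0.0&amp;quot;; port = 80; ssl = false; } ];
  locations.&amp;quot;/&amp;quot;.index = &amp;quot;index.html&amp;quot;;
};
&lt;/code&gt;&lt;/pre&gt;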
&lt;h3&gt;Host &lt;code&gt;Octavia&lt;/code&gt;: Multiple Virtual Hosts&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;{
  nixosSetup.services.nginx = {
    enable = true;

    virtualHosts = {
      main = {
        serverName = &amp;quot;example.com&amp;quot;;
        root = &amp;quot;/var/www/example&amp;quot;;
        listenAddresses = [
          { addr = &amp;quot;0.0.0.0&amp;quot;; port = 80; }
          { addr = &amp;quot;0.0.0.0&amp;quot;; port = 443; ssl = true; }
        ];
      };

      api = {
        serverName = &amp;quot;api.example.com&amp;quot;;
        root = &amp;quot;/var/www/api&amp;quot;;
        listenAddresses = [
          { addr = &amp;quot;10.10.10.6&amp;quot;; port = 443; ssl = true; }
        ];
        webPageContent = &amp;#39;&amp;#39;
          &amp;lt;!DOCTYPE html&amp;gt;
          &amp;lt;html&amp;gt;
            &amp;lt;head&amp;gt;&amp;lt;title&amp;gt;API Server&amp;lt;/title&amp;gt;&amp;lt;/head&amp;gt;
            &amp;lt;body&amp;gt;&amp;lt;h1&amp;gt;API Documentation&amp;lt;/h1&amp;gt;&amp;lt;/body&amp;gt;
          &amp;lt;/html&amp;gt;
        &amp;#39;&amp;#39;;
      };
    };
  };
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Host &lt;code&gt;Bellamy&lt;/code&gt;: Custom Everything&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;{
  nixosSetup.services.nginx = {
    enable = true;
    virtualHosts.custom = {
      serverName = &amp;quot;custom.local&amp;quot;;
      root = &amp;quot;/srv/www/custom&amp;quot;;
      webPage = &amp;quot;home.html&amp;quot;;
      listenAddresses = [
        { addr = &amp;quot;192.168.1.100&amp;quot;; port = 8080; }
      ];
      webPageContent = builtins.readFile ./custom-page.html;
    };
  };
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;What I Learned&lt;/h2&gt;
&lt;p&gt;This journey taught me several important lessons about NixOS and Nix:&lt;/p&gt;
&lt;h3&gt;Start Simple, Refactor When Needed&lt;/h3&gt;
&lt;p&gt;My simple configuration wasn&amp;#39;t wrong—it was the right solution for that moment. Refactoring came naturally when I hit a real need. Don&amp;#39;t over-engineer from the start.&lt;/p&gt;
&lt;h3&gt;The Module System is Powerful&lt;/h3&gt;
&lt;p&gt;NixOS modules aren&amp;#39;t just about organizing code—they&amp;#39;re about creating interfaces. Good modules hide complexity and expose just what&amp;#39;s needed.&lt;/p&gt;
&lt;h3&gt;Types Are Your Friend&lt;/h3&gt;
&lt;p&gt;The type system caught several bugs during development. When you declare &lt;code&gt;type = types.port&lt;/code&gt;, Nix validates the input. This is huge for maintainability.&lt;/p&gt;
&lt;h3&gt;Defaults Matter&lt;/h3&gt;
&lt;p&gt;Thoughtful defaults reduce configuration burden. The &lt;code&gt;${config.networking.hostName}&lt;/code&gt; trick means I rarely need to specify the root directory explicitly.&lt;/p&gt;
&lt;h3&gt;The Nix Store is Central&lt;/h3&gt;
&lt;p&gt;Understanding how to work with the Nix store (like using &lt;code&gt;L+&lt;/code&gt; for symlinks) is fundamental to writing good Nix code. Fight with it and you&amp;#39;ll suffer; work with it and everything becomes elegant.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Refactoring my Nginx configuration from a simple, hardcoded setup to a flexible, reusable module wasn&amp;#39;t just about making my life easier (though it definitely did). It was about learning to think in Nix: understanding options, types, submodules, and the art of creating good abstractions.&lt;/p&gt;
&lt;p&gt;If you&amp;#39;re learning NixOS, I encourage you to try something similar. Take a simple configuration you&amp;#39;ve written, identify the parts you&amp;#39;d want to reuse, and try making it modular. You&amp;#39;ll learn more in the process than you would from any tutorial.&lt;/p&gt;
&lt;p&gt;And remember: sometimes being a bit crazy about optimization is exactly what pushes you to learn something new. 😄&lt;/p&gt;
</content:encoded></item><item><title>Worrying Not the Enemy</title><link>https://blog.thein3rovert.dev/posts/nixos/worrying---maybe-its-not-the-enemy/</link><guid isPermaLink="true">https://blog.thein3rovert.dev/posts/nixos/worrying---maybe-its-not-the-enemy/</guid><description>Worry is our friend.</description><pubDate>Tue, 07 Oct 2025 00:00:00 GMT</pubDate><content:encoded>&lt;h1&gt;Worrying: Maybe It’s Not the Enemy?&lt;/h1&gt;
&lt;p&gt;I am currently reading a book on worry because I felt it was a very important thing for me to address. I know other people also worry. When I refer to worry, I mean &lt;strong&gt;being always constantly worried about the future&lt;/strong&gt;. It involves ending up &lt;strong&gt;creating scenarios that are bad for you&lt;/strong&gt;. You overthink that scenario to the extent that you feel there is always going to be a bad outcome from it.&lt;/p&gt;
&lt;p&gt;I do not know if anyone else feels this way, but I certainly do. I worry too much. I think this is because I am a very overthinking person. I am unsure if that is a good thing or a bad thing. Maybe in some cases it is good, but honestly, not in all cases. My entire day is spent thinking about future things that might not go the way I expect.&lt;/p&gt;
&lt;p&gt;I create these scenarios, such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&amp;quot;What if I get to this place and this happens?&amp;quot;&lt;/li&gt;
&lt;li&gt;&amp;quot;What if this person actually does this, and this bad thing happens?&amp;quot;&lt;/li&gt;
&lt;li&gt;&amp;quot;What if I actually get what I want, but something comes up again, and I lost it all?&amp;quot;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I constantly worry about everything, and my brain just does not stop. Sometimes this overwhelming feeling makes me feel really sick. I realize I spent my entire day worried about something, but those worries did not turn out to be true. That worry exists only in my thoughts; it is not reality. When the actual event happens in the next few days, &lt;strong&gt;it is completely different from what I imagined it would be&lt;/strong&gt;. It goes completely well, and none of what I feared comes to pass.&lt;/p&gt;
&lt;h2&gt;A Shift in Perspective&lt;/h2&gt;
&lt;p&gt;I have decided now to stop, not to stop worrying but to stop creating these negative scenarios whenever I feel worried. Although I haven&amp;#39;t finished the book I am reading yet, I was able to pick up a few key things. The book states that &lt;strong&gt;worrying is good&lt;/strong&gt;. It is very good to worry, which sounds weird, but the book states this. It says that when you worry, it is a sign. It specifies &lt;strong&gt;worrying as a means to solving problems&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;The book explains that the reason we worry is because &lt;strong&gt;our mind wants us to solve a particular problem&lt;/strong&gt;. It wants us to find the solution. There is no way for us to find a solution to a problem if we do not worry about it. Therefore, worrying means we need to find a solution. It is quite funny, but true: if we decide to remove our worries, how would we solve our problems? We would not worry about the problem, and thus it would not be solved.&lt;/p&gt;
&lt;p&gt;I usually think of worrying as a bad thing. I believe this interpretation comes from culture. When I translate the English word &amp;#39;worry&amp;#39; into a word in my culture, it carries a totally different meaning. I associated &amp;#39;worry&amp;#39; with a word in my culture called &lt;strong&gt;ironu&lt;/strong&gt;. &lt;strong&gt;Ironu&lt;/strong&gt; is defined as overthinking. Although overthinking and worrying are two completely different things, I tend to see them as the same, because my mind overthinks and worries simultaneously.&lt;/p&gt;
&lt;h2&gt;The Real Issue: Overthinking the Worry&lt;/h2&gt;
&lt;p&gt;Worrying is good. &lt;strong&gt;It is the overthinking that is not good&lt;/strong&gt;. You cannot completely take worrying out of your life because it is necessary. I need to be able to worry about things because I need to find solutions for them. What I must stop doing is &lt;strong&gt;overthinking the worry&lt;/strong&gt;. Overthinking leads to creating fake scenarios in my head, resulting in a lot of anxiety and nausea. I feel sick, and my entire day does not go well. The haunting scenarios my mind creates from my worries are what eventually cause me to have a bad day.&lt;/p&gt;
&lt;p&gt;To clarify: Overthinking is not a good thing to do. &lt;strong&gt;Worrying is good because it helps us solve our problems&lt;/strong&gt;, but overthinking is a very different thing.&lt;/p&gt;
&lt;p&gt;That is all I have for today. I am going for a walk, and I just wanted to share this insight. I plan to continue reading the book because I feel like I will learn a lot from it that would help me. As I discover new things, I will share them.&lt;/p&gt;
</content:encoded></item><item><title>agenix -&gt; ragenix</title><link>https://blog.thein3rovert.dev/posts/nixos/agenix-to-ragenix/</link><guid isPermaLink="true">https://blog.thein3rovert.dev/posts/nixos/agenix-to-ragenix/</guid><description>Easy switch to ragenix</description><pubDate>Mon, 08 Sep 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Recently, I made the switch from &lt;strong&gt;agenix&lt;/strong&gt; to &lt;strong&gt;ragenix&lt;/strong&gt;. Ragenix is essentially an improved version of agenix, and, interestingly, it’s written in Rust. The transition felt pretty smooth overall.&lt;/p&gt;
&lt;p&gt;To be honest, I didn’t have to change much to get things working. The main thing I did was update the URL path in my &lt;code&gt;flake.nix&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-diff&quot;&gt;agenix = {
  inputs.nixpkgs.follows = &amp;quot;nixpkgs&amp;quot;;
- url = &amp;quot;github:ryantm/agenix&amp;quot;;
+ url = &amp;quot;github:yaxitech/ragenix&amp;quot;;
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That was pretty much it for the migration. I appreciated how straightforward it was.&lt;/p&gt;
&lt;p&gt;However, I did notice a couple of differences. With agenix, whenever I added a new secret to my &lt;code&gt;secret.nix&lt;/code&gt; file and specified a path, running the secret creation command would automatically create the directory structure for me. Ragenix doesn’t do this yet, so I have to make sure the directories exist beforehand. It’s a small thing, but it stood out to me.&lt;/p&gt;
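&lt;p&gt;As a workaround, I just create the parent directories myself before editing a new secret. A quick sketch (the paths below are examples, not my real layout):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-sh&quot;&gt;# ragenix will not create missing directories, so create them first
mkdir -p secrets/demo

# then create or edit the secret as usual
nix run github:yaxitech/ragenix -- -e secrets/demo/db-password.age --identity ~/.ssh/id_ed25519
&lt;/code&gt;&lt;/pre&gt;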
&lt;p&gt;Another thing I observed is that the &lt;code&gt;--rekey&lt;/code&gt; command in ragenix only works on all secrets at once, not on individual secrets. I’m hoping this will change in the future, as it would be nice to have more granular control.&lt;/p&gt;
&lt;p&gt;Here are the updated commands I’m using for creating and rekeying secrets:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-sh&quot;&gt;# Using nix run
nix run github:yaxitech/ragenix -- -e &amp;lt;path-to-secret.age&amp;gt; --identity &amp;lt;path-to-ssh-key&amp;gt;
nix run github:yaxitech/ragenix -- --rekey --identity &amp;lt;path-to-ssh-key&amp;gt;

# If installed as CLI
agenix -e &amp;lt;path-to-secret.age&amp;gt; --identity &amp;lt;path-to-ssh-key&amp;gt;
agenix --rekey --identity &amp;lt;path-to-ssh-key&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The move to ragenix has been pretty painless, and I’m looking forward to seeing how it evolves. I’ll keep an eye out for updates, especially around secret path creation and more flexible rekeying.&lt;/p&gt;
</content:encoded></item><item><title>Issue with Colmena</title><link>https://blog.thein3rovert.dev/posts/nixos/colmena/</link><guid isPermaLink="true">https://blog.thein3rovert.dev/posts/nixos/colmena/</guid><description>Resolving Colmena integration issues with Nix flakes by correctly configuring inputs and addressing infinite recursion errors</description><pubDate>Mon, 21 Jul 2025 10:55:00 GMT</pubDate><content:encoded>&lt;p&gt;So, I’ve been having this issue with Colmena. I added the Colmena input to my &lt;code&gt;flake.nix&lt;/code&gt; for my system and also exposed it as an output so both my flake and NixOS can make use of it. But here’s the problem—it’s not visible to my system. When I try to use it, it’s just not there.&lt;/p&gt;
&lt;p&gt;At first, I thought maybe I messed up the configuration or missed an update in the documentation. For now, I’ve been using it via &lt;code&gt;nix-shell&lt;/code&gt;, which works, but I have to pass the &lt;code&gt;--impure&lt;/code&gt; flag, and honestly, I’m not a fan of that. Let’s fix this.&lt;/p&gt;
&lt;hr&gt;
&lt;h3&gt;Step 1: Adding Colmena Input to Flake&lt;/h3&gt;
&lt;p&gt;First, I added Colmena as an input in my &lt;code&gt;flake.nix&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;    # ADDED: Colmena input
    colmena.url = &amp;quot;github:zhaofengli/colmena&amp;quot;;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then, I added it to the outputs:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;  outputs =
    {
      self,
      home-manager,
      nixpkgs,
      nix-colors,
      ghostty,
      agenix,
      disko,
      colmena,
      ...
  };
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h3&gt;Step 2: Running &lt;code&gt;nix flake check&lt;/code&gt;&lt;/h3&gt;
&lt;p&gt;When I ran &lt;code&gt;nix flake check&lt;/code&gt;, I got these warnings:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;warning: unknown flake output &amp;#39;colmena&amp;#39;
warning: unknown flake output &amp;#39;colmenaHive&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This confused me because I followed the exact setup from the official Colmena documentation on using flakes: &lt;a href=&quot;https://colmena.cli.rs/unstable/tutorial/flakes.html&quot;&gt;Colmena Using Flakes&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Here’s my current Colmena config in the flake:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;colmenaHive = colmena.lib.makeHive self.outputs.colmena;

colmena = {
  meta = {
    nixpkgs = import nixpkgs {
      system = &amp;quot;x86_64-linux&amp;quot;;
    };
  };

  # Deployment Nodes
  demo = {
    deployment = {
      targetHost = &amp;quot;demo&amp;quot;;
      targetPort = 22;
      targetUser = &amp;quot;thein3rovert&amp;quot;;
      buildOnTarget = true;
      tags = [ &amp;quot;homelab&amp;quot; ]; # TODO: Change tag later
    };
    imports = [
      ./hosts/demo
      inputs.disko.nixosModules.disko
    ];
    time.timeZone = &amp;quot;Europe/London&amp;quot;;
  };
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The issue is that my flake isn’t detecting the &lt;code&gt;colmena&lt;/code&gt; output. The documentation says Colmena now reads a new output called &lt;code&gt;colmenaHive&lt;/code&gt;, which I had already exposed. For now, I’ll hold off on using Colmena from the flake input and run it through an ephemeral &lt;code&gt;nix shell&lt;/code&gt; instead:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-sh&quot;&gt;nix shell github:zhaofengli/colmena
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h3&gt;Second Issue: Infinite Recursion in Shell&lt;/h3&gt;
&lt;p&gt;When I tried running Colmena commands from the shell, I hit another error:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;error: infinite recursion encountered
       at /nix/store/126fp22lvqmnfv1p290vcpmbf8yab4a5-source/lib/modules.nix:652:66:
          651|       extraArgs = mapAttrs (
          652|         name: _: addErrorContext (context name) (args.${name} or config._module.args.${name})
             |                                                                  ^
          653|       ) (functionArgs f);
[ERROR] -----
[ERROR] Operation failed with error: Child process exited with error code: 1
Hint: Backtrace available - Use `RUST_BACKTRACE=1` environment variable to display a backtrace
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I wasn’t sure where this recursion was coming from, but based on the error, I suspected it was caused by excessive use of:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;specialArgs = { inherit inputs outputs nix-colors; };
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The infinite recursion error was likely caused by passing the &lt;code&gt;inputs&lt;/code&gt; argument to modules that were already declared elsewhere. Since I have a common folder shared between hosts, this was creating conflicts.&lt;/p&gt;
&lt;p&gt;Using &lt;code&gt;--show-trace&lt;/code&gt;, I traced the issue to this part of my config:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt; from call site
         at /nix/store/wnj1d9ysl3k95rpajdkdm8a5igl7ywa7-source/hosts/common/default.nix:12:5:
           11|     ./users
           12|     inputs.home-manager.nixosModules.home-manager
             |     ^
           13|   ];
...
   … while evaluating the module argument `inputs&amp;#39; in &amp;quot;/nix/store/wnj1d9ysl3k95rpajdkdm8a5igl7ywa7-source/hosts/common&amp;quot;:

       error: infinite recursion encountered
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To troubleshoot, I removed the Colmena input from my &lt;code&gt;flake.nix&lt;/code&gt; and the hive config entirely. Using only the Colmena shell, I ran &lt;code&gt;colmena apply&lt;/code&gt; again and got this error:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[INFO ] Using flake: git+file:///home/thein3rovert/thein3rovert-flake
error: flake &amp;#39;git+file:///home/thein3rovert/thein3rovert-flake&amp;#39; does not provide attribute &amp;#39;packages.x86_64-linux.colmenaHive&amp;#39;, &amp;#39;legacyPackages.x86_64-linux.colmenaHive&amp;#39; or &amp;#39;colmenaHive&amp;#39;
[ERROR] -----
[ERROR] Operation failed with error: Child process exited with error code: 1
Hint: Backtrace available - Use `RUST_BACKTRACE=1` environment variable to display a backtrace
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Turns out, Colmena expects the &lt;code&gt;colmenaHive&lt;/code&gt; output to be present. Separately, to work around the recursion, I stopped importing the common config into the new VM, so it no longer pulls in the conflicting &lt;code&gt;home-manager&lt;/code&gt; configuration.&lt;/p&gt;
&lt;h3&gt;The Fix&lt;/h3&gt;
&lt;p&gt;I finally figured out the issue. Colmena behaves differently from &lt;code&gt;nixos-rebuild&lt;/code&gt; when it comes to passing flake inputs. With &lt;code&gt;nixos-rebuild&lt;/code&gt;, flake inputs are automatically passed to all modules. With Colmena, you need to explicitly pass them.&lt;/p&gt;
&lt;p&gt;Initially, I was doing this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;{
  inputs = {
    # ... existing inputs

    # Uncomment and fix Colmena input
    colmena = {
      url = &amp;quot;github:zhaofengli/colmena&amp;quot;;
    };
  };

  # ...
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;But this only passes the Colmena GitHub URL without including the flake inputs. Here’s the correct way:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;{
  inputs = {
    # ... existing inputs

    # Uncomment and fix Colmena input
    colmena = {
      url = &amp;quot;github:zhaofengli/colmena&amp;quot;;
      inputs.nixpkgs.follows = &amp;quot;nixpkgs&amp;quot;; # Pass in the flake input
    };
  };

  # ...
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then, in the Colmena configuration within the flake, I needed to pass the inputs to all nodes via the &lt;code&gt;specialArgs&lt;/code&gt; attribute in &lt;code&gt;meta&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;colmena = {
  meta = {
    nixpkgs = import nixpkgs {
      system = &amp;quot;x86_64-linux&amp;quot;;
    };
    # Pass inputs to all nodes
    specialArgs = { inherit inputs outputs; };
  };
};
&lt;/code&gt;&lt;/pre&gt;
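&lt;p&gt;Putting both pieces together, the relevant part of my flake ends up looking roughly like this (a trimmed-down sketch of my setup; the &lt;code&gt;demo&lt;/code&gt; node and its paths are just examples):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;outputs =
  inputs@{ self, nixpkgs, colmena, ... }:
  {
    # Newer Colmena releases read this output
    colmenaHive = colmena.lib.makeHive self.outputs.colmena;

    colmena = {
      meta = {
        nixpkgs = import nixpkgs { system = &amp;quot;x86_64-linux&amp;quot;; };
        # Make the flake inputs visible to every node&amp;#39;s modules
        specialArgs = { inherit inputs; };
      };

      demo = {
        deployment.targetHost = &amp;quot;demo&amp;quot;;
        imports = [ ./hosts/demo ];
      };
    };
  };
&lt;/code&gt;&lt;/pre&gt;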
&lt;p&gt;According to the latest Colmena release, if you’re using the newest version, the &lt;code&gt;colmenaHive&lt;/code&gt; output must be present in your flake. As stated in the documentation:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Colmena reads the &lt;code&gt;colmenaHive&lt;/code&gt; output in your Flake, generated with &lt;code&gt;colmena.lib.makeHive&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;You can read more about it in their official documentation &lt;a href=&quot;https://colmena.cli.rs/unstable/tutorial/flakes.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Since I’m running the older version of Colmena via &lt;code&gt;nix-shell&lt;/code&gt;, I don’t need to worry about the &lt;code&gt;colmenaHive&lt;/code&gt; output. However, when deploying a new node, I still have to add the &lt;code&gt;--impure&lt;/code&gt; flag:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;colmena apply --on &amp;lt;nodename&amp;gt; --impure
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And it should deploy successfully:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;INFO ] Selected all 1 nodes.
      🕑 4s 1 running
 demo 🕑 4s     &amp;#39;github:nixos/nixpkgs/77b584d61ff80b4cef9245829a6f1dfad5afdfa3?narHash=sha256-bmEPmSjJakAp/JojZRrUvNcDX2R5/nuX6b
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;h3&gt;Key Insights and Takeaways&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Explicit Input Passing&lt;/strong&gt;: Unlike &lt;code&gt;nixos-rebuild&lt;/code&gt;, Colmena requires you to explicitly pass flake inputs to all modules using &lt;code&gt;specialArgs&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ColmenaHive Requirement&lt;/strong&gt;: If you’re using the latest version of Colmena, ensure your flake exposes the &lt;code&gt;colmenaHive&lt;/code&gt; output.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Avoid Infinite Recursion&lt;/strong&gt;: Be cautious when sharing configurations between hosts. Conflicts in &lt;code&gt;specialArgs&lt;/code&gt; or overlapping inputs can lead to infinite recursion errors.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Use &lt;code&gt;--show-trace&lt;/code&gt;&lt;/strong&gt;: This flag is invaluable for debugging issues in your configuration.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;What I’d Do Differently&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Read the Docs Thoroughly&lt;/strong&gt;: I would double-check the latest Colmena documentation before diving into implementation.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Test Incrementally&lt;/strong&gt;: Instead of adding everything at once, I’d test each part of the configuration step-by-step to catch issues early.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Avoid Legacy Methods&lt;/strong&gt;: While using &lt;code&gt;nix-shell&lt;/code&gt; worked as a temporary fix, I’d aim to transition fully to the latest Colmena features and workflows.&lt;/li&gt;
&lt;/ul&gt;
</content:encoded></item><item><title>Manual to Bash</title><link>https://blog.thein3rovert.dev/posts/nixos/automating-my-blog-deployment-a-journey-from-manual-to-bash/</link><guid isPermaLink="true">https://blog.thein3rovert.dev/posts/nixos/automating-my-blog-deployment-a-journey-from-manual-to-bash/</guid><description>My attempt at automating my blog deployment with bash scripting as a complete beginner, documenting all the mistakes, errors, and small wins along the way to finally getting ./deploy.sh to work.</description><pubDate>Sat, 21 Jun 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I&amp;#39;ve been manually deploying my blog for way too long, and honestly, it was getting tedious. After reading this great article about &lt;a href=&quot;https://medium.com/bugbountywriteup/from-bash-to-github-actions-automating-ci-cd-for-a-real-world-saas-project-d89b251cd371&quot;&gt;From bash to github action&lt;/a&gt;, I got inspired to finally automate my deployment process. My plan is simple: start with bash scripting to learn the fundamentals, then eventually graduate to proper CI/CD pipelines.&lt;/p&gt;
&lt;h2&gt;Starting Simple: My First Deployment Script&lt;/h2&gt;
&lt;p&gt;I&amp;#39;m currently using zsh, so my script will likely have some zsh syntax initially, but I&amp;#39;ll make sure to keep it bash-compatible as I go. Here&amp;#39;s what I came up with for my first attempt:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;#!/bin/bash

# deploy.sh - This script automates my personal blog deployment; the blog is built with Astro

set -e # Exit script on fail command  TODO: Print out a message on failed command

echo &amp;quot;Deployment Starting...&amp;quot;

git pull origin master # TODO: Add steps to choose branch types or automatically detect if its master or main

npm ci # Install dependencies ( Clean install )

npm run test #  Run test to make sure everything is good

# TODO: Check if the container is currently running
# and if it is terminate and build again

npm run build # Build the application by running the build script in the package.json

# Run the database migration script from package.json (I do not have one yet)
# TODO: Check whether a migration script exists; if not, echo a message and skip
npm run migrate

pm2 restart all # Restart all application managed by Node process manager
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Writing this script was actually pretty enlightening. I found myself adding TODO comments as ideas popped up - things like giving developers feedback on what&amp;#39;s happening, handling different branch types, and checking if containers are already running. It felt natural to think through these edge cases as I was coding.&lt;/p&gt;
&lt;p&gt;I also learned a few things while writing this. The &lt;code&gt;ci&lt;/code&gt; in &lt;code&gt;npm ci&lt;/code&gt; stands for &amp;quot;clean install&amp;quot; - I&amp;#39;ve probably used this command before but never really thought about what it meant. And &lt;code&gt;pm2&lt;/code&gt; is a Node process manager, which was completely new to me.&lt;/p&gt;
&lt;h2&gt;Reality Check: First Deployment Attempt&lt;/h2&gt;
&lt;p&gt;Of course, nothing ever works on the first try. I immediately hit this error:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;zsh: ./deploy.sh: bad interpreter: /bin/bash: no such file or directory
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Had to change the shebang to work with my system. Then the build failed because I don&amp;#39;t actually have test scripts set up yet - there should definitely be a conditional check for that. Next, &lt;code&gt;pm2&lt;/code&gt; wasn&amp;#39;t installed, which made me realize I need better dependency checking.&lt;/p&gt;
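&lt;p&gt;For the missing test scripts, the conditional check I had in mind would look something like this (a sketch; the &lt;code&gt;grep&lt;/code&gt; check is a crude assumption, and a JSON-aware tool would be more robust):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Only run tests if package.json actually defines a &amp;quot;test&amp;quot; script
if grep -q &amp;#39;&amp;quot;test&amp;quot;[[:space:]]*:&amp;#39; package.json; then
  npm run test
else
  echo &amp;quot;No test script found, skipping tests&amp;quot;
fi
&lt;/code&gt;&lt;/pre&gt;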
&lt;p&gt;But here&amp;#39;s the thing that really clicked for me: my site is static! I&amp;#39;m doing static hosting with Docker, so I don&amp;#39;t need pm2 at all - that&amp;#39;s only for SSR or custom Node servers. The Docker container handles serving my static &lt;code&gt;dist&lt;/code&gt; folder just fine.&lt;/p&gt;
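&lt;p&gt;For anyone curious what serving a static &lt;code&gt;dist&lt;/code&gt; folder from a container can look like, a minimal setup is just nginx plus a copy of the build output (a sketch, not my exact Containerfile):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-dockerfile&quot;&gt;# Containerfile: serve the prebuilt Astro dist folder with nginx
FROM nginx:alpine
COPY dist/ /usr/share/nginx/html/
&lt;/code&gt;&lt;/pre&gt;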
&lt;h2&gt;Iteration 2: Docker-Focused Approach&lt;/h2&gt;
&lt;p&gt;This realization led me to completely rethink my approach. Here&amp;#39;s the updated script:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;#!/usr/bin/env bash

# deploy.sh - This script automates my personal blog deployment; the blog is built with Astro

set -e # Exit script on fail command  TODO: Print out a message on failed command

echo &amp;quot;Deployment Starting...&amp;quot;

git pull origin master # TODO: Add steps to choose branch types or automatically detect if its master or main

npm ci # Install dependencies ( Clean install )

# npm run test #  Run test to make sure everything is good

# TODO: Check if the container is currently running
# and if it is terminate and build again

npm run build # Build the application by running the build script in the package.json

# Run the database migration script from package.json (I do not have one yet)
# TODO: Check whether a migration script exists; if not, echo a message and skip
# npm run migrate

# pm2 restart all # Restart all application managed by Node process manager ( Not needed for static files )

echo &amp;quot;Building application completed successfully&amp;quot;

echo &amp;quot;Stopping the running application...&amp;quot;

# Confirm whether to run this with sudo or as the current user
sudo podman compose down

sudo podman compose up --build -d

echo &amp;quot;Deployment completed successfully&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Much cleaner! I commented out the parts I don&amp;#39;t need and focused on what actually matters for my setup. But I still couldn&amp;#39;t run this on my server because I needed more safety checks.&lt;/p&gt;
&lt;h2&gt;Adding Safety and Flexibility&lt;/h2&gt;
&lt;p&gt;Running deployment scripts can be dangerous if you&amp;#39;re not careful. I realized I needed two important safety measures:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Branch verification&lt;/strong&gt; - Make sure I&amp;#39;m deploying from the right branch&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Root user safety&lt;/strong&gt; - Avoid running everything as root&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Here&amp;#39;s my final version with these improvements:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;#!/usr/bin/env bash

# deploy.sh - Deploy Astro project with Docker Compose

set -e # Exit script on fail command  TODO: Print out a message on failed command

ALLOWED_BRANCH=&amp;quot;master&amp;quot; # Branch used for deployment (main or master)

# Get the current branch
CURRENT_BRANCH=$(git rev-parse --abbrev-ref HEAD)

# Check the correct branch
if [ &amp;quot;$CURRENT_BRANCH&amp;quot; != &amp;quot;$ALLOWED_BRANCH&amp;quot; ]; then
  echo &amp;quot;You are on the branch &amp;#39;$CURRENT_BRANCH&amp;#39;. Please switch to &amp;#39;$ALLOWED_BRANCH&amp;#39; before deploying.&amp;quot;
  exit 1
fi

echo &amp;quot;Now on the correct branch: $CURRENT_BRANCH&amp;quot;

echo &amp;quot;Deployment Starting...&amp;quot;

git pull origin &amp;quot;$ALLOWED_BRANCH&amp;quot;

npm ci # Install dependencies ( Clean install )

npm run build # Build the application by running the build script in the package.json

echo &amp;quot;Building application completed successfully&amp;quot;

echo &amp;quot;Stopping the running application...&amp;quot;

read -p &amp;quot;Use sudo for Docker Compose? (y/N): &amp;quot; USE_SUDO
COMPOSE_CMD=&amp;quot;podman compose&amp;quot;
if [[ &amp;quot;$USE_SUDO&amp;quot; == [yY] ]]; then
  COMPOSE_CMD=&amp;quot;sudo $COMPOSE_CMD&amp;quot;
fi

echo &amp;quot;Now using &amp;#39;$COMPOSE_CMD&amp;#39; &amp;quot;

$COMPOSE_CMD down
$COMPOSE_CMD up --build -d

echo &amp;quot;🚀 Deployment completed successfully&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The branch check is straightforward - it gets the current branch and compares it to what&amp;#39;s allowed. If you&amp;#39;re on the wrong branch, it stops you right there. No accidental deployments from feature branches!&lt;/p&gt;
&lt;p&gt;The sudo prompt gives me flexibility. Sometimes I need root permissions for Docker operations, sometimes I don&amp;#39;t. Rather than hardcoding it, I can decide at runtime.&lt;/p&gt;
&lt;h2&gt;Success! First Automated Deployment&lt;/h2&gt;
&lt;p&gt;Finally ran the script on my server and got this satisfying output:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Stopping the running application...
Use sudo for Docker Compose? (y/N): y
Now using &amp;#39;sudo podman compose&amp;#39;
🚀 Deployment completed successfully
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That little rocket emoji at the end felt like such a win!&lt;/p&gt;
&lt;h2&gt;What I Learned and What&amp;#39;s Next&lt;/h2&gt;
&lt;p&gt;This whole process taught me a few things:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Start simple&lt;/strong&gt; - My first script shouldn&amp;#39;t be more complex than needed&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Understand your stack&lt;/strong&gt; - Realizing I needed Docker commands instead of pm2 was a game-changer&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Safety first&lt;/strong&gt; - Branch checks and user prompts prevent costly mistakes&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Iterate quickly&lt;/strong&gt; - Each failure taught me something valuable&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I know this script is pretty basic, but I&amp;#39;m enjoying the learning process and don&amp;#39;t want to overcomplicate things yet. There&amp;#39;s obviously a lot more I could add but my goal is to learn gradually and understand each piece.&lt;/p&gt;
&lt;p&gt;Next up, I want to explore some improvements to this script. Maybe add some logging, better error messages, or even a simple configuration file. Then eventually, I&amp;#39;ll take what I&amp;#39;ve learned here and apply it to a proper CI/CD pipeline with GitHub Actions.&lt;/p&gt;
&lt;p&gt;The journey from manual deployment to automation is pretty satisfying, even in these small steps. Every time I run &lt;code&gt;./deploy.sh&lt;/code&gt; instead of remembering all those commands manually, I feel like I&amp;#39;m becoming a slightly better developer.&lt;/p&gt;
</content:encoded></item><item><title>My first nix package</title><link>https://blog.thein3rovert.dev/posts/nixos/building--my-first-nix-package/</link><guid isPermaLink="true">https://blog.thein3rovert.dev/posts/nixos/building--my-first-nix-package/</guid><description>This post describes the process of creating a Nix package for the Gruvbox Factory CLI tool on NixOS, including using nix-init for initialization, handling dependency issues, and successfully building and testing the package</description><pubDate>Sat, 12 Apr 2025 00:00:00 GMT</pubDate><content:encoded>&lt;h1&gt;Creating Nix Packages&lt;/h1&gt;
&lt;h2&gt;What is nix-init?&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;nix-init&lt;/code&gt; is a CLI tool for initialising a Nix package file, from my own understanding.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;My workflow involves using &lt;a href=&quot;https://github.com/nix-community/nix-init&quot;&gt;nix-init&lt;/a&gt; to create the package file and then the &lt;code&gt;nix build&lt;/code&gt; command to make sure the package file works. – Li Yang&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Project Goal&lt;/h2&gt;
&lt;p&gt;My goal was to package the gruvbox factory CLI tool for NixOS. After checking the nixos packages website and finding no existing package for the gruvbox tool, I decided to build it myself to learn the package creation process while working with something I wanted to use. I&amp;#39;m fond of the gruvbox theme for its eye-friendly colors, and while I ultimately want to apply this theme across all my applications, my immediate goal was to create a package that would let me convert images to use the gruvbox color palette. This would allow me to create custom gruvbox-themed images.&lt;/p&gt;
&lt;p&gt;Source: &lt;a href=&quot;https://github.com/paulopacitti/gruvbox-factory&quot;&gt;https://github.com/paulopacitti/gruvbox-factory&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Steps to Create the Package&lt;/h2&gt;
&lt;h3&gt;1. Install nix-init&lt;/h3&gt;
&lt;p&gt;Using a nix shell to access it without polluting my system just in case:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;nix shell nixpkgs#nix-init
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;2. Run nix-init&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;nix-init
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Follow the prompts:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Enter url
❯ https://github.com/paulopacitti/gruvbox-factory
Enter tag or revision (defaults to v2.0.0)
❯ v2.0.0
Enter version
❯ 2.0.0
Enter pname
❯ gruvbox-factory
How should this package be built?
❯ buildPythonApplication
Enter output path (leave as empty for the current directory)
❯ .
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I kept pressing enter for the other options until I reached &lt;code&gt;How should this package be built?&lt;/code&gt; where you get to select how you want the package built. I selected &lt;code&gt;buildPythonApplication&lt;/code&gt; since I am following the documentation &amp;quot;My Nix Journey - How to Use Nix to Setup a Dev Environment&amp;quot; by Li Yang.&lt;/p&gt;
&lt;h3&gt;3. Get Flake Template&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;nix flake init --template github:liyangau/flake-templates#local
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;4. Spawn Shell with Nix File&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;nix develop -c $SHELL
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Troubleshooting Dependencies&lt;/h2&gt;
&lt;h3&gt;Initial Error&lt;/h3&gt;
&lt;p&gt;After running the &lt;code&gt;nix develop&lt;/code&gt; command, I ran into dependency issues: some dependency versions didn’t match, because the upstream project pins exact versions in its build files while nix supplies newer ones.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-sh&quot;&gt;nix develop -c $SHELL
warning: creating lock file &amp;#39;/home/thein3rovert/Documents/01_Project/Builds/default/flake.lock&amp;#39;:
• Added input &amp;#39;nixpkgs&amp;#39;:
    &amp;#39;github:nixos/nixpkgs/d19cf9dfc633816a437204555afeb9e722386b76?narHash=sha256-lzFCg/1C39pyY2hMB2gcuHV79ozpOz/Vu15hdjiFOfI%3D&amp;#39;(2025-04-10)
• Added input &amp;#39;systems&amp;#39;:
    &amp;#39;github:nix-systems/default/da67096a3b9bf56a91d16901293e51ba5b49a27e?narHash=sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768%3D&amp;#39; (2023-04-09)
error: builder for &amp;#39;/nix/store/l576pwlj50bhngb87fk8qdrm2aazzffs-gruvbox-factory-2.0.0.drv&amp;#39; failed with exit code 1;
       last 25 log lines:
       &amp;gt; copying build/lib/factory/__main__.py -&amp;gt; build/bdist.linux-x86_64/wheel/./factory
       &amp;gt; running install_egg_info
       &amp;gt; Copying gruvbox_factory.egg-info to build/bdist.linux-x86_64/wheel/./gruvbox_factory-2.0.0-py3.12.egg-info
       &amp;gt; running install_scripts
       &amp;gt; creating build/bdist.linux-x86_64/wheel/gruvbox_factory-2.0.0.dist-info/WHEEL
       &amp;gt; creating &amp;#39;/build/source/dist/.tmp-a1x6ri75/gruvbox_factory-2.0.0-py3-none-any.whl&amp;#39; and adding &amp;#39;build/bdist.linux-x86_64/wheel&amp;#39; to it
       &amp;gt; adding &amp;#39;factory/__main__.py&amp;#39;
       &amp;gt; adding &amp;#39;factory/gruvbox-mix.txt&amp;#39;
       &amp;gt; adding &amp;#39;factory/gruvbox-pink.txt&amp;#39;
       &amp;gt; adding &amp;#39;factory/gruvbox-white.txt&amp;#39;
       &amp;gt; adding &amp;#39;gruvbox_factory-2.0.0.dist-info/LICENSE&amp;#39;
       &amp;gt; adding &amp;#39;gruvbox_factory-2.0.0.dist-info/METADATA&amp;#39;
       &amp;gt; adding &amp;#39;gruvbox_factory-2.0.0.dist-info/WHEEL&amp;#39;
       &amp;gt; adding &amp;#39;gruvbox_factory-2.0.0.dist-info/entry_points.txt&amp;#39;
       &amp;gt; adding &amp;#39;gruvbox_factory-2.0.0.dist-info/top_level.txt&amp;#39;
       &amp;gt; adding &amp;#39;gruvbox_factory-2.0.0.dist-info/RECORD&amp;#39;
       &amp;gt; removing build/bdist.linux-x86_64/wheel
       &amp;gt; Successfully built gruvbox_factory-2.0.0-py3-none-any.whl
       &amp;gt; Finished creating a wheel...
       &amp;gt; Finished executing pypaBuildPhase
       &amp;gt; Running phase: pythonRuntimeDepsCheckHook
       &amp;gt; Executing pythonRuntimeDepsCheck
       &amp;gt; Checking runtime dependencies for gruvbox_factory-2.0.0-py3-none-any.whl
       &amp;gt;   - numpy==2.2.2 not satisfied by version 2.2.3
       &amp;gt;   - setuptools==75.8.0 not satisfied by version 75.8.2.post0
       For full logs, run &amp;#39;nix log /nix/store/l576pwlj50bhngb87fk8qdrm2aazzffs-gruvbox-factory-2.0.0.drv&amp;#39;.
error: 1 dependencies of derivation &amp;#39;/nix/store/10wqs24cswkfzqgsshgax6x6p16clm5p-nix-shell-env.drv&amp;#39; failed to build
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Debugging Process&lt;/h3&gt;
&lt;p&gt;I added the following code to patch the version for the affected dependencies in my default.nix file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;postPatch = &amp;#39;&amp;#39;
  substituteInPlace pyproject.toml \
    --replace &amp;#39;numpy == 2.2.2&amp;#39; &amp;#39;numpy &amp;gt;= 2.2.2&amp;#39; \
    --replace &amp;#39;setuptools == 75.8.0&amp;#39; &amp;#39;setuptools &amp;gt;= 75.8.0&amp;#39;
&amp;#39;&amp;#39;;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This patches pyproject.toml so that the pinned dependencies accept any version at or above the one specified, instead of requiring an exact match.&lt;/p&gt;
&lt;p&gt;Running &lt;code&gt;nix develop -c $SHELL&lt;/code&gt; again produced the same error. I checked the pyproject.toml file to verify dependencies and their location, confirming it was in the root directory as expected.&lt;/p&gt;
&lt;p&gt;Upon reviewing the default.nix file again, I noticed an issue with the postPatch code. The original pyproject.toml had the dependencies written without spaces:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-toml&quot;&gt;&amp;quot;numpy==2.2.2&amp;quot;,
&amp;quot;setuptools==75.8.0&amp;quot;,
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;But my patch code had spaces:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;&amp;quot;numpy == 2.2.2&amp;quot;,
&amp;quot;setuptools == 75.8.0&amp;quot;,
&lt;/code&gt;&lt;/pre&gt;
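The fix was simply to drop the spaces so the patterns match the file byte-for-byte. The corrected postPatch (reconstructed here as a sketch, not copied verbatim from my repo) looks like this:

```nix
# default.nix: substituteInPlace only rewrites exact matches, so the
# version pins are written without spaces, exactly as in pyproject.toml.
postPatch = ''
  substituteInPlace pyproject.toml \
    --replace 'numpy==2.2.2' 'numpy>=2.2.2' \
    --replace 'setuptools==75.8.0' 'setuptools>=75.8.0'
'';
```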
&lt;p&gt;After fixing the spacing, I ran &lt;code&gt;nix develop&lt;/code&gt; again but encountered a new error message complaining that it couldn&amp;#39;t find the module &amp;#39;gruvbox-factory&amp;#39; during the pythonImportsCheckPhase.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-sh&quot;&gt;nix develop -c $SHELL
error: builder for &amp;#39;/nix/store/w9wfy7wp88mr8grz8vgmcdbdfqmv9hfj-gruvbox-factory-2.0.0.drv&amp;#39; failed with exit code 1;
       last 25 log lines:
       &amp;gt; stripping (with command strip and flags -S -p) in  /nix/store/8v353k03iyxfkp0hlclidzh5ywzzys5j-gruvbox-factory-2.0.0/lib/nix/store/8v353k03iyxfkp0hlclidzh5ywzzys5j-gruvbox-factory-2.0.0/bin
       &amp;gt; shrinking RPATHs of ELF executables and libraries in /nix/store/xdxwy3ajnwyqpllkizb3wzga6vnpg3w4-gruvbox-factory-2.0.0-dist
       &amp;gt; checking for references to /build/ in /nix/store/xdxwy3ajnwyqpllkizb3wzga6vnpg3w4-gruvbox-factory-2.0.0-dist...
       &amp;gt; patching script interpreter paths in /nix/store/xdxwy3ajnwyqpllkizb3wzga6vnpg3w4-gruvbox-factory-2.0.0-dist
       &amp;gt; Rewriting #!/nix/store/f2krmq3iv5nibcvn4rw7nrnrciqprdkh-python3-3.12.9/bin/python3.12 to #!/nix/store/f2krmq3iv5nibcvn4rw7nrnrciqprdkh-python3-3.12.9
       &amp;gt; wrapping `/nix/store/8v353k03iyxfkp0hlclidzh5ywzzys5j-gruvbox-factory-2.0.0/bin/gruvbox-factory&amp;#39;...
       &amp;gt; Executing pythonRemoveTestsDir
       &amp;gt; Finished executing pythonRemoveTestsDir
       &amp;gt; Running phase: installCheckPhase
       &amp;gt; no Makefile or custom installCheckPhase, doing nothing
       &amp;gt; Running phase: pythonCatchConflictsPhase
       &amp;gt; Running phase: pythonRemoveBinBytecodePhase
       &amp;gt; Running phase: pythonImportsCheckPhase
       &amp;gt; Executing pythonImportsCheckPhase
       &amp;gt; Check whether the following modules can be imported: gruvbox-factory
       &amp;gt; Traceback (most recent call last):
       &amp;gt;   File &amp;quot;&amp;lt;string&amp;gt;&amp;quot;, line 1, in &amp;lt;module&amp;gt;
       &amp;gt;   File &amp;quot;&amp;lt;string&amp;gt;&amp;quot;, line 1, in &amp;lt;lambda&amp;gt;
       &amp;gt;   File &amp;quot;/nix/store/f2krmq3iv5nibcvn4rw7nrnrciqprdkh-python3-3.12.9/lib/python3.12/importlib/__init__.py&amp;quot;, line 90, in import_module
       &amp;gt;     return _bootstrap._gcd_import(name[level:], package, level)
       &amp;gt;            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
       &amp;gt;   File &amp;quot;&amp;lt;frozen importlib._bootstrap&amp;gt;&amp;quot;, line 1387, in _gcd_import
       &amp;gt;   File &amp;quot;&amp;lt;frozen importlib._bootstrap&amp;gt;&amp;quot;, line 1360, in _find_and_load
       &amp;gt;   File &amp;quot;&amp;lt;frozen importlib._bootstrap&amp;gt;&amp;quot;, line 1324, in _find_and_load_unlocked
       &amp;gt; ModuleNotFoundError: No module named &amp;#39;gruvbox-factory&amp;#39;
       For full logs, run &amp;#39;nix log /nix/store/w9wfy7wp88mr8grz8vgmcdbdfqmv9hfj-gruvbox-factory-2.0.0.drv&amp;#39;.
error: 1 dependencies of derivation &amp;#39;/nix/store/i8rvnijpg9b9vg9z7czds5h5rqwmzacp-nix-shell-env.drv&amp;#39; failed to build
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The Nix logs can be difficult to parse, but in this case the error was clear: when trying to develop the package, Nix could not find a module named &lt;code&gt;gruvbox-factory&lt;/code&gt;. This occurred during the &lt;code&gt;pythonImportsCheckPhase&lt;/code&gt;. Checking my default.nix file, I saw that pythonImportsCheck was configured to look for &amp;quot;gruvbox-factory&amp;quot; as the module name.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;pythonImportsCheck = [
  &amp;quot;gruvbox-factory&amp;quot;
];
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Initially, I assumed the module name would match the package name (&amp;quot;gruvbox-factory&amp;quot;), since that seemed logical. However, upon checking the pyproject.toml file when the error occurred, I discovered that the actual module name defined in the setuptools configuration was different. The package used a different module name internally than what was exposed in the package name.&lt;/p&gt;
&lt;p&gt;Checking the pyproject.toml revealed the actual module name:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-toml&quot;&gt;[tool.setuptools.packages.find]
include = [&amp;quot;factory&amp;quot;]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I updated the pythonImportsCheck in default.nix accordingly:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;pythonImportsCheck = [
    &amp;quot;factory&amp;quot;  # Changed from &amp;quot;gruvbox-factory&amp;quot;
];
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After this change, running &lt;code&gt;nix develop&lt;/code&gt; succeeded. I proceeded to run &lt;code&gt;nix build&lt;/code&gt; to create the package and make it accessible in my shell.&lt;/p&gt;
&lt;h2&gt;Building the Package&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;nix build \
  --impure --expr \
  &amp;#39;let pkgs = import &amp;lt;nixpkgs&amp;gt; { }; in pkgs.callPackage ./default.nix {}&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Successful Build Result&lt;/h3&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;~/Documents/01_Project/Builds/default  ls
.rw-r--r-- 1.2k thein3rovert 12 Apr 11:45  default.nix
.rw-r--r-- 1.0k thein3rovert 12 Apr 11:17  flake.lock
.rw-r--r--  586 thein3rovert 12 Apr 11:10  flake.nix
lrwxrwxrwx    - thein3rovert 12 Apr 11:48  result -&amp;gt; /nix/store/1wjcirbfks8xp2svp4qd9q2snq4j1j7i-gruvbox-factory-2.0.0
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Testing the Package&lt;/h2&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;~/Documents/01_Project/Builds/default  gruvbox-factory
usage: gruvbox-factory [-h] [-p [{white,pink,mix}]] [-i IMAGES [IMAGES ...]]
A simple cli to manufacture Gruvbox themed wallpapers.
options:
  -h, --help            show this help message and exit
  -p [{white,pink,mix}], --palette [{white,pink,mix}]
                        choose your palette, panther &amp;#39;pink&amp;#39; (default), snoopy &amp;#39;white&amp;#39; or smooth &amp;#39;mix&amp;#39;
  -i IMAGES [IMAGES ...], --images IMAGES [IMAGES ...]
                        path(s) to the image(s).
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note: the package is only available inside the shell; once I exit, the built package is no longer on my PATH. I didn&amp;#39;t try other approaches to fixing the dependency pins this time, but if I run into similar dependency issues again I&amp;#39;ll explore them and write about how it went, so stay tuned... and have a great day.&lt;/p&gt;
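If you do want the tool available outside the dev shell, one declarative option (a sketch; the path is a placeholder for wherever the derivation actually lives) is to pull the same default.nix into your system packages with callPackage, just like the build command above did:

```nix
# configuration.nix sketch: build the package via callPackage and put it
# on the system PATH permanently. Adjust the placeholder path.
environment.systemPackages = [
  (pkgs.callPackage /path/to/gruvbox-factory/default.nix { })
];
```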
&lt;h3&gt;Resources&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://tech.aufomm.com/my-nix-journey-how-to-use-nix-to-set-up-dev-environment/&quot;&gt;https://tech.aufomm.com/my-nix-journey-how-to-use-nix-to-set-up-dev-environment/&lt;/a&gt;&lt;br&gt;&lt;a href=&quot;https://github.com/paulopacitti/gruvbox-factory&quot;&gt;https://github.com/paulopacitti/gruvbox-factory&lt;/a&gt;&lt;/p&gt;
</content:encoded></item><item><title>No Space Left on Device</title><link>https://blog.thein3rovert.dev/posts/nixos/resolving-no-space-left-on-device-error-on-nixos/</link><guid isPermaLink="true">https://blog.thein3rovert.dev/posts/nixos/resolving-no-space-left-on-device-error-on-nixos/</guid><description>Learn how to resolve the &apos;No space left on device&apos; error on NixOS by optimizing tmpfs and swap space for seamless package installations.</description><pubDate>Mon, 12 Aug 2024 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Recently, I encountered a &amp;quot;No space left on device&amp;quot; error while trying to install IntelliJ on my NixOS system. Despite having 70 GB of available space, the issue persisted. As a newcomer to NixOS, dual-booting with Windows, I was puzzled by this problem.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Understanding the Issue:&lt;/strong&gt;&lt;br&gt;The error wasn&amp;#39;t due to a lack of total disk space but was likely related to the temporary directory (/tmp) or the Nix store (/nix/store) running out of space. This is a common issue in NixOS, especially with large package installations. I suspected the Nix store was full, so I tried the following commands to free up space:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;nix store optimise&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;nix store gc&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Unfortunately, these didn&amp;#39;t resolve the issue.&lt;br&gt;I turned to Reddit and other online resources, hoping to find others who faced similar issues. However, documentation was sparse, and solutions were not clearly explained.&lt;br&gt;&lt;strong&gt;Solutions Tried:&lt;/strong&gt;&lt;br&gt;Here are some options I explored:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Checking /tmp Directory:&lt;/strong&gt; Ensured it wasn&amp;#39;t full.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Increasing Nix Store Size:&lt;/strong&gt; Considered adjusting the Nix store size.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Clearing Cache:&lt;/strong&gt; Attempted to clear any unnecessary cache files.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Increase the tmpfs size&lt;/strong&gt;: You can try increasing the tmpfs size by adding the following line to your &lt;code&gt;/etc/nixos/configuration.nix&lt;/code&gt; file:&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;boot.runSize = &amp;quot;10G&amp;quot;;  # adjust the size as needed
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then, restart your system and try installing the package again.&lt;/p&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use a larger swap partition&lt;/strong&gt;: If you have a swap partition, ensure it&amp;#39;s large enough to accommodate the temporary build process. You can check the current swap size using &lt;code&gt;swapon -s&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Mount the Nix store with a larger size&lt;/strong&gt;: You can remount the Nix store with a larger size using the following command:&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;mount -o remount,size=10G,noatime /nix/.rw-store
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Adjust the size as needed.&lt;/p&gt;
&lt;ol start=&quot;4&quot;&gt;
&lt;li&gt;&lt;strong&gt;Clear temporary files&lt;/strong&gt;: Try clearing temporary files and directories, including &lt;code&gt;/tmp&lt;/code&gt; and &lt;code&gt;/var/tmp&lt;/code&gt;, to free up space.&lt;/li&gt;
&lt;/ol&gt;
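For option 4, deleting by age is safer than wiping everything. Here is a minimal sketch you can rehearse on a scratch directory before pointing anything at the real /tmp:

```shell
# Rehearse an age-based cleanup on a throwaway directory.
scratch=$(mktemp -d)
touch "$scratch/fresh-file"
# Delete entries older than one day; a file created just now is untouched.
find "$scratch" -mindepth 1 -mtime +1 -delete
ls "$scratch"        # prints: fresh-file
rm -rf "$scratch"    # tidy up the demo
```

Swap the scratch directory for `/tmp` and `/var/tmp` (with sudo) once you are happy with what the find expression matches.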
&lt;p&gt;None of those solutions fixed the problem: every time I built a package, I kept getting the &amp;quot;no space left on device&amp;quot; error. Then I came across a blog post that solved it.&lt;br&gt;&lt;strong&gt;Solutions Explored:&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Increase tmpfs Size:&lt;/strong&gt;&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Explanation:&lt;/strong&gt; tmpfs stores files in volatile memory. Increasing its size allows more files to be stored in memory.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;How to Apply:&lt;/strong&gt;&lt;ul&gt;
&lt;li&gt;Add the following line to &lt;code&gt;/etc/nixos/configuration.nix&lt;/code&gt;:&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;boot.tmp.tmpfsSize = &amp;quot;4G&amp;quot;;  # Adjust this size as needed
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;Apply the changes with:&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo nixos-rebuild switch
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Consideration:&lt;/strong&gt; Ensure enough free RAM or swap to support the increased size.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Increase Swap Space:&lt;/strong&gt;&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Explanation:&lt;/strong&gt; Swap space acts as overflow memory when RAM is full, supporting resource-heavy operations.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Check Temporary File Storage:&lt;/strong&gt;&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Use the following command to check current tmpfs usage:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;df -h | grep tmpfs
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If low, increase the &lt;code&gt;/tmp&lt;/code&gt; size by setting &lt;code&gt;boot.tmp.tmpfsSize&lt;/code&gt; in the NixOS configuration.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
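Relatedly, the swap space from point 2 can also be grown declaratively on NixOS rather than with manual partitioning. A sketch, where the file path and size are assumptions to adapt:

```nix
# configuration.nix: have NixOS create and manage a 16 GiB swap file.
swapDevices = [
  { device = "/swapfile"; size = 16 * 1024; }  # size is in MiB
];
```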
&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;br&gt;After adjusting the tmpfs size, the error was resolved. This experience taught me valuable lessons about managing disk space on NixOS, and I hope it helps others facing similar issues.&lt;/p&gt;
&lt;h4&gt;Resources&lt;/h4&gt;
&lt;p&gt;&lt;a href=&quot;https://nixos.wiki/wiki/Storage_optimization&quot;&gt;https://nixos.wiki/wiki/Storage_optimization&lt;/a&gt;&lt;/p&gt;
</content:encoded></item><item><title>ddutils on Nixos</title><link>https://blog.thein3rovert.dev/posts/nixos/how-to-set-up-ddutils-on-nixos/</link><guid isPermaLink="true">https://blog.thein3rovert.dev/posts/nixos/how-to-set-up-ddutils-on-nixos/</guid><description>This post provides a guide on installing and configuring ddcutil on NixOS</description><pubDate>Sun, 04 Aug 2024 00:00:00 GMT</pubDate><content:encoded>&lt;h2&gt;Installation and Configuration&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Search for the &lt;code&gt;ddcutil&lt;/code&gt; package on NixOS package search, or simply add it to the package list in your Nix configuration:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;environment.systemPackages = with pkgs; [
  ddcutil
];
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Add it to configuration.nix and rebuild:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;sudo nixos-rebuild switch
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;Verify it is installed by running the &lt;code&gt;ddcutil detect&lt;/code&gt; command. If you get an error about being unable to find the &lt;code&gt;/dev/i2c-*&lt;/code&gt; devices, the fix is covered below.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo ddcutil detect
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should get something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;   I2C bus:  /dev/i2c-4
   DRM connector:           card1-HDMI-A-1
   EDID synopsis:
      Mfg id:               SEM - Samsung Electronics Company Ltd
      Model:                DM700A-H
      Product code:         804  (0x0324)
      Serial number:
      Binary serial number: 0 (0x00000000)
      Manufacture year:     2012,  Week: 0

Invalid display
   I2C bus:  /dev/i2c-12
   DRM connector:           card1-eDP-1
   EDID synopsis:
      Mfg id:               BOE - BOE
      Model:
      Product code:         2449  (0x0991)
      Serial number:
      Binary serial number: 0 (0x00000000)
      Manufacture year:     2020,  Week: 33
   This is a laptop display.  Laptop displays support DDC/CI
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you get the error saying you don&amp;#39;t have the &lt;code&gt;i2c&lt;/code&gt; kernel module, we need to load it manually using the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo modprobe i2c-dev
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should now see i2c device nodes (numbered 1 to 16 on my machine).&lt;/p&gt;
&lt;p&gt;You can verify your &lt;code&gt;i2c&lt;/code&gt; devices with this command, which lists all the i2c device nodes:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;ls /dev/i2c-*
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then run &lt;code&gt;ddcutil detect&lt;/code&gt; again; you should get information about all the monitors you are connected to.&lt;/p&gt;
&lt;p&gt;Before setting the brightness, check the current brightness of your monitors by running the command below:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo ddcutil getvcp 10
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After that, try increasing and decreasing the brightness. Run&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo ddcutil setvcp 10 100 # higher value = brighter
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;to raise the brightness, then&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo ddcutil setvcp 10 10 # lower value = dimmer
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;to lower it.&lt;/p&gt;
&lt;p&gt;You can also verify that your monitor supports brightness control by running:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo ddcutil capabilities | grep &amp;quot;Feature: 10&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The next thing to do is to create a &lt;code&gt;udev&lt;/code&gt; rule. This gives the &lt;code&gt;i2c&lt;/code&gt; group read and write permissions, so non-root users can access and control I2C devices with tools like &lt;code&gt;ddcutil&lt;/code&gt; without having to use &lt;code&gt;sudo&lt;/code&gt; every time. We can do that with this command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-sh&quot;&gt;sudo cp /usr/share/ddcutil/data/60-ddcutil-i2c.rules /etc/udev/rules.d
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you encounter an error here, it is because NixOS handles file systems and package management differently: the &lt;code&gt;share&lt;/code&gt; directory containing the rules file is not where the command expects it. After some research I found that if it is not under &lt;code&gt;/usr/share/&lt;/code&gt;, it can be found under &lt;code&gt;/run/current-system/sw/share/ddcutil&lt;/code&gt;, so we can run the command again with the correct path:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-sh&quot;&gt;sudo cp /run/current-system/sw/share/ddcutil/data/60-ddcutil-i2c.rules /etc/udev/rules.d
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;On other Linux distros this approach would work, but as every Nix user knows, everything has to be declarative; the copy fails because the target is a read-only file system:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;sudo cp /run/current-system/sw/share/ddcutil/data/60-ddcutil-i2c.rules /etc/udev/rules.d
cp: cannot create regular file &amp;#39;/etc/udev/rules.d/60-ddcutil-i2c.rules&amp;#39;: Read-only file system
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We need to add the following to our configuration.nix file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-nix&quot;&gt;services.udev.packages = [
  (pkgs.runCommand &amp;quot;custom-udev-rules&amp;quot; { buildInputs = [ pkgs.coreutils ]; } &amp;#39;&amp;#39;
    mkdir -p $out/lib/udev/rules.d
    cp ${pkgs.ddcutil}/share/ddcutil/data/60-ddcutil-i2c.rules $out/lib/udev/rules.d/
  &amp;#39;&amp;#39;)
];
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This adds &lt;code&gt;60-ddcutil-i2c.rules&lt;/code&gt; directly to the udev rules of your NixOS system.&lt;br&gt;After that, save the changes and run the following command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-sh&quot;&gt;sudo nixos-rebuild switch
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After rebuilding, we would normally reboot to apply the changes, but there is a better way to get &lt;code&gt;i2c&lt;/code&gt; access working without rebooting the system:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;sudo groupadd --system i2c

sudo usermod $USER -aG i2c
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You might be wondering why creating the i2c group (or groups in general) matters: groups are how Linux manages permissions on devices and their resources. In the case of I2C, access to the &lt;code&gt;/dev/i2c-*&lt;/code&gt; device nodes is gated by group membership, and &lt;code&gt;ddcutil&lt;/code&gt; needs that access in order to interact with monitor settings.&lt;/p&gt;
&lt;p&gt;Now that we have created the i2c group, we need to verify it. We can do that using this command:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;groups $USER
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can also check the &lt;code&gt;i2c&lt;/code&gt; group file to see if the user is listed in the group:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;grep i2c /etc/group
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should get a result like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;i2c:x:544:$USER
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, we also need to make sure the &lt;code&gt;i2c-dev&lt;/code&gt; module is loaded automatically. We can do that using these commands:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo touch /etc/modules-load.d/i2c.conf
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;sudo sh -c &amp;#39;echo &amp;quot;i2c-dev&amp;quot; &amp;gt;&amp;gt; /etc/modules-load.d/i2c.conf&amp;#39;
&lt;/code&gt;&lt;/pre&gt;
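Since /etc is generated on NixOS, the imperative commands above may not survive a rebuild; the declarative equivalent in configuration.nix (a sketch, with the username as a placeholder) is:

```nix
# configuration.nix: load i2c-dev at boot and set up the group declaratively.
boot.kernelModules = [ "i2c-dev" ];
users.groups.i2c = { };
users.users.youruser.extraGroups = [ "i2c" ];  # replace "youruser"
```

Recent nixpkgs also ships a `hardware.i2c.enable` option that reportedly sets up the group and udev rules in one line; worth checking whether it exists on your channel.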
&lt;p&gt;Then we reboot for the changes to take full effect&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;sudo reboot
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Resources&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://www.ddcutil.com/i2c_permissions_using_group_i2c/&quot;&gt;https://www.ddcutil.com/i2c_permissions_using_group_i2c/&lt;/a&gt;&lt;br&gt;&lt;a href=&quot;https://discourse.nixos.org/t/proper-way-to-access-share-folder/20495&quot;&gt;https://discourse.nixos.org/t/proper-way-to-access-share-folder/20495&lt;/a&gt;&lt;br&gt;&lt;a href=&quot;https://github.com/daitj/gnome-display-brightness-ddcutil/blob/master/README.md&quot;&gt;https://github.com/daitj/gnome-display-brightness-ddcutil/blob/master/README.md&lt;/a&gt;&lt;br&gt;&lt;a href=&quot;https://search.nixos.org/packages?channel=24.05&amp;show=gnomeExtensions.brightness-control-using-ddcutil&amp;from=0&amp;size=50&amp;sort=relevance&amp;type=packages&amp;query=ddcutil&quot;&gt;https://search.nixos.org/packages?channel=24.05&amp;amp;show=gnomeExtensions.brightness-control-using-ddcutil&amp;amp;from=0&amp;amp;size=50&amp;amp;sort=relevance&amp;amp;type=packages&amp;amp;query=ddcutil&lt;/a&gt;&lt;br&gt;&lt;a href=&quot;https://github.com/NixOS/nixpkgs/issues/292049&quot;&gt;https://github.com/NixOS/nixpkgs/issues/292049&lt;/a&gt;&lt;/p&gt;
</content:encoded></item></channel></rss>