Compare commits

...

8 Commits

Author SHA1 Message Date
f9478b133f not a draft 2026-03-23 21:55:44 +00:00
009cb0d31f new article 2026-03-23 21:54:55 +00:00
d601a28f5c not a draft 2026-03-21 21:21:01 +00:00
f02418421e new article 2026-03-21 21:20:19 +00:00
e66d761b8f removed .session file 2026-03-21 16:09:50 +00:00
9fd41b7742 categories need to be lower case for some reason 2026-03-21 16:08:29 +00:00
dd1a0d984d trying to make category searching work 2026-03-21 16:04:00 +00:00
6c62e2208c changes 2026-03-21 15:23:34 +00:00
6 changed files with 428 additions and 61 deletions

1
.gitignore vendored Normal file

@@ -0,0 +1 @@
.session


@@ -1,53 +0,0 @@
let SessionLoad = 1
let s:so_save = &g:so | let s:siso_save = &g:siso | setg so=0 siso=0 | setl so=-1 siso=-1
let v:this_session=expand("<sfile>:p")
silent only
silent tabonly
cd ~/Documents/voidarc/content
if expand('%') == '' && !&modified && line('$') <= 1 && getline(1) == ''
let s:wipebuf = bufnr('%')
endif
let s:shortmess_save = &shortmess
if &shortmess =~ 'A'
set shortmess=aoOA
else
set shortmess=aoO
endif
badd +100 posts/site.norg
argglobal
%argdel
edit posts/site.norg
argglobal
setlocal foldmethod=expr
setlocal foldexpr=v:lua.vim.treesitter.foldexpr()
setlocal foldmarker={{{,}}}
setlocal foldignore=#
setlocal foldlevel=99
setlocal foldminlines=1
setlocal foldnestmax=20
setlocal foldenable
21
sil! normal! zo
31
sil! normal! zo
let s:l = 100 - ((26 * winheight(0) + 27) / 54)
if s:l < 1 | let s:l = 1 | endif
keepjumps exe s:l
normal! zt
keepjumps 100
normal! 0111|
tabnext 1
if exists('s:wipebuf') && len(win_findbuf(s:wipebuf)) == 0 && getbufvar(s:wipebuf, '&buftype') isnot# 'terminal'
silent exe 'bwipe ' . s:wipebuf
endif
unlet! s:wipebuf
set winheight=1 winwidth=20
let &shortmess = s:shortmess_save
let s:sx = expand("<sfile>:p:r")."x.vim"
if filereadable(s:sx)
exe "source " . fnameescape(s:sx)
endif
let &g:so = s:so_save | let &g:siso = s:siso_save
doautoall SessionLoadPost
unlet SessionLoad
" vim: set ft=vim :


@@ -8,7 +8,7 @@ categories: [
meta
]
created: 2026-03-21T11:29:19+00:00
updated: 2026-03-21T11:31:37+0100
updated: 2026-03-23T20:55:01+0100
draft: false
layout: home
version: 1.1.1
@@ -21,4 +21,4 @@ version: 1.1.1
If you can read this, server02 hasn't crashed (yet) :)
test for webhook
Email always open: admin@voidarc.co.uk (if my mailserver is up, which is sometimes)

194
posts/homelab.norg Normal file

@@ -0,0 +1,194 @@
@document.meta
title: How I fell in with Linux
description: Haha war of the worlds reference
authors: [
Adumh00man
]
categories: [
blog
voidarc
homelab
linux
webdev
]
created: 2026-03-21T19:08:28+00:00
updated: 2026-03-21T21:20:48+0100
draft: false
layout: post
version: 1.1.1
@end
* Some Backstory
Linux is not hard to learn. Some say it is, but they're wrong. If one can be afraid of a terminal, then I must be the bravest man
alive. But, I digress. I began many moons ago, when I was still young, innocent, and didn't know what nix was (what precious days).
I began tinkering with some ubuntu virtual machines, but, since I didn't have much of a use for them, I stuck to the desolate landscape
of windows, which was still pretty good at the time, to be fair. This was in the dark days before windows 11 struck, and before RAM was
£3000. Skip forward, to when I decided that building a PC was a good use of money. I will admit, it was a steal, even for the time. I
was doing LTT challenges before they were even popular, managing to build a fully functional machine for the low low price of about £700.
In what was to be the first issue I had ever faced, I decided to install windows. Foolish.
That is not the point of this story, though. Fast forward again, to when I acquired, through some sheer luck, a relative's old desktop.
It was, of course, a piece of shit. 4gb of ram, something something pentium. It was hardly anything to write home about. The only thing
it had going was that it had a whole terabyte of storage hidden within, a drive I am still using 4 years on, as of writing this, to store
all of my *legally* acquired media (I'll get to it later). I had the brilliant idea of turning it into a server. Of course, instead of
doing anything moderately intelligent, I installed ubuntu on it, and accessed it over rdp for about 6 months. I know, revolutionary.
This was the legendary Server01, or, at least, the first iteration.
** The first iteration
As far as I can remember, I never really hosted anything interesting for the first stretch of time. But, after seeing my success in
hosting a whole docker container (thanks networkchuck), the linux addiction began to take hold of me. I scoured facebook marketplace for
anything that could satiate my urge to fuck about with json more efficiently, until I came across a diamond in the rough. For a measly
£20, an old workstation. An HP Optiplex. Server02.
8 whole gigs of ram, a (moderately) more powerful processor, and even more storage. It was a dream come true. It was around this time,
too, that I installed either Arch or fedora on my laptop (All I can remember is that it had kde and was a laggy piece of shit), which
let me get used to how linux actually functioned. After another banger video from networkchuck, detailing the installation of a
load balancer, the Kemp Loadmaster (I would come to hate that name), I finally had the inspiration to reinstall everything. The second
iteration had begun.
** The one with Alpine
I couldn't tell you why, but I got it in my head that Alpine was the lightest distro. Knowing that my "servers" (if you could call them
that) were basically E waste, I figured that alpine was the obvious choice, because of how fast it was, or something. It probably
had something to do with all the docker images that were based on it, but now I know that they're only based on alpine for the sake of
it. Alpine is a small distro, sure, but for that storage space, you sacrifice basically everything you need to live. I.e. packages.
Even so, I transferred the kemp install over to a usb and began the installation. Both servers kitted out with only the finest alpine
images money could buy. I was quick to install docker and get up basically the only service that I had at the time, plex.
*** Plex
Plex is shit. I can see that now. Before I knew it was proprietary nonsense, I thought it was amazing. Fast, efficient, something
something. Who am I kidding, I saw it on an LTT video and installed it immediately. I can't remember what I even hosted on there,
since it was way before I installed the arr stack or anything like that, but I switched over to my love jellyfin a few months later.
---
Alpine lasted a good while. I was still on windows at this point, and only really used the servers for silly little services. Before that,
though, I need to explain a key part of the plan.
** Humble beginnings
In the same video explaining the load balancer, Chuck showed me how to get a free domain. I needed a name, and voidarc was something I
had recently cooked up for some story or another, so I went with that. As you can see, it stuck. Hence, voidarc.tk was born.
Voidarc.tk was to become a core part of my life. Using the newfound power that came with a commercial load balancer, I began adding
services left, right, and center, services that would follow me, if not uselessly, all the way through until the co.uk days.
Some notable services included: kasm, a remote app gateway, which was fun while it lasted, but altogether useless; portainer, that I
never once used to start a container; code-server, where I ended up doing most of my development and management, just to name a few.
The domain also allowed me to host public minecraft servers, the domains for which had remained unchanged until a few months ago.
A few attempts at making an ssh server over https were made, but none were successful, because I was doing it wrong.
** The linux era
By this time, it was the summer of 2023. My laptop had Arch on it by now, after going through a phase of being a windows machine
because I couldn't figure out intel drivers, and I was on holiday. This was the turning point. This was the holiday that turned me into
a loser. For the first week, nothing interesting happened, the same for the next week. I was using the code server I had set up to
program one thing or another, but the fact I was using linux at all was completely lost on me. It wasn't until that fateful day when
someone I knew on the campsite offered to show me her laptop that it clicked. She brought out a thinkpad, and displayed on the screen
was a linux desktop the likes of which I had never seen before. The linux desktop that would end me.
Hyprland
*** The transition to Linux
I didn't know at the time what hyprland was, or how it worked, but after seeing such a beautiful system with my own mortal eyes, I knew
I wanted in. I researched, coming across sway and some other nonsense I ended up not using. After a trip onto r/unixporn, I realised
that almost everyone was using hyprland. So, I did the same. I decided, still on holiday, to reinstall arch with hyprland and configure
it, so that when I got back home, I could install it on my main machine.
Needless to say, it was a success. In the beginning, there was a lot of stealing configs, as is normal, because I had no idea what I
was doing. The day came, and I erased windows for the final time. Of course, I used archinstall, but it hardly made a difference.
The first day was spent trying to figure out modesetting so that my gpu, a GTX 1070, would work with wayland. I hate kernel options.
A time came when I decided that there needed to be a change, so I installed a complete solution, called hyprdots. This is when I realised
that I hated the idea of bloat. I understood the minimalists! But anyway, this is an article on servers, not other random shit.
I digress.
** The penultimate era: the plateau
I realised eventually that Alpine was also shit. So, in a frantic evening of usb installers, I switched server02 over to arch. And so it
remains to this day. I wouldn't be surprised if there were still remnants of zammad lurking in the /etc folder. Zammad was a ticketing
system that was a royal pain in my ass to install, and broke at every possible convenience. Don't even know why I needed it in the first
place, if I'm honest. Server01 remained on alpine, and still does for some reason, but only because I can't be bothered to wait for my
home assistant image to copy over to a usb. Not happening with usb2 speeds. There was a period where I tried to use proxmox on server01,
since I only wanted vms on there, but it was too slow to bother, so I gave in to the alpine gods and removed that kvm entry. This is where
development began to slow to a halt. I had bought a .co.uk domain by now, and the services I used were now set in stone. I wasn't using
jellyfin daily yet, or really anything. The most I used my servers was to watch some manually downloaded show that I could've just found
online. I believe that that was how I first watched JJK.
I had a download manager in some docker container that I had to manually add torrents to so that they would download. What trying times.
I definitely didn't know what the arr stack was even for back then, but I trudged on. Some other notable services included:
- Portainer (still didn't use it)
- homepage, for a time
- code-server, obviously
- shellinabox, probably the best thing ever made.
And that's where it stayed for a while. I made no changes, and no efforts to remove the invasive nonsense from my life.
How sad :(.
** De-corporatifying
There came a time when I got fed up of the overpriced nonsense shit that the companies of the world believe that we should put up with.
I wanted freedom. I wanted a better life. And more importantly, I didn't want a gruvbox themed system anymore. So, everything changed.
I reinstalled arch on my main pc. I re-themed everything. But, most importantly, I purged a load of shit that I didn't use anymore.
Of course, this made me install a load of other random shit that I used for all of 10 minutes, but that's the price you pay when you
self-host. I stripped down, finally removed portainer, and focused on making my servers as useful as possible. I even automated replacing
the ip on cloudflare when *fucking vodafone* decided it was funny to change it, and not give an option to have a static ip in the first
place. This system is, of course, still in use to this day, and is using node.js and a cron job instead of anything more logical. What
works, works, but I should probably put it on n8n someday.
I started to use Jellyfin more frequently, and get away from shit social media. I uninstalled snapchat, leaving me with only whatsapp and
reddit, which is all the modern man really needs. No more youtube shorts, either. I got into deadmau5, too, but that's unrelated.
It was around this time, too, when I decided hosting a mailserver would be a fun thing to do. It was not, and still barely functions to
this day.
Then, the final evolution occurred.
** The last straw
I was interested in nix when I first heard of it. Imagine, a whole system, version controlled and reproducible on any machine. I know now
that that was a stupid way to think about it, but it began as it always did, with an install on my laptop. It seemed simple enough,
and I got a hyprland system working fairly quickly. Of course, the evolution of my nix config is a whole other -story- article, so I'll
save it for now. I believe once I installed nix, things began to accelerate. I began to wonder what else could be replaced. Could endless
reddit feeds be replaced by something more interesting? Could manual downloads be automated? Could I be free of the mortal coil that is
Microslop and Github? I began to experiment, on the old trusty server02.
** Current era: before proxy manager
It all started when I finally realised what the Arr stack actually was. I had it set up within the day, importing all of my shows. More
importantly, I realised that I could automate downloading the albums that were so hard to source on mobile. It began with deadmau5, then
some other artists, and before long I had a library of 800 songs that I could listen to without the permission of some old man in a
suit (I was using auxio before then, which was open source, but shush). That was the first service that I replaced.
Then came the second. After finishing JJK for the second time, I wanted to see the rest of the story. So, I downloaded the manga. To my
server of course. I fired up komga, and then changed my mind when the other options were worse. I finished the manga in about 2 months,
before moving onto one punch man, and now one piece (which is peak btw). There was another purge somewhere in the middle of all of this,
but I was hardly using any of the other services anyway.
Finally, there was git. Gitea I mean. My github had become bloated with a load of shit that I didn't want anymore, like old dockerfiles
and non-functional web games. So, I left it all behind. Gitea was a breeze to set up, and I only had to change over a few links on
my local repos in order for it to work. At long last, my config was owned by me. Not Microslop.
** The final realisation
Kemp loadmaster had seen me through over 4 years of trials and tribulations. It was tried and true, and worked even better when you added
a * entry in cloudflare. However, it was on one fateful night that I caught sight of a reddit thread complaining about speed on the free tier.
I clicked, apprehensively, and what I saw next truly rocked me to my core.
Free tier bandwidth cap.
I immediately googled it, and to my utter shock, the free tier had always been limited to a measly 20mbps. For reference, I was paying
for 900. No wonder my sites were slow, I thought, no wonder it took 2 minutes to load a word document. I had always attributed the slowness
to the shit cpu in server02. I would never have thought that my oldest ally would betray me so. So, with a heavy heart, and a somber funeral,
I pulled the plug on the kemp loadmaster for the final time.
** Modern day Homelabbing
Nginx proxy manager was something I had tried to use before, but not understood in the slightest. Turns out, before, I was putting the
urls in the wrong boxes. Once I had switched it over, and given my home assistant vm the ram it deserved, I was finally free.
Cryptpad actually loaded! Git pulls were as fast as they were on github! I was free! Thank God, I was free.
Nowadays, I host only what I need on my servers, the latest of which being this Blog site. The only thing I don't use is syncthing, but
you never know when that might come in handy. And yes, I did figure out how to use ssh over cloudflare, and do so on a regular basis.
The Arr stack and jellyfin save me from having to crawl through the pirate bay every time I want to watch a movie, and Cryptpad saves me
from the affront to humanity that is onedrive. The last thing I still have to do is switch to grapheneos, but that would be a whole
article within itself too.
** The moral of the story
If I were a youtuber, I would give some do's and dont's on how to set up your own homelab. But, I'm not a youtuber, or knowledgeable.
So, if I had to give some advice, I would say 2 things:
If you're going to start, then do it. If you wait until you're ready, you'll be waiting for the rest of your life.
And, whatever you do...
Stay the fuck away from Zammad.

167
posts/nix.norg Normal file

@@ -0,0 +1,167 @@
@document.meta
title: Why Nixos is the Coolest Operating System
description: I really like Nix
authors: [
Adumh00man
]
categories: [
linux
blog
nix
]
created: 2026-03-22T19:35:56+00:00
updated: 2026-03-22T19:43:25+0100
draft: false
layout: post
version: 1.1.1
@end
* The Nixos Philosophy
Nix is the all-in-one solution to every problem you could have with Linux. It prides itself on being a fully declarative way to install
and manage system packages, and it (mostly) achieves those goals. Before trying to involve oneself in such matters, though, an important
distinction to make is that Nix, as a concept and a package manager, is separate from Nixos as a system. Nix can, and should, in some
cases, exist outside of Nixos. But, let's get onto that later.
** Origins
If you wanted a full rundown on how nix came about, then you're in the wrong place. Go read wikipedia or something. The main thing that
you need to know is that the nix language, and therefore the packaging system, was made on a whim for a university dissertation. First
of all, that's pretty insane: making an entire programming language for the sake of it. I wouldn't even be surprised if he didn't
get full credit. It quickly caught the attention of all the linux nerds out there, and before long it had evolved into the nix project
that we know and love today.
** Reasoning and Methodology
The main draw of nix as a concept is its reproducibility. It works similarly to, and often in tandem with, Git. The git working tree is a
simple concept: a long ledger of changes (more commonly called diffs) that, when put together, results in a full codebase. Nix uses this
principle to its advantage, and I will demonstrate this by comparing nixpkgs with another common package repo, the AUR.
*** The AUR
The AUR is the Arch User Repository, a gigantic collection of packages uploaded by anyone that can be arsed to figure out how PKGBUILDs
work. PKGBUILDs are a form of makefile, used as a standard way of declaring dependencies, their versions, and how to build
a given app. These PKGBUILDs are stored within the repos they are associated with, and are therefore versioned alongside the app.
When you upload an app to the AUR, and it is approved, a static version of your app, i.e. one commit, is hosted on the aur domain. This
URL is read-only, and cannot be viewed like something hosted on github. To install the app, you clone the repo and use `makepkg -si` to
compile.
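For the uninitiated, a PKGBUILD is really just a bash script full of variables and functions. A stripped-down, entirely made-up one might look like this (`hello-void` doesn't exist; it's only here to show the shape):

@code bash
# A minimal, hypothetical PKGBUILD -- every value here is a placeholder
pkgname=hello-void
pkgver=1.0.0
pkgrel=1
pkgdesc="Toy package demonstrating the PKGBUILD format"
arch=('any')
license=('MIT')
depends=('bash')

package() {
  # makepkg runs this, then tars up $pkgdir into the final package
  install -Dm755 /dev/null "$pkgdir/usr/bin/hello-void"
}
@end

makepkg sources this file, runs `package()`, and produces the archive that `pacman -U` installs.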
The key part of all of that is that only one commit is hosted at a time. Because arch is a "rolling release" distro, they
see no need to waste storage on older versions of apps. This is an issue when there is a breaking change and you cannot roll back
to an older version of a package, because it no longer exists. This is remedied slightly by the pacman cache, but that only goes for
your local machine. If you need a specific version of an app, then either compile from source or give up.
But, I hear you ask, what happened to git versioning? Can't you just roll back to an older commit and use the PKGBUILD from there?
This, my friend, is where Nix comes in.
*** Nixpkgs
Nixpkgs is, without exaggeration, a git repository. Every time you install a package, the entire repo is pulled to your local machine.
Despite the fact that, at the time of writing, it has over 200,000 packages, it is still small enough, because of how diffs work, to
be downloaded every time you need to update an app. The reason for this is that it leverages git histories to their maximum potential.
Git, as I said, is just a long ledger of changes. All diffs are stored in the .git folder, along with some other stuff that might be
important to someone someday. Every time you make a commit, a new hash is generated. It isn't important how the hash is generated,
just that every commit has a unique hash associated with it. This gives rise to revisions. With this hash, you can address a specific
state of the code at that commit, and this is what nix uses to stay so small.
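To make the rev idea concrete, here's a tiny plain-git demo (nothing nix-specific, and the file name is made up): commit a file, change it, then read the old state back through the first commit's hash.

@code bash
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
echo "v1" > app.txt
git add app.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "first"
rev=$(git rev-parse HEAD)      # the hash that pins this exact state
echo "v2" > app.txt
git add app.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "second"
old=$(git show "$rev":app.txt) # address the old state by rev
echo "$old"                    # prints v1
@end

Swap "file" for "package recipe" and that's the whole trick nixpkgs is built on.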
Every package has a .nix file, that details its build process. This can be as simple as compiling one file to a binary, all the
way up to compiling the whole linux kernel. Nix, in and of itself, is a wrapper for every other build system, which means that any app
that can be built can be run on nix. Hence, when enough apps are compiled, nixpkgs becomes completely self sufficient. For an app
to be built with nix, all of its dependencies, and the builder itself, must also be built with nix. This means that, instead of using
all manner of different build systems, and rolling release nonsense that means you can never be sure if you're getting the same
version of a dependency an app was developed with, you simply call another nix file, at a specific rev, or commit.
This leads to what is known as a dependency chain. Because you call every dependency with a specific rev, you can make sure that you
are getting the right version. When you get down to the actual source code, the part that isn't nix, you download the specified rev of the third
party repo, and compile that. If an app compiles once, it will always compile, because the git ledger should never be changed manually.
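In sketch form, one of those .nix recipes pinning an exact upstream rev looks roughly like this. The package, owner, and both hashes are placeholders, not a real derivation:

@code nix
# Hypothetical recipe: "hello-void" and the hashes are placeholders
{ stdenv, fetchFromGitHub }:

stdenv.mkDerivation {
  pname = "hello-void";
  version = "1.0.0";

  src = fetchFromGitHub {
    owner = "someone";
    repo = "hello-void";
    rev = "0123456789abcdef0123456789abcdef01234567"; # the exact commit to build
    hash = "sha256-placeholder";                      # pins the source contents too
  };
}
@end

If that rev builds once, it builds forever, which is the whole point.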
** In practice
Due to the existence of flakes, something that I'll get to later, this is made much easier. No more "it works on my machine", because,
for all intents and purposes, all machines are identical when running nix. There are no outside dependencies, so there is no way
for an app to fail to compile. Nix as a package manager can be used on other systems, being monolithically configured with one file
that can be shared and rebuilt on any other machine with (almost) exactly the same functionality.
Hang on, this sounds awfully familiar. Like a problem that system integrators have been trying to solve for years, but could only come
so close. And so, the quest for a system built with Nix began.
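For the curious, a bare-bones flake (I'll dig into flakes properly another time) might look something like this; the hostname matches mine, but the module path is hypothetical:

@code nix
{
  description = "Bare-bones system flake sketch (module path is made up)";

  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";

  outputs = { self, nixpkgs }: {
    nixosConfigurations.HACKSTATION = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [ ./hosts/pc/configuration.nix ];
    };
  };
}
@end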
** Nixos, the coolest operating system
Nixos is what came of that endeavour. A system fully defined by code. A system that could be rebuilt over and over, with exactly the
same result, on multiple machines. This had the added benefit of the same git methodology that nixpkgs was based on. No more updates
that rendered your computer unusable! You could simply revert back to a previous revision of your system, using the same diff structure
that git uses, but with boot entries and packages instead of just code.
Nixos is the pinnacle of stability. By definition, there can't be a more stable system. By virtue of the fact that it can be rolled back,
it cannot be defeated in that regard. It is impossible to have conflicting packages, because, in theory, all packages can be installed
alongside each other. If you want to install 50 different versions of the linux kernel at once, then go ahead! If you want to run 10
different versions of firefox on the same machine, there's nothing stopping you! You get all of the benefits of rolling release, while
also having the option of installing a 5 year old version of onlyoffice in the same breath.
And the best part? Your system config, in and of itself, can be controlled by git! It's git all the way down! I can instantly have the same
system I did 6 months ago if I wanted to, the only difference being the user files, of course. Ah, the user files.
** Home Manager: An attempt was made
Home manager is the logical extension of Nixos. A way to control dotfiles, or really any file, through nix. Revisions, versioning, etc.
It seems like a good idea on paper, until you try to use it.
First of all, the way that home manager works is exactly the same as the nix store. It creates a load of read only folders, and then
generates the immutable config files within. Seems reasonable, why would you want to change them in the first place? Two words: Lock
File. Nvim is the main culprit when it comes to issues like this. The Lazy.lock file can't update, because presumably the entire
`.config/nvim` folder was created by home-manager, meaning that updating packages is nearly impossible. All home manager leads to is a
massive headache at the end of the day. Especially when you come across some unsupported app, where you have to paste in the non-nix
config so that home-manager can copy and paste it into a read only file somewhere else in your home directory.
Because of these issues, I have taken a different approach, which I will now go on a tangent about.
** The Better way to manage dotfiles
Stow is an old GNU utility. It calls itself a "symlink farm", which is fancy speak for "it takes files in a directory and links them to
another place". The idea is that you have a folder with all of your dotfiles in, the ones that you actually care about, and stow links
them to your actual .config directory. I used this for a while, in order to manage my hyprland and nvim configs more nicely.
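A typical invocation looks something like this (the package layout is made up, but it shows how stow expects the repo to mirror your home directory):

@code bash
# hypothetical layout: ~/dotfiles/nvim/.config/nvim/init.lua
cd ~/dotfiles
stow --target="$HOME" nvim   # links ~/.config/nvim to the repo's copy
@end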
Due to the fact it's a GNU utility, however, it's really old, and doesn't have many nice features. It was never intended to be a dotfiles
manager, so it didn't have the killer feature that I would need to make a proper nix config: multiple host support.
I have 2 machines that I have nix on. My laptop and my desktop. I want a similar experience between the two, but some things I want to be
different. Nixos thought of this, and made it so that you could have different outputs in a flake that represented different systems.
But what about all my other nonsense? My monitor names are different between the two machines, and I had different keyboard styles, so
I wanted different modifiers too! This could be remedied by loads of scripts, but that was boring. I needed something new. Something...
Intuitive.
** Doot: The Fast, Intuitive dotfiles manager
[Doot]{https://github.com/pol-rivero/doot} is my choice for a dotfiles manager. It's fast(ish), configurable, and, most importantly,
was built with multiple hosts in mind. If you want to see how it works more closely, click the link at the start of this paragraph,
or have a look at my [Dotfiles]{https://git.voidarc.co.uk/voidarc/config}, which I think are pretty cool. For those of you that can't be
arsed to have a look, though, I'll explain the concept as simply as I can be bothered.
Your doot repo is located in the `.dotfiles` directory, next to .config. When you run `doot install`, the folders in your repo are mapped
to your home directory in the same structure as they were in the repo. E.g., if you put a .zshrc in the root of the repo, it will end up
in the right place. So far, this is exactly the same as stow, the only difference being where stow would link a whole folder, doot
creates the structure in place and only symlinks files.
Doot can be configured in the doot folder, in the root of the repo, which will not be mapped to the home directory. The doot config
contains things like exclude files (like .gitignores and licences), the diff viewer you want to use to see file changes, and your
different hosts. Doot manages hosts intelligently, using the hostname of the machine as an indicator. Here is my host config:
@code toml
[hosts]
"HACKSTATION" = "pc-files"
"mobile02" = "laptop-files"
@end
The keys on the left correspond to the different hostnames, and the values on the right correspond to the folders that contain the
dotfiles specific to that machine. These folders are located in the root of the repo, and within them is another structure
that is relative to your home directory. E.g., in the pc-files directory, there is also a `.config` directory, with some specific hyprland
configs that only work on my PC.
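To show what that mapping actually does on disk, here's a rough imitation of the install step using plain coreutils. This is not doot itself, just the symlink behaviour it automates, with a made-up monitor config:

@code bash
set -e
home=$(mktemp -d)                     # stand-in for $HOME
repo="$home/.dotfiles"
mkdir -p "$repo/pc-files/.config/hypr"
echo "monitor=DP-1,2560x1440@144" > "$repo/pc-files/.config/hypr/monitors.conf"

host_dir="pc-files"                   # doot picks this from [hosts] via the hostname
mkdir -p "$home/.config/hypr"         # directories are recreated in place...
ln -s "$repo/$host_dir/.config/hypr/monitors.conf" \
      "$home/.config/hypr/monitors.conf"   # ...and only files are symlinked

cat "$home/.config/hypr/monitors.conf"
@end

Edit the file on either end and both see the change, since it's one file behind a link.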
** What was I talking about again?
You could, of course, version every single file on your disk using this method, but that sounds like a waste of time. There will never
be a world where 100% of a system is always generated by nix, because there will always be some random app that only compiles to an
appimage and uses proprietary config locations or something. Nevermind how you would manage something like a download folder.
I believe that Nix and Nixpkgs are the gold standard when it comes to software development. There are genuine, real world uses for stuff
like this. Sites like replit are fully managed and distributed using nix, because of how stupidly stable it is. You can define whole
Kubernetes clusters on nix, and then update them in place with 0 downtime. Pretty impressive. However, as tends to happen with nerds
on the internet (me included), the concept and utility that comes along with something like nix is buried under who can make the lightest
or best looking config.
** Moral of the story (or something)
Nix isn't for everyone. If you don't care about versioning everything in your entire life, then you're probably better off sticking with
something like arch. If you don't mind that you have slightly different configs across different machines, then there's no need for you
to learn how to use flakes so that you can make sure that you have identical packages on all of your systems.
But, it sure is fun to mess around with. If you're on the fence about trying nix, then this is your indicator to switch. It will make
you feel a great deal of things, but regret will not be one of those feelings.


@@ -5,14 +5,14 @@ authors: [
Adumh00man
]
categories: [
Blog
Neorg
Nvim
Voidarc
Webdev
blog
neorg
nvim
voidarc
webdev
]
created: 2026-03-21T11:39:17+00:00
updated: 2026-03-21T13:41:48+0100
updated: 2026-03-21T16:07:21+0100
draft: false
layout: post
version: 1.1.1
@@ -100,3 +100,61 @@ version: 1.1.1
Said service would be a simple nginx docker container on some random port, that serves whatever it is asked for.
In order for such a service to be viable, I don't want to have to manage it manually. Instead, I can use a webhook to pull the html
repo whenever there is a new commit. This could also be on a timer, but instant gratification is better.
Unfortunately, nothing ever goes to plan when linux is involved. The first indicator that my system wasn't going to work out was the fact
that when you rebuilt the site, the repo within the public folder was erased. This is nonsense, and I have already opened an issue so
that when it's fixed I have an excuse to remake the whole system.
*** Nginx Tangent
Nginx config is at the same time overly-complex and too basic.
It's like python. It looks easy on the surface, but when you get into it, does things in the most roundabout way possible.
Complaints about python aside, nginx is a nightmare to deal with on a regular basis. This is exactly why there are an infinite number
of management solutions for it, one of which I am literally using to host this site (shoutout nginx proxy manager my love).
Unfortunately not all of these solutions can host files at the raw level like I intend to, so here we are, once again, having
to waste hours figuring out why the css is being denied.
For future reference, the nginx config I'm using is as follows:
@code nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    root /config/www/public;
    index index.html index.htm index.php;

    location / {
        try_files $uri $uri/ =404;
        autoindex on;
        autoindex_exact_size off; # Displays file sizes in KB/MB/GB instead of bytes
        autoindex_format html;    # The standard web view
        autoindex_localtime on;   # Uses the server's local time for file dates
    }
}
@end
To any experienced web developer, such as not myself, this would look like a war crime. Because it is.
This is 100% susceptible to every form of exploit one can think of. However, because I'm running this through nginx proxy manager,
I don't have to worry about such trivial things as "security" and "ssl". Everything is managed in a different docker container!
Thanks NPM.
*** Git tangent
Earlier, I said I wanted this to be automatic. And currently, it is. So, how?
I host my own shit. That's everything. Including Git.
Gitea is the least complex option that everyone seems to like enough to tolerate. It also has webhooks. And I have n8n. N8n is an
automation engine that uses flows to achieve what can be done with basic intuition in a systemd service, or an assistant. Sadly, I
have neither the time nor money to hire someone to update git repositories for me, so I guess we're doing it with bullshit.
Gitea has the ability to trigger webhooks on a push event to a repo, and n8n can trigger commands when a webhook is triggered.
This has the side effect of letting me have a repo that, if pushed to, can trigger a webhook that opens firefox on my pc. As you can
tell, I shouldn't have permission to use automation tools. By hooking (no pun intended) n8n into gitea, I can run a command over ssh
every time the repo is pushed to, so the changes are pulled to the server automatically. The only caveat is that you can't minify
the build output, because if you do, any change will rewrite the whole file, which leads to truly insane git diffs. Such is git.
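The whole flow bottoms out in one command, roughly like this. The ssh user and container name are placeholders on my part, since the exact names don't matter:

@code bash
# roughly what n8n runs when the gitea webhook fires (names are placeholders)
ssh deploy@server02 \
  'docker exec site-nginx git -C /config/www/public pull --ff-only'
@end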
** The Final Result
Everything starts on my local machine. There is a folder in my Documents that contains the repo for the site. This is also set in my nvim
config as a Neorg workspace, so that I can make links between files, if I so wished. When I've written a post, I can build the site
on my local machine, an endeavour that takes a tenth of a second, and commit, with such entertaining messages as "initial" and "changes".
When said commit is pushed to my gitea repo, a webhook is triggered, telling n8n to ssh into server02 and pull the repo into the
www folder in the nginx docker container. Because nginx isn't 50 years old, all files update on the fly, and the new article appears
on the public facing instance.
See? web development is simple.