
Initial commit

Branch: dev | Yarmo Mackenbach, 3 months ago | commit d3bb84446f
106 changed files with 5024 additions and 0 deletions
1. .drone.yml (+61, -0)
2. .gitignore (+3, -0)
3. .htaccess (+19, -0)
4. 9F0048AC0B23301E1F77E994909F6BD6F80F485D.asc (+49, -0)
5. BIN
6. composer.json (+15, -0)
7. composer.lock (+1023, -0)
8. content/blog/2019-12-08--phd-post-mortem.md (+87, -0)
9. content/blog/2020-01-05--homelab-overview.md (+31, -0)
10. content/blog/2020-02-18--email-dns.md (+92, -0)
11. content/blog/2020-03-20--make-it-on-fediverse.md (+35, -0)
12. content/blog/2020-04-30--peoples-web.md (+19, -0)
13. content/blog/2020-05-02--selfhost-email.md (+25, -0)
14. content/blog/2020-05-03--selfhost-email-drawbacks.md (+37, -0)
15. content/blog/2020-05-09--traefik-migration.md (+45, -0)
16. content/blog/2020-05-10--pihole.md (+49, -0)
17. content/blog/2020-05-16--dcvs-proposal.md (+96, -0)
18. content/blog/2020-05-17--ai-vs-human.md (+33, -0)
19. content/blog/2020-05-27--textbook-eee.md (+52, -0)
20. content/blog/2020-06-05--missing-entropy.md (+78, -0)
21. content/blog/2020-06-05--website-load-performance.md (+122, -0)
22. content/drafts/2020-02-11--coding-blog.md (+31, -0)
23. content/drafts/2020-03-05--plaintext-journey.md (+13, -0)
24. content/drafts/2020-04-22--taking-back-control-of-digital-life.md (+27, -0)
25. content/drafts/2020-05-02--indieauth-pgp.md (+15, -0)
26. content/drafts/2020-05-06--web-designed-for-you.md (+62, -0)
27. content/foss/foss.yaml (+36, -0)
28. BIN
29. BIN
30. BIN
31. BIN
32. BIN
33. BIN
34. BIN
35. BIN
36. BIN
37. BIN
38. BIN
39. BIN
40. BIN
41. BIN
42. BIN
43. BIN
44. BIN
45. BIN
46. BIN
47. BIN
48. BIN
49. BIN
50. BIN
51. content/notes/2020-04-25--100-days-to-offload.md (+23, -0)
52. content/notes/2020-04-26--gaming.md (+19, -0)
53. content/notes/2020-04-27--pc-build.md (+33, -0)
54. content/notes/2020-04-29--missed-a-day.md (+17, -0)
55. content/notes/2020-04-29--typography-ellipsis.md (+17, -0)
56. content/notes/2020-05-01--icann-rejects-sale-org.md (+19, -0)
57. content/notes/2020-05-04--break-from-raid.md (+18, -0)
58. content/notes/2020-05-05--varken.md (+15, -0)
59. content/notes/2020-05-06--search-engine-indexing.md (+17, -0)
60. content/notes/2020-05-07--homelab-crashed.md (+13, -0)
61. content/notes/2020-05-08--deletekeybase.md (+13, -0)
62. content/notes/2020-05-12--notes-section.md (+31, -0)
63. content/notes/2020-05-18--mailvelope.md (+26, -0)
64. content/notes/2020-05-19--bf1-revival.md (+21, -0)
65. content/notes/2020-05-21--smh.md (+13, -0)
66. content/notes/2020-05-22--lunasea.md (+19, -0)
67. content/notes/2020-05-23--projects-section.md (+15, -0)
68. content/notes/2020-05-25--ending-100-days-to-offload.md (+41, -0)
69. content/notes/2020-06-01--invidious.md (+33, -0)
70. content/notes/2020-06-08--nuc-fan-cleaning.md (+18, -0)
71. content/notes/2020-06-10--avatar.md (+15, -0)
72. content/notes/2020-06-11--plausible-start.md (+22, -0)
73. content/projects/git4db.md (+27, -0)
74. content/projects/smdl.md (+21, -0)
75. content/vinyl/vinyl.yaml (+64, -0)
76. cv/index.html (+328, -0)
77. BIN
78. functions.php (+195, -0)
79. index.php (+288, -0)
80. scripts/minifyCSS.php (+19, -0)
81. static/dank-mono.css (+78, -0)
82. static/img/github.svg (+1, -0)
83. static/img/gitlab.svg (+1, -0)
84. static/img/gnuprivacyguard.svg (+1, -0)
85. static/img/mail.svg (+1, -0)
86. static/img/mastodon.svg (+1, -0)
87. BIN
88. BIN
89. BIN
90. static/img/rss.svg (+1, -0)
91. static/img/xmpp.svg (+1, -0)
92. static/norm.css (+379, -0)
93. static/style.css (+358, -0)
94. views/404.pug (+8, -0)
95. views/blog.pug (+24, -0)
96. views/blog_post.pug (+13, -0)
97. views/contact.pug (+151, -0)
98. views/foss.pug (+41, -0)
99. views/index.pug (+223, -0)
100. views/layout.pug (+29, -0)

.drone.yml (+61, -0)

@@ -0,0 +1,61 @@
---
kind: pipeline
type: docker
name: deploy dev

steps:
  - name: composer
    image: composer
    commands:
      - composer install
      - composer run-script minifyCSS

  - name: rsync to prism
    image: drillster/drone-rsync
    settings:
      hosts:
        from_secret: ssh_host
      port:
        from_secret: ssh_port
      user:
        from_secret: ssh_user
      key:
        from_secret: ssh_key
      source: ./
      target: ~/web/dev.yarmo.eu
      exclude: [ ".git/", ".gitignore", ".drone.yml", "composer.json", "composer.lock" ]

trigger:
  branch:
    - dev
---
kind: pipeline
type: docker
name: deploy prod

steps:
  - name: composer
    image: composer
    commands:
      - composer install
      - composer run-script minifyCSS

  - name: rsync to prism
    image: drillster/drone-rsync
    settings:
      hosts:
        from_secret: ssh_host
      port:
        from_secret: ssh_port
      user:
        from_secret: ssh_user
      key:
        from_secret: ssh_key
      source: ./
      target: ~/web/yarmo.eu
      exclude: [ ".git/", ".gitignore", ".drone.yml", "composer.json", "composer.lock" ]

trigger:
  branch:
    - master

.gitignore (+3, -0)

@@ -0,0 +1,3 @@
.well-known
vendor
cache

.htaccess (+19, -0)

@@ -0,0 +1,19 @@
RewriteEngine on
Options +FollowSymLinks
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule . index.php [L]
RewriteCond %{HTTP_HOST} ^www.yarmo.eu [NC]
RewriteRule ^(.*)$ https://yarmo.eu/$1 [L,R=301]
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
RewriteCond %{HTTP_HOST} ^blog\.yarmo\.eu
RewriteRule ^(.*)$ https://yarmo.eu/blog [R=301]
RewriteCond %{HTTP_HOST} ^vinyl\.yarmo\.eu
RewriteRule ^(.*)$ https://yarmo.eu/vinyl [R=301]
Redirect 301 /.well-known/openpgpkey/hu/bqx3ddb8nkcmfngfpc4fcq3cwuo3w7hr /.well-known/openpgpkey/hu/9F0048AC0B23301E1F77E994909F6BD6F80F485D.gpg

9F0048AC0B23301E1F77E994909F6BD6F80F485D.asc (+49, -0)

@@ -0,0 +1,49 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBF0mIsIBEADacleiyiV+z6FIunvLWrO6ZETxGNVpqM+WbBQKdW1BVrJBBolg
m8dDOFGDVIRPWNQxhflghKZa2uTudndQVg8tFozRUW0aAzx8Q9na95hW9Ni4p3vI
BdzE7JkyPbySMA0BPgfjXu69rudNMedx98AvUPBye1WdRb09o9JXD2kk86NDWLwh
F0NSRiEB40yjloEDVV+KcZrelWBx5PDXa5sYjJeLENJVDBhg9exrl42jCkKw10UC
azool3R1ULfoid56cRA9WY/uzTY8nymZ+sTaNrj1Ff1cz9KAQwRnsueHM2QbYXcW
uJVr8D+PLbLqpSywMrgL5lbtX6WvUc4R70SUOlIuPviWo9sgvr8NBtBw4FuqKyip
CS9K0xxYXXzZyXuR+seoaxgecTaijaB96lo7Xe4z+O0FSgI24Ec0QLMGzNtEsPHL
Im+dOfaaptRA1zrArMX3gyKmzWDscsTsxJTH2Ggg20SrHT9qXlGMQ6eyzqFb3dSd
FzZhKUw1at5sZoTXDeWkILYpvEEzlxYoAWY3Y9w8lnN3cTfvgAXU48gmY6gytr+h
4ca33rfViLbosbGXXO+X1FSb3kdbG8miDF1fSbzWXzTxOuoOBDEt6P86s0kZWfXX
NZ/Z0+XEQ7xAu3EIeY6DDKUyzr+bCY86tu2nSBQOCfA/JvcY5NT6qPzyZQARAQAB
tCFZYXJtbyBNYWNrZW5iYWNoIDx5YXJtb0B5YXJtby5ldT6JAk4EEwEIADgWIQSf
AEisCyMwHh936ZSQn2vW+A9IXQUCXSYiwgIbAwULCQgHAgYVCgkICwIEFgIDAQIe
AQIXgAAKCRCQn2vW+A9IXQBaD/0TFJ93t/lTNmjfaQo+8oF73MHmFjAsxIyE1anm
wzicfEpfeLZtR4+n8G2HtnXBjNk94HIw19RuoLkAIhXPu2ssYLmCXUTgbeA6Qo1C
L6wCMoXip1yNUMfpw+OgUKIvHRQSjhy920CRM4Lbf4jUQySuMh50tpipLMcif66y
9tTYoOcQYOlSxdBue/8XbhTT0DiC8kgvUznBlbuNnOzfvKeAFX9vu2qgyaBd0u9D
VpaKEfSOKDeW1K989XqB4eepWmb9fPat9JyKTE34c+dTVpIG64+GOMlhTx2RfwW5
K8h2ghUHInjxn+rd/YfuI8K3XKu//eelkDNMxCQ3O91AF5CZOrpue9Vle0rArE/a
/RrkaQQGdquV0fD5iFzbieJnR+Lf421lnMCdk4Br6tmtmAvx5NtaJXc7g5o5Gdfu
GVEX+hKAADQU299vIJYz3tv2Ey5HhyyYV0K9YkpkWHoRV481aZMnkXz7T2Bc5Gyr
saMFPTDJtHvSSzBx1G6Xmg2AoSMY0MoiaP151LtyyjIynOre3XsbL5Stv30bN/vV
pd6rNKo80/jHNa6TPGQkNwwuZn7zns/st6jiW4NjRPzu5NvF6nQJ/gFfjBzGl7o5
Mddz2O/21hIaZUE3JEq8T2iV4/XKYuCk/E0LYOTTMeR00JnJRtK7drk41N7uKv6w
tTPYH7kBjQRdJiLCAQwA3eshHodNGoOhV9y4tCCffpgvh4uvngCRN5ZBqvdgqKfj
TCaGYIzUhnFkzZRFPUERWlVsdHoytYidRe40mBPj7nlvtq/0V/XtbHuuFJWjh/zR
gVvNyfIpqXIEVixFQ8Ee0YXttOvSu7ukCb38kl3Jec6PEfadNRMhDUtL3CuZpBPY
WbRehgWrOAvunwX84J2b/IfNxA5JLUA1apafWH69ecLy0LEtIQk2fYpBEuBrR4ls
WJ1SIb6juVmtkzCYt7rrmZWbJ376J5DtFlTbjgN8oRtJQt/P8g86tSbW2uvYG3ut
cFzGPWx6yXScjpKtz14QmCbaPD8zVv/CGw6debBpqvQKEn7D6B3Pp5xu7b7mQBKs
jQelXh0pcy7VIAc6d7GcF3N/L4uQgVXvp/+04vYMwq88O9hlnltCRSZAvve65kK6
roeHi98t/Pq4n9RoDvubhY5/Vy1XmxBKmtfJjJVcTEM3dZ+a6b3HNwW9dGC4sOZu
T4chjcxh2PMAd1mYJcD1ABEBAAGJAjYEGAEIACAWIQSfAEisCyMwHh936ZSQn2vW
+A9IXQUCXSYiwgIbDAAKCRCQn2vW+A9IXVPSD/4ycKw5vJiUY6JTyj3vhtzhIrtZ
fQpbJ78I8UhALOqmNwcQfW1g/Bz4Qz2ia5lSRDkg+jHqCrkhJXfg+Hjg1G3yj3hV
ModNJT1D4EdkAdPq3CzumkTWbsID70fb8KxvbJ7gI3FplgtuBVeK0Cv+bTuMYKht
wMn2GcTCgyV8Ub7FV30oaKeAjLehbMf2/5lqbRq+l1jQQiGOMKoUdRiEj5Qyt5pK
LWp3XMd8R6gIYmMrs+mPufrxGHXqGk7V740Hp4+HjK9dBcSaRItFCOnfJ/o7NKvQ
ke4g/Eo+tphxt9fYsdUuymVM8zY7qf2eiM3q6uirc1KGylolRy/QB8J0RdXBMEnz
cTQzeRyxnoGsAXsMFNOv3MumHdzW0IF5E8m66rMQB28zdFg1HZHYvV0suS7VRhxB
d9/DQqGV5jGjZS4NyULhVg6pjuhzFq3IxY+fn6KlJOGXCeVvHH96EMqZKibjxNDl
BOfIQqA5t3iMomx1F0p6npxGz1xZLXVfU6cgy2tXwQZd4UXw1l2Wl06nV9J7LjkV
cV4IeuFb8OP9UZDayv8e9NnX2BMp8DpCoK9Z5oeRAbb+mKEOyLBITJtsc6I/0kl/
GjvKql9hjkSSj9HVR0L2IngnpuBgqaGK8r/Z9Mn9Eh+f/x2bnr8rrHFudscKYIPf
T5daUrQE1WR8yJibrA==
=rabO
-----END PGP PUBLIC KEY BLOCK-----

composer.json (+15, -0)

@@ -0,0 +1,15 @@
{
    "require": {
        "pug-php/pug": "^3.4",
        "altorouter/altorouter": "^2.0",
        "pagerange/metaparsedown": "^1.0",
        "bhaktaraz/php-rss-generator": "dev-master",
        "symfony/yaml": "^5.1",
        "tubalmartin/cssmin": "^4.1"
    },
    "scripts": {
        "minifyCSS": [
            "php scripts/minifyCSS.php"
        ]
    }
}
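For context: `composer run-script minifyCSS` runs `scripts/minifyCSS.php`, which is added later in this commit but whose contents are not shown here. A minimal sketch of what such a script could look like, assuming it simply feeds the stylesheets in `static/` through the `tubalmartin/cssmin` package required above (the glob pattern and output naming are assumptions, not the actual file):
```
<?php
// Hypothetical sketch, not the actual scripts/minifyCSS.php from this commit.
require __DIR__ . '/../vendor/autoload.php';

use tubalmartin\CssMin\Minifier;

$minifier = new Minifier();

// Minify every stylesheet in static/ and write the result next to it.
foreach (glob(__DIR__ . '/../static/*.css') as $file) {
    $minified = $minifier->run(file_get_contents($file));
    file_put_contents(preg_replace('/\.css$/', '.min.css', $file), $minified);
    echo 'Minified ' . basename($file) . PHP_EOL;
}
```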

composer.lock (+1023, -0): file diff suppressed because it is too large

content/blog/2019-12-08--phd-post-mortem.md (+87, -0)

@@ -0,0 +1,87 @@
---
title: A PhD Post-Mortem
author: Yarmo Mackenbach
slug: phd-post-mortem
date: "2019-12-08 21:02:14"
published: true
---
This is one of those stories that starts with an ending.
As of January 1st 2020, a new challenge awaits me, a new life, because my university/academic journey will be complete. In 2010, I set out on a path that would lead me from a biology bachelor's degree to a neuroscience master's degree and would culminate in a four-year PhD program, resulting in a thesis and doctorate degree to end the journey with a bang. From there, the world would have been my oyster.
That journey was traveled as planned.
Except.
After investing a little over four years in my PhD project, I must end this leg of the trip without achieving its ultimate goal, obtaining the doctorate degree, leaving the past nine years open-ended, unrewarded, uncelebrated.
<!--more-->
This post is not vindictive in nature. In the two weeks since making my decision to end my PhD project, mere weeks before the end of the contract, I have had plenty of time to come to terms with the circumstances, to accept what has happened. At the end of the day, I burdened myself with the responsibility of taking on a PhD project; therefore, the eventual outcome of said project, however positive or negative, is the product of my actions and my actions alone. Any setback can be met with a positive attitude and forward thinking.
This post is also not a cry for attention. That is not who I am. As a matter of fact, the old pre-PhD me would not have written this post. As hard as I try, I find it difficult to figure out what pre-PhD me would have done in this situation, nor will I ever be able to, as I've changed. I have changed in ways I could not have predicted four years ago.
This post is ultimately about opening the discussion on one of the big topics people within higher education do not like to talk about: mental health.
I want to talk about mental health. Though not as severe as with other people I've talked to, I now have first-hand experience in dealing with a sinking ship and have felt the psychological toll that it takes. I can no longer look at fellow graduate students without wondering: are they suffering?
This is my call to action.
## The first years
The project started off pretty slowly, a lot of administrative tasks impeded experimentation. I spent time familiarizing myself with the environment, learned the workings of the existing codebase and other similar tasks. The first major setback was a slow cooker: the protocol describing the activities I was supposed to do needed to be approved by internal committees. This. Took. So. Long. If I had known then that the approval would come after over two and a half full years...
While waiting for approval of my protocol, I spent time learning the experiments, repeating them over and over so that when it would matter, I could eliminate myself as a factor of uncertainty. This process was far less innocent than it seemed. While my protocol was set in a scientific frame and had set goals, the experiments I was performing at first had no higher purpose other than to serve as practice. As I grew more comfortable with the experiment and the protocol approval continued to be delayed, I, along with my supervisors, started looking at small results and enjoying minor victories. Sure, I wasn't yet allowed to do what I desperately wanted to, but a number of experiments showed an unexpected and promising effect, and I spent a few more experiments trying to understand it.
At the time, this felt exciting. I discovered stuff!
That's not what happened. These findings were not sought, they were stumbled upon. And after a couple more experiments, all failing to further our understanding of the observations, the ideas were dropped as fast as a new effect was found and more time was sunk into trying to figure that one out.
And after that observation led to nothing, there was another one.
And another one.
I failed to see what was happening: I was chasing shiny things. The reason there is a protocol is to keep you focused. It provides a framework in which you work. It asks a question that will be satisfied by any answer, as long as this answer is obtained using the methodology prescribed by the protocol.
The solution would have been simple: write a different protocol with a simpler question. Set the framework. Gather the evidence.
I did not have this wisdom.
## The latter years
I remember the day I received the email approving the protocol. I am not able to describe the feeling of futility that overcame me. There I was, now allowed to do the things I came here to do, but knowing full well there was no longer time to set up the experiments. My project was set to answer a forty-year-old question. Instead, I was chasing new shiny things, grasping for any finding that could provide meaning to my presence.
For a while, the project seemed on track: I spent almost a year and a half investigating an interesting effect I observed, a shiny new thing more promising than all that came before. Yet, a shiny new thing nonetheless. Luck struck again: the interesting effect turned out to be an experimental artifact, and though I have been able to 100% confirm this, those nine months' worth of data were thrown in the bin.
As a little disclaimer: we had been suspicious it could have been an artifact but there was, and still is, no proper way to test this.
Anyhow, in the third year, amidst all of this chaos, the effects of stress and impending doom were starting to take their toll. Fun routines became less fun. Joyful events inspired less joy. I became more isolated, first in the working environment, later in my personal life.
It was during one of the more difficult periods of my PhD project that the happiest event happened: meeting my girlfriend. I felt particularly down in the days leading up to when we bumped into each other, and meeting her had an immediate positive effect on me. It gave me a reason to wake up and do something, it gave me the spirit to keep the fight going, finish the project and claim the reward.
It was only a matter of time before stress caught up again. Experiments failed, stuff got delayed. The question shifted from "what is needed to finish this project" to "is there even a way to finish it at all". The last year was a race against the clock and against the requirements to hand in a thesis. Published articles were needed. More experiments were needed. Did we just throw away nine months of data? Cool, let's replace it with something new. Frustrations between my supervisors and me grew and created more stress. Weekends were spent sitting on the couch and were adjusted to avoid all mental activity. Hobbies vanished. I was no longer the same cheerful person to be around, and though I will always admire the strength of my then-girlfriend to put up with what she put up with, that relationship unfortunately also did not outlast the strain the project put on me.
A little over a month before the end of my contract, after four years and two months of work, I had to call it. The project was dead. There were still escape routes to make something out of the project but I had to decline. My body declined. My brain declined.
This was about two weeks ago, and I am only just writing this now as I've spent almost the entire time incapable of thinking. I had to accept what was happening, I had to make peace with the fact that I spent my entire being on this project, it had cost me happiness, laughter and a relationship, and all in all, it was just to quit right before the end.
## Science and me
The scientific world is not for me. Perhaps a different project could have stimulated me in a better way and inspired me to become the scientist I always dreamed of becoming. But somewhere in the last four years, I realized this was not the dream for me. Knowing that obtaining a PhD was no longer vital to my career, I persisted in my efforts to finish the job, as it was my way of concluding my nine years of studying biology, neuroscience and how to become a scientist in a satisfactory manner.
## Mental health and me
I wanted to finish the PhD just for me, just for my own satisfaction but in the end, I too was the reason I could not. I was becoming more susceptible to the seasonal illnesses every year, the first few days of holidays were spent working through a persistent headache. I no longer spent time on my hobbies and less and less time behind my piano. Programming, my true passion, had become a chore. And perhaps worst of all, the love for science was gone.
## Final words
So, there it is. Ending in a few more weeks, I am now working to leave the project in a suitable state as well as writing to publish one scientific article. I tried to avoid thinking too much about the future as it would only distract me from my already attention-demanding present, but the time for planning has now come. Leaving a whole lot behind, I have so much to gain.
But I will not simply forget about it all. I can not. The environment I worked in during the last few years did not inspire me to come forward about what was happening in my head. Much like a real-life Instagram, the scientific community celebrates success and disguises, or even denies, failure. Words embellish mistakes and misfortune. Posters filled with colorful graphs hide the hardships and perils experienced by an entire generation of upcoming scientists trying to make it in a world that does not welcome them. It inspires fraudulent behavior. The stories I've heard, from PhDs to PIs, from students to technicians.
Come forward. Talk, and you will find people to listen. I know there is an entire population out there of people suffering through their academic career. You are not alone. Let's discuss this. I would like to talk to you.
\#mentalhealth
The fediverse is a social network promoting free speech and provides a safe environment to find people in similar situations and have meaningful conversations. You'll find me there, [@yarmo@fosstodon.org](https://fosstodon.org/@yarmo). Let's talk.

content/blog/2020-01-05--homelab-overview.md (+31, -0)

@@ -0,0 +1,31 @@
---
title: "IMPUC #1 &middot; Homelab overview"
author: Yarmo Mackenbach
slug: homelab-overview
date: "2020-01-05 09:32:21"
published: true
---
"In My Particular Use Case" (or IMPUC) is a series of short posts describing how I setup my personal homelab, what worked, what failed and what techniques I eventually was able to transfer to an academic setting for my PhD work.
<!--more-->
## Why a homelab?
I started my homelab about a year after I started my PhD. My academic work was challenging in a technical way, with new data generated every day, managing raw data, processed data, metadata. I built a number of tools that would aid me on a daily basis in my work, but I needed a place to just try out every technology I could possibly need for my job. It eventually turned out that the homelab was destined to do far greater things than simply serve as a testbed, but that's how it started and what provided me with the knowledge and experience to solve important issues in my academic work.
## The central server
So one day, I ordered myself an Intel NUC with a 5th-generation i3 processor, 8 GB of RAM and an M.2 drive, and got started. Container solutions caught my attention before I even had the machine, so I first installed docker and, later, docker-compose. This setup hasn't changed a bit to this day, as it still allows me to launch new services very easily by changing a single yaml file with minimal impact on the hosting machine. The first things I installed were several databases and gitea, a self-hosted git service. The services sit behind a reverse proxy (traefik) to allow them to be accessed by using (sub)domains. Configuration of the machine is managed by a folder of dotfiles backed up in a git repo and `stow`ed as necessary, but I am currently looking into ansible for this purpose. A 4-bay JBOD USB3 device provides the storage that the NUC then also (partly) makes available over the local network via SMB.
## The peripheral Pi's
Floating around the central server are several Raspberry Pis. Back when I first started, the central server would sometimes crash or soft-lock and since my entire system monitoring stack (telegraf+influxdb+grafana) was also installed on there, there was not a whole lot of investigating and fixing I could immediately do. Now, the central server and the Pis all run telegraf, and a single Pi hosts the influxdb+grafana stack and only that. Another Pi is acting as a media center (Kodi) and finally, two redundant Pis function as DNS forwarders (Pi-Hole), one of which also hosts my VPN solution (wireguard).
## The out-of-house computing
I have two permanent VPSes running: a website server (Cloudways) and a mail server (mailcow). Both could be hosted on the central server, but as long as I cannot guarantee a perfectly stable Internet connection (which my house does not have) or stable computing (a personal budget issue), I choose to host these outside of the house.
## Final words
Thanks for reading this; more posts will come soon explaining some of the elements described above in more depth. If you have questions, you can find several ways to contact me on [yarmo.eu](https://yarmo.eu).

content/blog/2020-02-18--email-dns.md (+92, -0)

@@ -0,0 +1,92 @@
---
title: "Selfhosted email: DNS records"
author: Yarmo Mackenbach
slug: email-dns
date: "2020-02-18 18:39:54"
published: true
---
When selfhosting email, an essential element to get right is the DNS records. Some are absolutely mandatory for email to work, some build trust and some just make life easier. Here's an overview of how I set up DNS for my personal mail server.
<!--more-->
## My setup
I have a VPS running [mailcow](https://mailcow.email/) and two domains: one linking to the mail server, admin page and the web client (let's call this `mail.server.domain`), and the other just being an email domain used in the email address (let's call this `public.domain`, so the email address would be `hi@public.domain`). This way, even if you know the email domain, you don't directly know the server domain. Granted, one could find this out by looking at a few DNS records.
The benefit is that I can host email for multiple domains, as long as they all point to `mail.server.domain` by using the correct DNS records.
Please note that using a single domain is just as easy, as `mail.server.domain` and `public.domain` will simply be the same. Another scenario for which you could use the 2-domain setup is when you want your email address to be `hi@public.domain` (without subdomain) but wish to put the mail server (and/or web client) on `mail.public.domain` (with subdomain).
I use [DigitalOcean](https://www.digitalocean.com/docs/networking/dns/) as my VPS and DNS provider.
## Mandatory DNS records for server.domain
### A records
A records link domain names to IP addresses. When you want to use the admin page or web client provided by your email selfhosting software, the browser needs to know the IP address of the server/VPS and that is what the A record is used for. A records are not used by mail servers when sending or receiving emails.
```
TYPE HOSTNAME VALUE TTL
A mail.server.domain 1.2.3.4 3600
```
## Mandatory DNS records for public.domain
### MX records
MX records tell other mail servers where to actually send the emails. In my case, my email address is `hi@public.domain` but my mail server is located at `mail.server.domain`. Other mail servers look at the address, see `public.domain` and will assume this is our mail server. We use MX records to direct the emails to `mail.server.domain` instead.
```
TYPE HOSTNAME VALUE PRIORITY TTL
MX public.domain mail.server.domain 1 14400
```
## Optional DNS records for public.domain
A and MX records are all you need to get a functional email address. However, for ease of use and good reputation/trust, a few additional DNS records are recommended.
### SRV records (ease of use)
SRV records are used to link specific protocols to specific domains and ports. Just like how MX records tell other mail servers to direct their mails to your mail server on a different domain, the same must be done for mail clients. Say you want to use Thunderbird (or any other mail client) to access your emails. You will log in with your address (`hi@public.domain`) and password in Thunderbird, and it will then assume your mail server must be located at `public.domain`. It will not find it there, warn you about this and you will have to manually enter your IMAP and SMTP server details. If you have set up SRV records, Thunderbird will automatically detect the correct server location (`mail.server.domain`) and save you some hassle.
```
TYPE HOSTNAME VALUE PORT PRIORITY WEIGHT TTL
SRV _imap._tcp mail.server.domain 143 1 100 14400
SRV _imaps._tcp mail.server.domain 993 1 100 14400
SRV _pop3._tcp mail.server.domain 110 1 100 14400
SRV _pop3s._tcp mail.server.domain 995 1 100 14400
SRV _submission._tcp mail.server.domain 587 1 100 14400
SRV _smtps._tcp mail.server.domain 465 1 100 14400
SRV _sieve._tcp mail.server.domain 4190 1 100 14400
SRV _autodiscover._tcp mail.server.domain 443 1 100 14400
SRV _carddavs._tcp mail.server.domain 443 1 100 14400
SRV _caldavs._tcp mail.server.domain 443 1 100 14400
```
### TXT records (good reputation)
TXT records are simply messages that provide additional information. Here, TXT records are used to tell other mail servers more about your own mail server in order to build some trust between them: these records are a useful tool against spoofing where bad actors try to impersonate you and pretend you are sending the bad emails they are sending.
```
TYPE HOSTNAME VALUE
TXT @ "v=spf1 mx ~all"
TXT dkim._domainkey "v=DKIM1;k=rsa;t=s;s=email;p=..."
TXT _dmarc "v=DMARC1;p=reject;rua=mailto:admin@public.domain"
```
Detailed information on these records can be found [here](https://www.skelleton.net/2015/03/21/how-to-eliminate-spam-and-protect-your-name-with-dmarc/), but in short:
- SPF records tell other mail servers that only the specified mail server (in this case, `mx`, which points to `mail.server.domain` via the MX record) is allowed to send emails for your email domain (in this case, `public.domain`);
- DKIM applies a virtual signature to all your sent emails and other mail servers use the second TXT record above to validate that signature;
- DMARC records tell other mail servers what should happen to emails that fail the SPF and DKIM policies; the record above states these emails should be rejected and a notification sent to `admin@public.domain`.
The DKIM record contains a cryptographic key (replaced above by `...`). In my case, this key was generated for me by mailcow and is unique for each email domain.
Please note that having the above TXT records does not guarantee that other servers trust you immediately: your emails are still likely to end up in spam folders at first. Using intermediaries like [mailgun](https://www.mailgun.com/) can help with avoiding the spam folder. More on that in a later blog post.
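Once the records are published, a few lines of PHP are enough to sanity-check them from the command line. A quick sketch using PHP's built-in `dns_get_record()`, with the placeholder domain from this post (swap in your own):
```
<?php
// Check the mail-related DNS records described above.
$domain = 'public.domain'; // placeholder email domain used in this post

// MX: where mail for this domain should be delivered.
foreach (dns_get_record($domain, DNS_MX) ?: [] as $mx) {
    echo "MX    {$mx['target']} (priority {$mx['pri']})" . PHP_EOL;
}

// TXT on the bare domain: the SPF policy should show up here.
foreach (dns_get_record($domain, DNS_TXT) ?: [] as $txt) {
    echo "TXT   {$txt['txt']}" . PHP_EOL;
}

// TXT on _dmarc.<domain>: the DMARC policy.
foreach (dns_get_record('_dmarc.' . $domain, DNS_TXT) ?: [] as $txt) {
    echo "DMARC {$txt['txt']}" . PHP_EOL;
}
```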
## References
Mailcow has their own [recommended DNS records guide](https://mailcow.github.io/mailcow-dockerized-docs/prerequisite-dns/) which, in conjunction with their admin page, should make setting up DNS records a breeze.
[This guide](https://www.skelleton.net/2015/03/21/how-to-eliminate-spam-and-protect-your-name-with-dmarc/) has a lot of in-depth information about the SPF, DKIM and DMARC records (TXT records above).

content/blog/2020-03-20--make-it-on-fediverse.md (+35, -0)

@@ -0,0 +1,35 @@
---
title: "So you want to make it on the fediverse?"
author: Yarmo Mackenbach
slug: make-it-on-fediverse
date: "2020-03-20 13:54:22"
published: true
---
That's the plan, right? A whole new world awaits you on the fediverse, and you are going to make it there! There's something you should know.
<!--more-->
## Welcome to the Fediverse
Whenever I talk to people in my surroundings about the fediverse and try to convince them to use it, I have a go-to story that I tell. On multiple occasions in the past, there have been real-life events that sparked controversy on Twitter which then responded by banning some public figure or doing something else to upset a portion of the population. In turn, this would lead to either that public figure or a social movement to incite people to leave Twitter and join a safer and more open alternative: the fediverse. As citizens of the fediverse, we would notice a massive influx of new users and a stream of introductory messages on our timelines. Which, in my humble opinion, is always a welcome sight. An interesting observation is that the first message by these new citizens is often a little&#8230; "off" if you are used to being on the fediverse.
Allow me to elaborate. The introductory message of Twitter exiles often reads as follows: "Hello all! I am [insert name here], I am going to post messages about [insert multiple topics here]. Who are the people I should follow?".
The message above oozes "Twitter Mentality". To explain what I mean, let me use an analogy.
## The analogy
Twitter is a metropolis. Its users all share the same playing field. If you want to participate, you don't talk, you shout. How else are you going to be heard in a crowd of millions? Once you start shouting, quiet people start to listen. This creates a one-to-many dynamic.
The fediverse is a network of well-connected villages. As part of a village, you get to know people. You talk to people because it's less crowded, there's less competition. It's still a network so you can connect to people multiple villages away with the same ease. But the noise is filtered. You are surrounded by people who share a common interest which is the reason you decided to live in that specific village in the first place, but you still get the network effect and communicate with people outside of the village because you want to, because you can. This creates a one-to-one or on a larger scale, many-to-many dynamic.
## What makes a network social?
My argument is that a one-to-many network is not a social network, it's broadcasting. Having "influencers" on a network is, if anything, anti-social. This is where I need to go back to that initial message by the Twitter exile. There's no need to announce you are going to talk about a certain topic, just do it. I will not follow you because you are announcing to post messages about a certain topic. Post a message, let's debate and talk about it and if I am truly interested in your opinion and reasonings, only then will I follow you. And don't follow people because many others do; follow them because you want to, because you get the choice.
This is my opinion and I have talked to many sharing the same sentiment. This many-to-many dynamic is what makes the fediverse appealing. You get a Home timeline where you will find posts from people you follow and actually want to hear from. You get a Local Timeline filled with posts from people you don't always know, but it's the same instance/village, so you already share a common ground. And for when you feel like exploring, you get a Known Network timeline filled with all sorts of posts.
You may argue that building this whole narrative around a simple introductory post is flimsy, and I agree. Not everyone's first post is like that, and even if it were, there are different ways of interpreting the content. But it's something I found people can relate to, it's a bridge between two mentalities, two social structures that I can use to introduce the many advantages of the fediverse.
If you would like actually useful information on starting with the fediverse with Mastodon, please have a look at this [blog post by KevQ](https://kevq.uk/how-does-mastodon-work/).

content/blog/2020-04-30--peoples-web.md (+19, -0)

@@ -0,0 +1,19 @@
---
title: "The People's Web"
author: Yarmo Mackenbach
slug: peoples-web
date: "2020-04-30 08:53:37"
published: true
---
`#100DaysToOffload >> 2020-04-30 >> 006/100`
The day is long past when we should have started worrying about the openness of our web. It was only a matter of time before censorship and individual tracking would seep into the web of the western world. It pained me to see the levels of state interference in foreign countries, but at least, to me, that was something I only read about in the news; it was then unimaginable that this would happen in the short term in the "free world".
Now, in these trying times, states and big corporations may see it more than justified to track individuals and adjust the information we receive. I do not question their motives: "we are in this together". It takes the world to fight this pandemic, and the next one, and the one after that.
But the question on a lot of minds is: "how do we come back from this?" The hardest part is getting people to accept that they are being tracked and their information filtered, so big corporations did it with shady and hidden tactics to avoid that confrontation. We are past this: YouTube has announced it will ban all content that does not conform to the WHO, and countries everywhere are building apps to track our health and social interactions. Again, this could prove to be what humanity needs right now, but what about after? Is there even an after?
Fortunately, we all have the power to make a few changes to improve our online well-being: change social networks, don't rely on corporations, self-host as much as you can. I will be dedicating a large number of posts on this topic. Self-hosting is not hard, it just takes a little effort to get started.
As a friendly reminder, this website (blog included) has no tracking whatsoever: I do not care who you are, where you are from or with how many you are reading this post.

content/blog/2020-05-02--selfhost-email.md (+25, -0)

@@ -0,0 +1,25 @@
---
title: "Selfhost email"
author: Yarmo Mackenbach
slug: selfhost-email
date: "2020-05-02 16:20:35"
published: true
---
`#100DaysToOffload >> 2020-05-02 >> 008/100`
Yes, you can selfhost email. And you should, if and ONLY if you feel comfortable with maintaining a Linux server. I'm not a pro at all, but I've been doing it for almost two years: I know where to find my logs, I know how to find the correct answers on stackoverflow and how to troubleshoot a less-than-functional system.
So don't start with this, but eventually, soon enough, you can selfhost your email.
Because email is important to me, I have chosen not to host it at home, as any network issue could prevent emails from coming in. Granted, the sending server will usually retry for 24 hours until the email is actually received on your side, so small errors are forgiven, but still, I've opted for a dedicated droplet on DigitalOcean, though any VPS will do.
And then, follow the instructions on [mailcow.email](https://mailcow.email/) and you're set. SSH into your VPS once a week to run the updater. The admin interface has plenty of features for advanced administration of the email server, and the included webclient is the awesome SOGo.
If you want to make sure your email server is as trusted by other servers as possible, your emails are sent as securely as possible and your experience with other email clients is as smooth as possible, please check out my [post on email server DNS settings](https://yarmo.eu/blog/email-dns).
With all that being said, I still use a protonmail address for critical websites and services like governmental services and banking, because whatever happens, I need to make sure that I really receive these emails. On my selfhosted email server, I use two domains: one which I share with the world and with websites for logins and one that I keep private and only use for direct communication with other people. I have yet to experience a single minute of outage, credits to digitalocean and the people behind mailcow.
---UPDATE---
After a [fair comment](https://fosstodon.org/@Matter/104099349377193869) on the fediverse, I have written a [follow-up post](/blog/selfhost-email-drawbacks) to address a few more critical points like server reputation and how I deal with that.

content/blog/2020-05-03--selfhost-email-drawbacks.md (+37, -0)

@@ -0,0 +1,37 @@
---
title: "Selfhost email… But should you?"
author: Yarmo Mackenbach
slug: selfhost-email-drawbacks
date: "2020-05-03 19:46:47"
published: true
---
`#100DaysToOffload >> 2020-05-03 >> 009/100`
Yesterday, I wrote about [how you **can** selfhost your very own email server](/blog/selfhost-email). Shortly after publishing the post, it was pointed out to me that there are very reasonable drawbacks to doing this. So today, let me give you my answer to the question of whether you **should** selfhost your email.
## Relying on hardware
Firstly, I mentioned in that article that although I have two domains on my selfhosted server, I still fall back to a protonmail address for the most important stuff like banking and governmental services. So what's the point of selfhosting then, if I do use third-party email addresses? Well, what I failed to mention was that in the long term, yes, I do want everything selfhosted.
When I started my email hosting adventure, I was very cautious. Only months before had I started my own homelab and, as it turned out, it had a tendency to crash every so often, making it a no-go for email hosting. I resorted to using a VPS while getting my homelab sorted out. This worked great and still does, but at the time, you can imagine I was still discovering the DNS parameters and the reputation handling (more on this later). Also, what were the consequences of running a VPS 24/7? I could not commit to using the selfhosted email for anything more than experimentation.
Fast-forward about a year and the VPS has held up greatly: the email software has never crashed or acted against my expectations. It has received regular updates and has never failed once during one. Meanwhile, my homelab has proven to be extremely reliable and, with the upcoming hardware upgrade, I expect even fewer irregularities than I do now. Soon enough, when all the stars align and I figure out how to make recoveries as fast on my local hardware as I can on a VPS (make a new instance based on a daily snapshot and voilà!), my email server will be transferred to my homelab and I will use it for everything.
## The pain of administering an email server
Secondly, I've also heard of people not wanting to selfhost their email because of the fragility of the underlying processes: if one thing is slightly out of tune, the whole email server stops working. Although I toyed around with most of the individual processes like dovecot at the beginning to understand what they do and how they work, I haven't touched a single one of them in almost a year. [mailcow.email](https://mailcow.email/) is just that good. I've played around with the settings and it won't stop working. Meanwhile, I get an antivirus, spam monitoring and that handy "+topic" email filtering. I'd like to try out [Mail-in-a-Box](https://mailinabox.email/), mostly because it is also recommended by [PrivacyTools](https://www.privacytools.io/providers/email/#selfhosting), but I have no incentive to. My current solution just works great for me.
## The reputation of a server
Lastly, I need to address an (IMO) bigger problem: reputation. If other servers don't trust you, your emails may easily be thrown into the spam folder of the recipient or even rejected. The main reason for this is to fight spam: mass email spammers usually operate from unknown IP addresses. Unfortunately, this hurts the selfhosters. So, before you have even installed your email server software, you are already mistrusted by simply not using the big servers like Gmail and Hotmail. And indeed, when I started, most of my emails landed in spam.
This got greatly improved simply by using an email relay; in my case, [mailgun](https://www.mailgun.com/). These paid-for (but often with free tier) services are a lot more trusted since mailgun will do spam prevention on their end, so letting them send your emails for you is a great improvement. And even with the [recently reduced free tier](https://news.ycombinator.com/item?id=22192543), I don't send nearly enough emails to come close to the free tier quota.
However, it still happens that my emails are treated as spam so I often do follow ups via other channels of communication. Another issue may be that the IP address you were given already has a bad reputation caused by a previous owner: this is difficult to find out and even harder to fix. DDG-ing `improve email server reputation` yields many articles but read a handful of them and soon you'll realise it's really really hard to improve it. There's no central repository, no forms. Getting a mistrusted IP address can quickly suck all the fun out of having your own email server.
## Answering the question
So, **should you**? This depends on how willing you are to be independent of third-party email services and how much you are willing to put up with. I started naïvely and had to answer this question along the way while experimenting. By now, my personal answer to this question is: yes. I see the benefits and drawbacks. I'm not sure if it's the usage of mailgun, or me sending mails to family and friends and then asking them to tell their services to not mark it as spam, but most of my emails are properly received nowadays. Also, I have managed to improve my infrastructure, I can rely on the hardware (and soon on the emergency recovery mechanisms) and will soon migrate my email server so it's nicely at home.
Hosting your own email server is not easy and requires your full dedication. And with many upcoming [trusted and privacy friendly email services](https://www.privacytools.io/providers/email/), it may not always be the right tool for the job.

content/blog/2020-05-09--traefik-migration.md (+45, -0)

@@ -0,0 +1,45 @@
---
title: "Traefik migrated to v2"
author: Yarmo Mackenbach
slug: traefik-migration
date: "2020-05-09 23:01:34"
published: true
---
`#100DaysToOffload >> 2020-05-09 >> 015/100`
Last September, [traefik](https://containo.us/traefik/) received its [big version 2 update](https://containo.us/blog/traefik-2-0-6531ec5196c2/). I was very excited about TCP routers and the newly implemented middlewares. It can't have been more than a few days later that I tried to migrate my homelab to the new version. I remember being annoyed by the lack of a proper migration guide. Sure, it's possible that I didn't look hard enough, but I searched for it for a few days without results. I tried using the new documentation and failed: everything crashed and I could not get it working. As I did not have the time to do more extensive research and I needed the selfhosted services on a daily basis, I left it.
Until today. The whole migration took me a little over three hours and I learned quite a bit on the way. Also, the [migration guide](https://docs.traefik.io/migration/v1-to-v2/) has helped quite a bit. If this was updated since September, great article. If not, still a great article and I really did not take the appropriate amount of time to prepare my migration.
## Easy steps
The first thing I did was a general search-and-replace for the docker labels (both routers and services). What was `traefik.frontend.rule=Host:xyz` now is `traefik.http.routers.router0.rule=Host(``xyz``)`. What was `traefik.port=80` now is `traefik.http.services.service0.loadbalancer.server.port=80`. Quite a bit longer and more cumbersome, but in the end, more extensible.
The `traefik.docker.network=xyz` label is now unnecessary in most cases, as you can define a default network in the `traefik.toml` file. Speaking of which, you can now work with a YAML file. It's not to everyone's taste, but I will switch to it in the future when I have some more time.
The `traefik.toml` still needed quite the makeover, but everything is well explained in the [migration guide](https://docs.traefik.io/migration/v1-to-v2/) and the [reference page](https://docs.traefik.io/reference/static-configuration/file/). Content-wise, I changed little; it's just that the syntax is different. Notable changes are that the domains for which certificates are needed are now declared in the `entrypoints` section instead of the `acme` section, and that the `file` section can no longer include the "router/service declarations"; they belong in a separate file.
## Pitfalls
Doing this resulted in a non-functional state, with two different failure modes: either the container could not be routed to the correct service, resulting in a 404, or the routing was correct but without the correct certificate. That first possibility was mostly on me: containers are no longer exposed by default and I forgot to add `traefik.enable=true`. Mind you, I always set `traefik.enable=false` when I didn't want a container exposed and still do.
However, this did not solve the issue for all the containers. I suspect there's still some trickery I need to do when using multiple routers. I tried explicitly specifying the `service` for the different routers, but that wasn't the solution.
As for the other issue, the solution was simple but finding the source was quite hard: as it turns out, I renamed the `certificateResolver` to something other than `default`. If such is the case, then containers will NOT automatically use it for their certificates. Adding `traefik.http.routers.router0.tls=true` and `traefik.http.routers.router0.tls.certresolver=mycertresolver` to each container solves this issue.
## Todo
One thing I haven't got working yet is using the `providers.file` provider. I tried to mimic the container labels but to no avail. Yet.
---
**Update 2020-05-12**: I fixed the `providers.file` issue. Remember kids, always read the documentation well. It turns out, I missed the line that starts with `*` in the code below.
```
[http.services]
[http.services.Service01]
[http.services.Service01.loadBalancer]
* [[http.services.Service01.loadBalancer.servers]]
url = "foobar"
```

content/blog/2020-05-10--pihole.md (+49, -0)

@@ -0,0 +1,49 @@
---
title: "Introduction to PiHole"
author: Yarmo Mackenbach
slug: pihole
date: "2020-05-10 23:24:04"
published: true
---
`#100DaysToOffload >> 2020-05-10 >> 016/100`
[PiHole](https://pi-hole.net/) is present on almost every list of services people could/should selfhost. And rightfully so: it is easy to set up and extremely useful on a daily basis. It blocks ads on almost all websites on all the devices in your home without the need to install anything on them. It will also stop some devices from communicating with their parent companies behind your back.
## How it works
To understand how PiHole does its thing, we need a quick introduction into how DNS works, the system that makes sure we can visit websites even if they are located on the other side of the world. The problem DNS solves is that the URL you use to visit a website doesn't tell your device anything about the physical location or IP address of the server that hosts the website.
If you wish to visit a website, say [yarmo.eu](https://yarmo.eu), you enter that address in the top bar and hit enter. Your browser will then ask your router to get this website for you. If this is the first time you visit this website, your router doesn't know yet where the server is located, so it asks a DNS server in geographical proximity, usually the DNS server of your ISP.
If this DNS server knows the IP address of the server, it will be relayed back to your device which will now ask that server directly for the content of the website. If the DNS server doesn't have this information, it will ask another and so forth until the IP address of the host server is found.
As we established above, your router acts as a DNS server. However, this role can almost always be delegated to another DNS server in your home. That's where PiHole comes in. Instead of your router trying to figure out where the website server is located, it will ask PiHole to do so.
But PiHole has a trick up its sleeve: it has a built-in database of hundreds of thousands of URLs that are associated with ads and when they are requested, PiHole simply ignores them.
So you want to visit `coolsite.com`? Fine, PiHole will get you that website. Now, `coolsite.com` suddenly wants to load an ad from `ads.gafam.com`? The computer asks the router, the router asks PiHole, PiHole knows this URL is used to serve ads and will block that request, giving you a website without ads. Awesome!
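A quick way to see this in action from a machine that uses the PiHole as its resolver is to look up a known ad-serving domain and inspect the answer. A small PHP sketch; the domain below is made up, and the `0.0.0.0` answer assumes PiHole's default blocking mode:
```
<?php
// Resolve a hostname through whatever DNS server this machine is configured to use.
// With a PiHole as resolver, domains on a blocklist typically answer with 0.0.0.0.
$host = 'ads.example-tracker.com'; // made-up ad-serving domain
$ip = gethostbyname($host);        // returns the hostname unchanged if the lookup fails

if ($ip === '0.0.0.0') {
    echo "$host is blocked by the PiHole" . PHP_EOL;
} elseif ($ip === $host) {
    echo "$host did not resolve at all" . PHP_EOL;
} else {
    echo "$host resolves to $ip" . PHP_EOL;
}
```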
## Something you want to say?
Meanwhile, you are listening to music using a wireless speaker in your living room from a made-up brand "NOSON". What you don't know is that this device is continuously sending messages to the company containing information about the music you play and more. PiHole knows this and as soon as the speaker requests to send a message to `metrics.noson.com`, PiHole says no.
That's how PiHole blocks ads AND protects your privacy.
## Dedicated hardware
Dedicating hardware to PiHole is advised, but the hardware can be as simple as a [Raspberry Pi Zero](https://www.raspberrypi.org/products/raspberry-pi-zero/). The reason it is advised to use dedicated hardware is that if your PiHole crashes, there's no more internet in the home until you get the PiHole working again. A way to prevent this situation from happening is to always have two PiHoles running on separate hardware and to tell the router about both PiHoles.
## Second DNS server?
Oh, and while we're on the subject: do not put any "fallback" DNS servers like Google's or Cloudflare's in the second DNS server field on your router. Unfortunately, it doesn't work like a fallback, all routers will simply divide the workload over the two DNS servers. This means that if an outside DNS server is put in second place, it will receive DNS calls even if the PiHole is fully functional.
Having a proper DNS fallback server is difficult to set up, so best would be to use two different PiHole instances. Unless, of course, you don't mind a small period of internet loss and you are always nearby to fix the situation.
## Caveats
Unfortunately, ads on video platforms like YouTube will not be blocked. This is because they serve the ads on the same domains as they serve the main content, meaning that they don't have a `ads.youtube.com` or something similar. Therefore, PiHole cannot block the ads. As there are a few of these edge cases, it is always recommended to use PiHole in conjunction with on-device ad blockers like [uBlock Origin](https://getublock.com/).
## Final words
Really, there are few reasons to not get PiHole into your home and the benefits vastly outweigh the challenges (IMHO). It is also a great start on a journey of selfhosting more services and realising that one can be independent of major corporations to some degree.

content/blog/2020-05-16--dcvs-proposal.md (+96, -0)

@@ -0,0 +1,96 @@
---
title: "Proposal for a Distributed Content Verification System"
author: Yarmo Mackenbach
slug: dcvs-proposal
date: "2020-05-16 14:49:39"
published: true
---
`#100DaysToOffload >> 2020-05-16 >> 018/100`
## Preamble
This is going to be a long post. In it, I will describe a system that I have been thinking of for the last week. The way I see it, there are three possible outcomes: a) it's genius, I've outdone myself and I should build it; b) it's genius but other people have already solved this issue (perhaps in a different way); c) it's a mediocre/inadequate solution to a problem that doesn't need solving. I need help in figuring out which description suits this idea the best. Let me know on the [fediverse](https://fosstodon.org/@yarmo).
## Background
### Story 1 - Linux Mint hack
Two short stories are required. The first is based around Linux Mint and [what happened in 2016](https://blog.linuxmint.com/?p=2994). TLDR from the blog post: "Hackers made a modified Linux Mint ISO, with a backdoor in it, and managed to hack our website to point to it". In addition to just linking to the modified ISO file, they also changed the MD5 hash to match their modified version.
The web is fragile. If you post MD5 hashes on your website so people can trust your software and your website gets hacked and the hashes changed, there's no trust. This is not Linux Mint's fault, this is the way the internet works. I have had hackers on my shared hosting servers who uploaded a whole bunch of suspicious files. Because of this, the fix was easy. But what if they just made a minor change in a single file? I would have been none the wiser.
### Story 2 - Keybase
You need an external source of truth and this is what the second short story is about: Keybase. I verified my website and my accounts on various services through their website. If you know me through my fosstodon.org account, you could check if that Keybase account was really mine, and if so, you could verify that this website is really mine as well as some other accounts. A nifty solution for authenticity proof of my distributed online presence.
But there are drawbacks. The actual content on my website is not verified. The system is centralised and also not FOSS. Lastly, due to their recent acquisition, I will no longer be using Keybase.
So my new authenticity proof? My website. The links on my website are who I am on various online services. I curated those links. I checked for each one if they link to what I intended them to link to.
But that's not enough. What if my website gets hacked? And a social link gets replaced? "Well, that doesn't happen to me", some might say. Fine, let's look at a second example. Visit someone else's personal site and click their social links. How do you know if you can trust those links? "So what", you say? Let's go further. You want to donate to someone using a cryptocurrency. They have their wallet on their website. Is that really their wallet though?
I think we can solve this issue the way we would want to: using a distributed system.
## Proposal for a Distributed Content Verification System
### Overview
The concept is based around a network of two different types of nodes: the "content" nodes and the "truth" nodes. The "content" nodes are websites with content that need to be verified. The "truth" nodes are servers that periodically check all known pages for changes.
The idea is that a hacker needs to obtain a developer's cryptographic keypair and infiltrate both a "content" node and one or more "truth" nodes in order to get away with their malicious activity.
### Step 1 - Linking a "content" node to a "truth" node
First, the website owner needs to make their website ("content" node) known to the network of "truth" nodes. This can either be done manually by asking someone they trust who owns a "truth" node, or "truth" nodes could implement some sort of registration form.
A valid contact method like an email address is mandatory to communicate irregularities to the website owner. A public cryptographic key is also required in order to check the signature of the hashes (see below).
### Step 2 - Updating the "truth"
While uploading the updated content of their website to their server, the website owner also sends the hashes of the updated files to the "truth" node they registered with. This could be done using a command-line tool on the server or on the developer's machine; it could even be part of the CI/CD pipeline. Hashes are signed using a cryptographic keypair to make sure the website owner is the one who updated the content.
Alternatively, the "truth" node could have a web interface with a button to trigger the (near-)immediate download of the pages and computation of the hashes. However, this method does not easily allow for cryptographic signing of the hashes and should therefore be discouraged, or perhaps not even accepted.
Once updated, the "truth" nodes exchange the updated hashes with each other.
One thing to consider: websites can be dynamic, for example by including posts from social networks. HTML elements that contain dynamically generated content should get a specific attribute so they can be excluded from the hashing.
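As a rough sketch of what such a command-line tool could do, assuming SHA-256 hashes, the same kind of Ed25519 keypair as above, and a made-up file layout (none of this is a finished protocol):

```python
# Sketch only: hash every HTML page, sign the hash list, let a "truth" node verify it.
import hashlib
import json
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def hash_site(root: Path) -> dict:
    """SHA-256 of every HTML file under the site root (dynamic parts assumed stripped)."""
    return {
        str(page.relative_to(root)): hashlib.sha256(page.read_bytes()).hexdigest()
        for page in sorted(root.rglob("*.html"))
    }

def sign_hashes(hashes: dict, key: Ed25519PrivateKey) -> bytes:
    """Sign the canonical JSON encoding of the hash list."""
    return key.sign(json.dumps(hashes, sort_keys=True).encode())

def verify_hashes(hashes: dict, signature: bytes, public_key: Ed25519PublicKey) -> bool:
    """What a 'truth' node would run before accepting an update."""
    try:
        public_key.verify(signature, json.dumps(hashes, sort_keys=True).encode())
        return True
    except InvalidSignature:
        return False
```

The owner's machine (or CI job) would upload the hashes plus their signature; the "truth" node only ever needs the public key it received during registration.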
### Step 3 - Verification of content
On a regular basis, the "truth" nodes download the pages and compute the hashes. If they match with the hashes in their database, all is well.
If a discrepancy is found, a "truth" node should ask other "truth" nodes if updates exist for this particular website and the updated hashes simply haven't propagated yet. If so, fetch the new hashes and run this step again.
If no updated hashes are found or the new hashes still don't match, contact the website owner and let them know something has changed on their website that they haven't told the "truth" nodes about.
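In pseudocode-ish Python, one pass of that periodic check could look like the sketch below. The peer lookup and owner notification are passed in as placeholders, since how nodes talk to each other is exactly one of the open questions listed further down.

```python
# Sketch of the periodic verification pass on a "truth" node.
import hashlib
import urllib.request
from typing import Callable, Iterable, Optional

def observed_hash(url: str) -> str:
    """Download a page and hash its raw bytes (dynamic parts assumed excluded)."""
    with urllib.request.urlopen(url) as response:
        return hashlib.sha256(response.read()).hexdigest()

def check_page(url: str,
               stored_hash: str,
               peers: Iterable[str],
               ask_peer: Callable[[str, str], Optional[str]],
               notify_owner: Callable[[str, str], None]) -> bool:
    observed = observed_hash(url)
    if observed == stored_hash:
        return True                          # all is well
    for peer in peers:
        if ask_peer(peer, url) == observed:  # an update simply hadn't propagated yet
            return True
    notify_owner(url, observed)              # unexplained change: alert the owner
    return False
```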
### Optional step 4 - User benefits
In addition to the measures taken in step 3 when anomalies are detected, browser plugins could warn visitors that the content they are seeing may not be what the website owner intended it to be.
### Possible attack surfaces
If a "truth" node is hacked, hashes could easily be changed. However, signing the hashes using a cryptographic keypair should mitigate this problem. Other nodes will not trust the newly propagated hashes and will flag that "truth" node as corrupted.
If a "truth" node is hacked and the website owner's credentials are changed, they would no longer receives notifications. Credentials should also be signed by the cryptographic keypair to make changes like these detectable.
If a "truth" node is hacked and the stored public key is modified, we have a problem. "Truth" nodes should verify each other as well to make sure no funny business like this happens.
If a "truth" node is hacked and the "content verification" code is changed, we have a problem. Again, some form of collaboration between "truth" nodes should prevent hacked "truth" nodes from doing harm to the system.
If a "content" node is hacked and new files are uploaded, the "truth" node will not be triggered as it won't handle these files. But at least, the content displayed to visitors remains unchanged.
If a "content" node is hacked and existing files are modified, the "truth" nodes will be triggered and there's no code on the "content" node that could prevent this from happening.
If a "content" node is hacked and existing files are modified in such a way that the hashes match, we have a problem. Proper research needs to be done to correctly implement cyptographic hashing functions to avoid this issue.
### Things that need to be worked out
- How exactly does a new website enter the network?
- How to coordinate page downloading and hash computation to avoid redundancy and load on the hosting server?
- How to measure credibility among "truth" nodes and detect corruption of individual nodes?
- How to prevent hash collisions?
### Federated AND peer-to-peer
The concept described above is technically based on federation. However, I initially imagined several people hosting both their own websites and the hashes of websites they selected. This is still possible: the concept described above should support both a federated content verification system and a peer-to-peer content verification system.

+ 33
- 0
content/blog/2020-05-17--ai-vs-human.md View File

@ -0,0 +1,33 @@
---
title: "Why we won't have artificial intelligence rivaling human intelligence"
author: Yarmo Mackenbach
slug: ai-vs-human
date: "2020-05-17 00:27:27"
published: true
---
`#100DaysToOffload >> 2020-05-17 >> 019/100`
Someone asked why people are working on artificial intelligence "which would infinitely surpass human capabilities?" Here's my answer.
That is not going to happen. We will never be able to produce machine learning surpassing human intelligence (though I would have personally looked forward to it).
Consider this: neurons have refractory periods of 1-4 ms. During the refractory period, a neuron cannot fire another signal. Thus, their firing rate cannot exceed, at the absolute best, 1000 signals or "spikes" per second. That's 1 kHz. At best. Your average neuron is a lot slower. Modern-day processors run in the gigahertz range, easily exceeding that speed by a factor of a million. So why don't we have a cyborg Einstein yet?
That has everything to do with what makes us "intelligent". We have the same neurons as primates. Heck, we have the same neurons as worms. Why aren't people afraid the worms might kill us all soon?
Intelligence does not stem from the number of neurons or how fast they are. It all has to do with how they are connected.
Humans have extremely well-developed cortices. The reason cortices have folds is to increase the surface area, just like a radiator has folds. Sure, you gain a few neurons, but more importantly, you gain a whole lot of connections.
So, "researchers just need to make CPUs with more connections to increase their capability", you might say.
Well, you still haven't considered the single most important reason artificial intelligence will always be inferior to any animal intelligence.
CPUs are made of transistors. A transistor is a switch that goes on or off.
Brains are made of neurons. A neuron has an immense range of output, from slow firing to fast firing (up to 1 kHz). That is a whole lot more nuanced than on/off. It also has extremely well-calibrated inputs: it can receive multiple excitatory inputs that increase neuronal activity, as well as inhibitory inputs that decrease it. It can do addition and subtraction by placing the inputs at different locations on the dendrites (the structures neurons use to capture inputs). Each neuron can single-handedly do what a whole CPU is designed to do.
And there we have it. To equal a brain with millions of neurons, you can't use a CPU with millions of transistors. You'd need a computer with millions of CPUs.
You, my friend, are safe.

+ 52
- 0
content/blog/2020-05-27--textbook-eee.md View File

@ -0,0 +1,52 @@
---
title: "How does a textbook 'Embrace, Extend, Extinguish' operation work?"
author: Yarmo Mackenbach
slug: textbook-eee
date: "2020-05-27 13:22:21"
published: true
---
I recently found out about what happened to the [AppGet](https://appget.net/) tool for Windows made by [Keivan Beigi](https://keivan.io).
Sadly, a [recent blog post](https://keivan.io/the-day-appget-died/) outlines the details of the decision to cease development and shut down the service, which provided an open source package manager for Windows.
Stories about open source services shutting down are always sad and a blow to the community, but this one in particular is noteworthy. Judging from the events as written down by Keivan in his [post](https://keivan.io/the-day-appget-died/), he has been the target of an absolute textbook case of [Embrace, Extend, Extinguish](https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguish), Microsoft's _modus operandi_.
As a disclaimer: this past year, I have been rooting for Microsoft's apparent change in stance towards Linux and the open source community. I was wrong, and Microsoft has been kind enough to provide all the grounds needed to distrust the corporation even more, recently with the [MAUI debacle](https://itsfoss.com/microsoft-maui-kde-row/) (see all the [comments marked off-topic in this issue](https://github.com/dotnet/maui/issues/35)? Flagrant censorship!) and now with AppGet.
So now, let's quickly examine Microsoft's emails from the blog post.
## Embrace
> Keivan,<br>
> I run the Windows App Model engineering team and in particular the app deployment team. Just wanted to drop you a quick note to ***thank you for building appget*** — it’s a great addition to the Windows ecosystem and makes Windows developers life so much easier. We will likely be up in Vancouver in the coming weeks for meetings with other companies but if you had time we’d love to meet up with you and your team to get feedback on how we can make your life easier building appget.
**Embrace**: celebrate what people contribute to your ecosystem
## Extend
> Keivan,<br>
> it was a pleasure to meet you and to find out more about appget. I’m following up on the azure startup pricing for you. As you know we are big fans of package managers on Windows and ***we are looking to do more in that space***. My team is growing and part of that is to build a team who is responsible for ensuring package managers and software distribution on Windows makes a big step forward. ***We are looking to make some significant changes*** to the way that we enable software distribution on Windows and there’s a great opportunity (well I would say that wouldn’t I?) to help define the future of Windows and app distribution throughout Azure/Microsoft 365.<br>
> With that in mind ***have you considered spending more time dedicated to appget and potentially at Microsoft***?
**Extend**: get a foothold in people's successful contributions to your ecosystem
## Extinguish
> Hi Keivan, I hope you and your family are doing well — BC seems to have a good handle on covid compared to the us.<br>
> I’m sorry that the pm position didn’t work out. I wanted to take the time to tell you how much we appreciated your input and insights. ***We have been building the windows package manager*** and the first preview will go live tomorrow at build. We give appget a call out in our blog post too since ***we believe there will be space for different package managers on windows***. You will see our package manager is based on GitHub too but obviously with our own implementation etc. our package manager will be open source too so ***obviously we would welcome any contribution from you***.<br>
> I look forward to talking to you about our package manager once we go live tomorrow. Obviously this is confidential until tomorrow morning so please keep this to yourself. You and chocolatey are the only folks we have told about this in advance.
**Extinguish**: replace people's contributions with your own products; yours needn't be better, because you're a big, rich corporation with enormous reach
## Microsoft Loves Linux
Make no mistake: this aggressive pattern will continue. They like the name MAUI? They take it and silence the critics. They want a package manager because Linux has them? They "get inspired", build a new one and squash the existing solutions.
They love Linux, right? They certainly "embraced" it on their platform when they launched the Windows Subsystem for Linux (or WSL), a tool to run Linux distributions inside Windows. It has also been a while since they started "extending" Linux and the open source community by [open-sourcing Powershell](https://itsfoss.com/microsoft-open-sources-powershell/) and [acquiring Github](https://itsfoss.com/microsoft-github/). Soon, WSL2 will launch with their [own Linux kernel](https://github.com/microsoft/WSL2-Linux-Kernel).
Now is the time to remain vigilant, but also to act. Donate to your favorite distribution and open source tools, or support them in any other way. Microsoft is coming.
## Final notes
I've tried to remain respectful to the content Keivan has posted in his [blog post](https://keivan.io/the-day-appget-died/). I feel sorry for the situation: he's the developer hero Windows needed but Microsoft felt we did not deserve. Hope you'll go on to make even bigger projects, Keivan, because you absolutely nailed AppGet!

+ 78
- 0
content/blog/2020-06-05--missing-entropy.md View File

@ -0,0 +1,78 @@
---
title: "The Case of the Missing Entropy"
author: Yarmo Mackenbach
slug: missing-entropy
date: "2020-06-05 12:14:57"
published: true
---
> In computing, entropy is the randomness collected by an operating system or application for use in cryptography or other uses that require random data. ([Source: wikipedia](https://en.wikipedia.org/wiki/Entropy_(computing)))
## Docker, are you still there?
It all started when I got myself a new VPS for serving web content. I have a more-than-capable server at home, but I'd rather not use it for "uptime-sensitive" use cases; the odd crash still takes it down from time to time. I know, a CDN&hellip;
Sticking with what I'm comfortable with, I decided to go with a docker setup with only a few containers:
- [caddy](https://caddyserver.com/) as web server and reverse proxy
- [php-fpm](https://hub.docker.com/_/php) as PHP processor
- a couple of others with minor significance
As per usual, I write my `docker-compose.yaml` and it's all set. Except this time. Sometimes, when I would change something in the yaml file and run `docker-compose up -d`, it would apply the change immediately, as I would expect from all the times I've run it on my homelab. But sometimes, it would wait a minute or longer and only then execute.
I accepted this behavior a few times, but at some point, it had to be dealt with.
## Investigating
I noticed a few things. First, it did not seem to be due to a lack of computing resources. My [Grafana](https://github.com/grafana/grafana) dashboard (with [InfluxDB](https://github.com/influxdata/influxdb) as backend and [Telegraf](https://github.com/influxdata/telegraf) as agent) clearly showed me that CPU usage was about 1% and RAM was about 30% full. No excessive DISK IO or NETWORK IO. So we are not overwhelming the system!
Additionally, while it was waiting to execute, I could open a new SSH connection and do other stuff. With one exception: any docker-related command would not execute.
Final clue: I could not ctrl-c my way out of a pending docker command execution, but if I would close the terminal, open a new one, connect via SSH and run any new docker command, it would still wait.
Final final clue: a minute later, I could run docker commands left, right and center without a single problem. Another minute later, it might do the whole waiting again. It was very&hellip; "Random". Wink, wink&hellip;
Have you figured it out yet? I hadn't.
## Researching
With this information, I was confident enough to start searching online and I came across this [github issue](https://github.com/docker/compose/issues/6552) fairly quickly:
> "docker-compose often takes a long time to do anything"
That sounds about right!
A few comments in, [it was suggested](https://github.com/docker/compose/issues/6552#issuecomment-529787442) to run the following command: `cat /proc/sys/kernel/random/entropy_avail`.
On my VPS, this returned `52`. Whoopsie&hellip;
## Entropy
For those of you who don't know what (computing) entropy is, here's the [wikipedia article](https://en.wikipedia.org/wiki/Entropy_(computing)) for it. In short: computers are terrible at coming up with random numbers (just like humans! Topic for another day), which many applications require for their proper working.
Our operating systems have a clever way to solve this: take all input that is NOT generated by the computer itself and use that as "randomness". For example, a computer doesn't know in advance how you are going to move the cursor or which keyboard button you will press. The operating system takes these inputs, processes them to "extract the randomness" and stores it in the `entropy pool`.
Any application needing some randomness can request some random data from the `entropy pool`. Maintaining sufficient entropy is therefore a challenge in itself: process enough random data to keep up with the demand.
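On Linux, the kernel exposes how full that pool currently is, which is exactly what the command above reads. Here is a tiny illustration; the file path is the standard Linux one, the rest is just an example of asking the kernel for random bytes.

```python
# Linux only: peek at the kernel entropy pool, then ask the kernel for random bytes.
import os

with open("/proc/sys/kernel/random/entropy_avail") as counter:
    print("bits of entropy available:", counter.read().strip())

# os.urandom() draws from the kernel's CSPRNG and does not block; reading
# /dev/random directly, on the other hand, may block while the pool refills.
print("16 random bytes:", os.urandom(16).hex())
```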
Apparently, Docker is an application that requires randomness. But how have I never encountered this issue before?
## VPS and entropy
On both my desktop computer and homelab, the entropy available is around `4000`, which is perfect. They are able to maintain this entropy because of all the sources of randomness available to them. Mouse and keyboard inputs, processes running in the background, etc.
Now, let's take the VPS as a counter-example. These things are made to be fully reproducible: every time you boot one up, it is expected to run in the same way. They are also very sealed off from the host system for security reasons: I cannot read core temperature values for my VPS. They don't have "true hardware"; they get portions of hardware, shared with other VPS instances. Except for my SSH connection, the VPS has no mouse or keyboard inputs.
In other words, VPSs are severely lacking in sources of entropy. That is why the entropy available was only `52` and why docker stalled: it had to wait for sufficient randomness to occur.
[More information on VPS and entropy from DigitalOcean](https://www.digitalocean.com/community/tutorials/how-to-setup-additional-entropy-for-cloud-servers-using-haveged).
## The remedy: haveged
There is a way to remedy the situation: [haveged](https://wiki.archlinux.org/index.php/Haveged). Having only discovered it last night, I do not fully understand it yet but from what I have read, it is a pseudorandom number generator (PRNG) that fills the `entropy pool` with "pseudorandomness". Installing `haveged` immediately solved my issue, all docker commands were running instantly again.
![Available entropy suddenly increases after installing haveged](/content/img/entropy_haveged.png)
*Can you tell when I installed haveged?*
## Caveat: pseudorandomness
There is a downside to this: PRNGs are NOT random. [Wikipedia article on PRNGs](https://en.wikipedia.org/wiki/Pseudorandom_number_generator). PRNGs generate numbers that appear random but are fully deterministic: run the exact same algorithm twice and you'll get the same "random" numbers. Therefore, VPSs may not be the perfect solution for entropy-heavy tasks such as cryptography: a cryptographic key generated with pseudorandom numbers is far less secure than one generated with truly random numbers.
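To make the "fully deterministic" part concrete: seed the same PRNG twice and it hands you the exact same "random" sequence. Python's `random` module is used here purely as an illustration.

```python
import random

a = random.Random(42)  # same seed...
b = random.Random(42)

print([a.randint(0, 9) for _ in range(5)])  # ...same "random" numbers
print([b.randint(0, 9) for _ in range(5)])  # identical output both times
```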

+ 122
- 0
content/blog/2020-06-05--website-load-performance.md View File

@ -0,0 +1,122 @@
---
title: "Optimizing the website's load performance"
author: Yarmo Mackenbach
slug: website-load-performance
date: "2020-06-05 22:47:21"
published: true
---
## My old webhosting
When I started making websites back in 2010-ish (maybe even earlier, I don't remember), I used shared hosting, as I did not have the slightest clue how a server worked, let alone how to set one up for web hosting. About two to three years ago, I switched to [Cloudways](https://www.cloudways.com/en/), which lets you host websites on a virtual private server (VPS) while still not requiring any actual knowledge about the inner workings of a server.
## My new webhosting
However, I've been managing my own private server for almost two years so I felt confident I could do the hosting myself. Hip as I am (I am not), I decided to go with a [Caddy server](https://caddyserver.com/) as a [Docker container](https://www.docker.com/get-started) on a VPS hosted by [DigitalOcean](https://www.digitalocean.com/). For the low-traffic websites I currently maintain, this is largely sufficient.
I am in the process of moving each website one by one to the new hosting solution. It was time for this very website, [yarmo.eu](https://yarmo.eu) and I thought to myself:
> I should actually check if I gain any website load performance by moving to this new solution.
When asking the Fediverse, 70% predicted [Caddy would perform better than Cloudways](https://fosstodon.org/web/statuses/104285148110095796).
## Let's get testing
I decided to use [WebPageTest.org](https://www.webpagetest.org) to measure load performance. For each case described below, three measurements were performed and the median measurement is displayed and analyzed.
### Cloudways
First, a baseline measurement of my existing Cloudways solution.
![Cloudways - overview](/content/img/wpt_1_1a.png)
*Cloudways - overview*
![Cloudways - rating](/content/img/wpt_1_1b.png)
*Cloudways - rating*
![Cloudways - waterfall](/content/img/wpt_1_1c.png)
*Cloudways - waterfall*
So the server returns the first byte of information after 480 milliseconds. Now, I should tell you that my website is based on [Phug](https://phug-lang.com), the PHP port of [pug.js templating](https://pugjs.org). The page is rendered in real-time and apparently, that takes a little over 300 ms.
It is worth noting that any other metric is then dependent on how the website is programmed. Once Cloudways has sent over the data, it no longer has any influence on load performance.
The website is fully loaded after 923 ms. Good to know. About a second to wait for my website to load.
Over on the waterfall, we see a bunch of files being downloaded simultaneously after the HTML page is loaded. The largest asset to load is the profile picture.
Wait, what is that `F` over on the rating? Security is NOT in order! As it turns out, Cloudways does not handle security-related HTTP headers for you&hellip; I did not know that! They [recommend setting these headers in a .htaccess file](https://support.cloudways.com/enable-hsts-policy/).
Let this be a reminder to all of you: test your websites. One might learn a few tricks.
Anyway, can Caddy do better?
### Caddy
![Caddy - overview](/content/img/wpt_1_2a.png)
*Caddy - overview*
![Caddy - rating](/content/img/wpt_1_2b.png)
*Caddy - rating*
![Caddy - waterfall](/content/img/wpt_1_2c.png)
*Caddy - waterfall*
Well, as it turns out, it's largely the same performance. The first byte arrived after 459 ms, but I've run it a few times and there's really little difference between Cloudways and Caddy.
BUT! Learning from my previous mistakes, I configured Caddy to set up all the correct headers and won't you look at that, `A+` on the security score!
So that's it then?
Well, not really&hellip; I learned that my website wasn't running optimally because I forgot some basic HTTP headers. Did I forget more? In other words, can I do even better than this?
I've tried a lot of things; I'll narrow it down to the two most important findings.
### Caddy - inline most of it
As it turned out, I had a few small SVG icons and some CSS files. I tried inlining them into the HTML page, so the data would be sent in the first transmission and no separate requests were needed. For good measure, I also minified the CSS files, which, for one file, reduced the size by 30%!
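The inlining itself is nothing fancy; conceptually it boils down to something like the sketch below, with made-up file names and not the actual build step of this site.

```python
# Replace an external stylesheet reference with an inline <style> block.
from pathlib import Path

html = Path("index.html").read_text()
css = Path("css/main.min.css").read_text()

html = html.replace(
    '<link rel="stylesheet" href="/css/main.min.css">',
    f"<style>{css}</style>",
)
Path("index.inlined.html").write_text(html)
```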
![Caddy+inline - overview](/content/img/wpt_1_6a.png)
*Caddy+inline - overview*
![Caddy+inline - rating](/content/img/wpt_1_6b.png)
*Caddy+inline - rating*
![Caddy+inline - waterfall](/content/img/wpt_1_6c.png)
*Caddy+inline - waterfall*
On the waterfall above, you can clearly see that `dank-mono.css` was not inlined. I tried multiple configurations, but there was no real gain, as the image also needed to load and took longer anyway. So, all in all, inlining the SVG and CSS content did little in this case.
Also, note the regression from `A+` to `A` on the security score. There was one header I couldn't quite get working properly, so I had to disable it; other than that, it's working better than it ever has.
What drew my attention for the final step was the `B`. My server responds within 480 ms and that is still not good enough for you, WebPageTest? Ok, have it your way.
What takes my server so long to respond? Well, obviously, it must be the templating. Can I improve the template? Perhaps. But as it turns out, I don't have to! Ever heard of caching?
As described on [their website](https://phug-lang.com/#usage), PHUG has support for caching and even calling an optimized version of their renderer. So I applied both caching and optimized rendering.
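Conceptually, template caching just means: render once, write the result to disk and reuse it until the source changes. Phug handles this internally; the sketch below only illustrates the idea and is not the Phug API.

```python
# Illustration of render caching: reuse cached output while the template is
# unchanged, re-render only when its mtime moves. Not the Phug API.
import time
from pathlib import Path

def render(template: Path) -> str:
    time.sleep(0.3)                      # stand-in for the ~300 ms render cost
    return template.read_text().upper()  # stand-in for actual templating

def cached_render(template: Path, cache_dir: Path) -> str:
    cache_dir.mkdir(exist_ok=True)
    cached = cache_dir / (template.name + ".html")
    if cached.exists() and cached.stat().st_mtime >= template.stat().st_mtime:
        return cached.read_text()        # cache hit: skip the expensive render
    output = render(template)
    cached.write_text(output)
    return output
```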
### Caddy - PHUG optimization
![Caddy+PHUG - overview](/content/img/wpt_1_7a.png)
*Caddy+PHUG - overview*
![Caddy+PHUG - rating](/content/img/wpt_1_7b.png)
*Caddy+PHUG - rating*
![Caddy+PHUG - waterfall](/content/img/wpt_1_7c.png)
*Caddy+PHUG - waterfall*
Well, there it is!!! The first byte of data arrived after a mere 173 ms, the website is usable in less than half a second and all scores are `A`!
That's the result I was hoping for. Now on my todo list:
- Optimize the profile picture further or try SVG
- Get all HTTP headers perfect
Any comments or recommendations/optimization? Please [let me know](/contact)!
<!-- https://www.webpagetest.org/result/200604_Z3_05019c9c3f872873bd1e964474cb0dac/ -->
<!-- https://www.webpagetest.org/result/200604_E2_026978bdd64ce5830ecf5be74b634120/ -->
<!-- https://www.webpagetest.org/result/200605_M4_b728be8e608e4807af0192cefaf55f2e/ -->
<!-- https://www.webpagetest.org/result/200605_CF_96d815e43897af2723f7a4d762d76ba3/ -->
<!-- https://www.webpagetest.org/result/200605_61_67b241dc40cbaf08ed257177e6efbef0/ -->

+ 31
- 0
content/drafts/2020-02-11--coding-blog.md View File

@ -0,0 +1,31 @@
---
title: "Coding my own blog"
author: Yarmo Mackenbach
slug: coding-blog
date: "2020-02-11 21:10:45"
published: false
---
"In My Particular Use Case" (or IMPUC) is a series of short posts describing how I setup my personal homelab, what worked, what failed and what techniques I eventually was able to transfer to an academic setting for my PhD work.
<!--more-->
### Why a homelab?
I started my homelab about a year after I started my PhD. My academic work was challenging in a technical way, with new data generated every day and raw data, processed data and metadata to manage. I built a number of tools to aid me on a daily basis in my work, but I needed a place to just try out every technology I could possibly need for my job. It eventually turned out that the homelab was destined for far greater things than simply serving as a testbed, but that's how it started and what gave me the knowledge and experience to solve important issues in my academic work.
### The central server
So one day, I ordered myself an Intel NUC with a 5th generation i3 processor, 8 GB of RAM and an m.2 drive, and got started. Container solutions caught my attention before I even had the machine, so I first installed docker and, later, docker-compose. This setup hasn't changed a bit to this day, as it still allows me to launch new services very easily by changing a single yaml file with minimal impact on the host machine. The first things I installed were several databases and gitea, a self-hosted git service. The services sit behind a reverse proxy (traefik) so they can be accessed via (sub)domains. Configuration of the machine is managed by a folder of dotfiles backed up in a git repo and `stow`ed as necessary, but I am currently looking into ansible for this purpose. A 4-bay JBOD USB3 device provides the storage, which the NUC then also (partly) makes available over the local network via smb.
### The peripheral Pi's
Floating around the central server are several Raspberry Pis. Back when I first started, the central server would sometimes crash or soft-lock, and since my entire system monitoring stack (telegraf+influxdb+grafana) was also installed on it, there was not a whole lot of investigating and fixing I could immediately do. Now, the central server and the Pis all run telegraf, and a single Pi hosts the influxdb+grafana stack and only that. Another Pi acts as a media center (Kodi) and, finally, two redundant Pis function as DNS forwarders (Pi-Hole), one of which also hosts my VPN solution (wireguard).
### The out-of-house computing
I have two permanent VPSs running: a website server (Cloudways) and a mail server (mailcow). Both could be hosted on the central server, but as long as I can guarantee neither a perfectly stable internet connection (which my house does not have) nor stable computing (personal budget issue), I choose to host these outside of the house.
### Final words
Thanks for reading this; more posts will come soon explaining some of the elements described above in more depth. If you have questions, you can find several ways to contact me on [yarmo.eu](https://yarmo.eu).

+ 13
- 0
content/drafts/2020-03-05--plaintext-journey.md View File

@ -0,0 +1,13 @@
---
title: "My journey to plaintext journaling"
author: Yarmo Mackenbach
slug: plaintext-journey
date: "2020-03-05 18:37:00"
published: false
---
When my work started to overwhelm me, friends of mine suggested using Bullet Journaling.
Though it served its purpose of regaining clarity in my head, as time passed, I needed something more suited to my heavy command-line usage.
<!--more-->

+ 27
- 0
content/drafts/2020-04-22--taking-back-control-of-digital-life.md View File

@ -0,0 +1,27 @@
---
title: "IMPUC #2 &middot; Taking back control of my digital life"
author: Yarmo Mackenbach
slug: taking-back-control-of-digital-life
date: "2020-04-22 21:07:22"
published: false
---
"In My Particular Use Case" (or IMPUC) is a series of short posts describing how I setup my personal homelab, what worked, what failed and what techniques I eventually was able to transfer to an academic setting for my PhD work.
<!--more-->
I'll admit my sin upfront: I used to defend Google while bashing other big corporations for attempting to control my life. Boy, have I wised up. The long story short is of course plain and simple: any corporation big enough will not have your best interest at heart, and why would they? Google follows your every step and purchase, Facebook manipulates timelines to influence emotions and God only knows what Microsoft was thinking with that search bar.
Like any organism, corporations evolve and adapt to survive. Unfortunately, often to our detriment. But times have changed and we no longer *need* to take it. It is worth discussing at length how one can take back control of one's digital life, but many, many, many have done so already. I will just write up how I've implemented some changes in my life to improve my overall digital well-being.
Caveat: everything discussed here is "server software"-related: sure I use Firefox for browsing and (K)Ubuntu as much as possible, but "client software" will be discussed in a different post. This doesn't mean it only applies to tech-savvy people: with as little as a Raspberry Pi 4 or a Rock 64, it is possible to host some if not all the solutions explained below.
## As a social being
Social networks: well, that's tricky, isn't it? You can't host your own network and expect the whole world to talk to you there&hellip; Or can you have your cake and eat it? Federation to the rescue! (Detailed explanation here.) Not only does federation connect multiple communities, it even connects entire social networks. That means that if you host your own little node at home, you can communicate with the rest of the entire network and your login credentials won't even leave the house. I used to be active on the fediverse this way, though nowadays, I have found a [community](https://fosstodon.org) that I love and trust and therefore no longer felt the need to keep this one in-home.
Something I take no
## As a developer
## As a music enthusiast

+ 15
- 0
content/drafts/2020-05-02--indieauth-pgp.md View File

@ -0,0 +1,15 @@
---
title: "How to get IndieAuth PGP authentification working"
author: Yarmo Mackenbach
slug: indieauth-pgp
date: "2020-05-01 08:53:37"
published: true
---
`#100DaysToOffload >> 2020-05-02 >> 008/100`
https://indielogin.com/
https://indieauth.com/pgp
https://xato.net/rel-publickey-and-rel-pgpkey-specification-7c42e588d5f2

+ 62
- 0
content/drafts/2020-05-06--web-designed-for-you.md View File

@ -0,0 +1,62 @@
---
title: "The web, designed for you"
author: Yarmo Mackenbach
slug: web-designed-for-you
date: "2020-05-05 21:49:58"
published: false
---
`#100DaysToOffload >> 2020-05-07 >> 013/100`
Today, something in me clicked. Granted, I did not discover anything new and people smarter than me are already solving the issue I will describe, but today, for me, everything aligned and it just clicked.
## What happened before: the dark theme discussion
A few days ago, I wrote a blog post and was quite unhappy with the way it looked: it turned out to have paragraphs that were too long with not enough section titles in between them. But I thought it was poorly themed (by me) and I learned a valuable lesson: [a dark theme actually worsens the experience for some](https://uxdesign.cc/accessibility-and-dark-ui-themes-f01001339b65). I knew people had preferences about dark/light theming, but what I did not realize before was that some people have a harder time reading text in a dark theme than in a lighter one.
I personally love a dark theme, but I could not bear the thought that, because I designed to my personal taste, my aesthetic choices could negatively impact those who were kind enough to spend some of their valuable time on my website.
My immediate reflex was to start thinking about implementing a theme selection mechanism. Problem is: I don't want any javascript on my website. I don't mind javascript, but I want to make a statement: my website is basic, it doesn't need to do anything other than display text, so no javascript is needed. If it's not needed, don't use it.
Problem is: how can my website remember one's theme preference without a minimal form of scripting? Switching a toggle on every page you visit is a no-go. PHP sessions are overkill. There has to be something smarter.
## What happened before: the animation comment
The other thing that led to today actually also happened today: Kevin published a blog post on [Adding A Scroll To Top Button Without JavaScript](https://kevq.uk/adding-a-scroll-to-top-button-without-javascript/) which describes exactly what the title says. It even includes an animated scroll. No javascript involved.
This got posted on [Lobste.rs](https://lobste.rs/s/uxhrp5/adding_scroll_top_button_without) and there, a [comment](https://lobste.rs/s/uxhrp5/adding_scroll_top_button_without#c_hmhpsc) immediately caught my attention.
> Fwiw, I think I prefer the abrupt jump.
> I’ll take my 200ms of time over the slick experience.
A walk to clear my head later and it all clicked. The answer to all of this: Level 5 Media Queries.
## My website, designed specifically for you
For me, web design has always been a form of art. Sure, practicality is central as a website has function, but the designer is free to shape the website to his or her heart's content. You are currently on my website. I designed it and shaped it to represent what my ideal website would look like. Sure, every website needs its own character but I would personally love to see more websites look a little more like mine. That's why I made it this way.
This has always been my thinking. Until today. Yes, this is my website, but I shouldn't design it for myself, I should be designing it for you. No, not all of you. You, the individual whom I feel honored to have as a visitor on my little part of the internet. When you come here, I should accommodate the place to make you feel at ease. But how?
I could include a dark/light theme switch. But what if you need high contrast to better read text? Just an extra switch and some styling, no problem. Is my website annoying you by having transitions? Slap one more switch on that page. You know what, I'll make a preferences page.
I am sure by now, you will find your stay a little more pleasing. Once you have finished reading the article, you'll be on your way to the next website. If you are lucky, you get to visit another preferences page first, adapt that website once again to your liking and then enjoy the content. If you're unlucky, well, all you get is what the designer has chosen for you.
## Media queries to the rescue
If only there was a smart layer between the user and the websites.
Of course that layer exists: the browser. If you could tell the browser you prefer light themes with high contrast and no animations and the browser could relay this information to all the websites, there would be no need for javascript, no need for preferences pages and absolutely no need to manually adapt each individual website to what you need.
This is where a brilliant solution comes in: [Level 5 CSS Media Queries](https://drafts.csswg.org/mediaqueries-5). The user sets preferences in the browser and websites can query those preferences and adapt the design to the results of those queries.
Check the value of `prefers-color-scheme` to know if the user wants a light or a dark theme. The same goes for `prefers-contrast` and `prefers-reduced-motion`. A user could even relay their preference regarding transparency (`prefers-reduced-transparency`).
This is my website, designed by me, with a user experience tailored to you, my precious visitor.
## My intent
I hereby state my intent to include these Level 5 Media Queries as soon as possible on my website. I will also advocate for their use whenever it is appropriate.
It's all still experimental technology and [support](https://caniuse.com/#search=prefers) still needs to improve. Also, user preferences as detailed as these pose obvious risks for fingerprinting.
Nonetheless, I hope this draft gets improved upon and will be accepted in the near future. Making the web more accessible for all will be greatly aided by tools like these, resulting in a more pleasant experience for both the designer and the end user.

+ 36
- 0
content/foss/foss.yaml View File

@ -0,0 +1,36 @@
- url-repo: https://github.com/influxdata/telegraf
url-item: https://github.com/influxdata/telegraf/pull/7585
repo: influxdata/telegraf
title: Add support for Solus distribution
id: 7585
state: merged
- url-repo: https://github.com/home-assistant/brands
url-item: https://github.com/home-assistant/brands/pull/908
repo: home-assistant/brands
title: Add NS icons and logos
id: 908
state: merged
- url-repo: https://github.com/home-assistant/core
url-item: https://github.com/home-assistant/core/pull/31623
repo: home-assistant/core
title: Handle incorrect config for Nederlandse Spoorwegen integration
id: 31623
state: merged
- url-repo: https://github.com/aquatix/ns-api
url-item: https://github.com/aquatix/ns-api/pull/21
repo: aquatix/ns-api
title: Update API URLs to conform with NS 2020 API
id: 21
state: merged
- url-repo: https://github.com/home-assistant/core
url-item: https://github.com/home-assistant/core/pull/30599
repo: home-assistant/core
title: Update NSAPI to version 3.0.0
id: 30599
state: merged
- url-repo: https://github.com/cortex-lab/allenCCF
url-item: https://github.com/cortex-lab/allenCCF/pull/34
repo: cortex-lab/allenCCF
title: Fixed duplicate wireframe plots
id: 34
state: merged

+ 23
- 0
content/notes/2020-04-25--100-days-to-offload.md View File

@ -0,0 +1,23 @@
---
title: "#100DaysToOffload"
author: Yarmo Mackenbach
slug: 100-days-to-offload
date: "2020-04-25 11:14:33"
published: true
---
`#100DaysToOffload >> 2020-04-25 >> 001/100`
On [Fosstodon](https://fosstodon.org), [@kev](https://fosstodon.org/@kev) wrote a [toot](https://fosstodon.org/web/statuses/104053977554016690) which started [#100DaysToOffload](https://fosstodon.org/tags/100DaysToOffload), a challenge to blog for 100 days about anything. Enthusiastic about this idea, I'm starting today and decided to make a continuously updated list about the other blogs participating in the challenge.
[Beyond the Garden Walls](https://write.privacytools.io/darylsun/)
[Sulairris](https://write.as/sulairris/)
[G's Blog](https://blog.marcg.pizza/marcg/)
[Freddy's Blog](https://write.privacytools.io/freddy/)
[Roscoe's Notebook](https://write.as/write-as-roscoes-notebook/)
[Nathan's Musings on the Web](https://degruchy.org/)
[Gregory Hammond](https://gregoryhammond.ca/blog/)
[Garron](https://www.garron.me/en/blog/)
[Secluded Site](https://secluded.site/)
Want to find even more participating blogs and links to every post? Search for the `#100DaysToOffload` hashtag on the fediverse ([Fosstodon link](https://fosstodon.org/tags/100DaysToOffload)).

+ 19
- 0
content/notes/2020-04-26--gaming.md View File

@ -0,0 +1,19 @@
---
title: "Gaming to relax"
author: Yarmo Mackenbach
slug: gaming
date: "2020-04-26 16:20:27"
published: true
---
`#100DaysToOffload >> 2020-04-26 >> 002/100`
Today hasn't been the smoothest of days and though I got ideas for a few more blog posts, I do not currently have the mental energy to work on any of them.
So instead, allow me to list a few games which tend to help me relax a bit, one of which I'll start up right after writing this post:
- Rocket League (great for both casual and competitive, usually I play with my two brothers)
- Post Scriptum (great for "relaxation through immersion")
- Deadside (great for "relaxation through immersion")
I play others as well, though these are nowadays my go-to's. If you happen to play any of these, [contact me](/contact) and let's play together, that always enhances the experience!

+ 33
- 0
content/notes/2020-04-27--pc-build.md View File

@ -0,0 +1,33 @@
---
title: "Building my first PC"
author: Yarmo Mackenbach
slug: pc-build
date: "2020-04-27 13:56:18"
published: true
---
`#100DaysToOffload >> 2020-04-27 >> 003/100`
While working in the lab for my PhD, I needed a good computer. It didn't need to be exceptional and though I did lots of biological and physics computation, I knew that GPU acceleration wasn't needed so that eliminated the need for complicated builds. I went with a NUC.
Two years ago, I started my homelab. All I needed was a relatively simple PC that I wouldn't mind leaving turned on permanently. I opted for a NUC.
Then I needed a PC I could use at home, either to do some more work or play some game. Not expecting great gaming results, I still chose a NUC.
Those "not great gaming results", I got! The 7i7 has a built-in GPU and games can definitely be played on it, but it struggled with reliability for competitive gaming. This year, that's all changing. I have built my own PC for the first time, not only allowing me to play games in a more comfortable way, this will also be my new work-at-home computer as well as being extremely performant for video editing and music mixing (thank you, foam-padded case!).
I opted for an [AMD Ryzen 5 3600](https://www.amd.com/en/products/cpu/amd-ryzen-5-3600) on an [Asus PRIME B450M-A](https://www.asus.com/Motherboards/PRIME-B450M-A/) motherboard, paired with an [AMD RX580](https://www.amd.com/en/products/graphics/radeon-rx-580) GPU. The OS and software go on an NVMe m.2 drive, games on a SATA SSD, data on a 2TB HDD. 16GB of DIMM DDR4 RAM.
My [userbenchmark](https://www.userbenchmark.com/UserRun/27232925):
- UserBenchmarks: Game 67%, Desk 123%, Work 96%
- CPU: AMD Ryzen 5 3600 - 92.5%
- GPU: AMD RX 580 - 60.8%
- SSD: Kingston SA2000M8250G 250GB - 241.9%
- SSD: WD Green 240GB (2018) - 56.7%
- SSD: WD Green 240GB (2018) - 51.5%
- HDD: Seagate Barracuda 2TB (2018) - 101.7%
- RAM: Corsair Vengeance LPX DDR4 3200 C16 2x8GB - 83.4%
- MBD: Asus PRIME B450M-A
Man, I love team red.

+ 17
- 0
content/notes/2020-04-29--missed-a-day.md View File

@ -0,0 +1,17 @@
---
title: "Missed a day"
author: Yarmo Mackenbach
slug: missed-a-day
date: "2020-04-29 09:02:57"
published: true
---
`#100DaysToOffload >> 2020-04-29 >> 004/100`
Well, that was fast. I missed my first day in the #100DaysToOffload challenge. I am not one to make up excuses and reasons why this has happened.
Though I am not planning to share the layout of my entire day yesterday in this blog post, I will write a little bit about an issue I have been facing lately: memory problems. After talking with experts, this is apparently a common issue people face after prolonged exposure to stressful situations. As a reference, I never had problems remembering things before the PhD; sure, my memory was not the best out there, but it served me well. Nowadays, I do tend to forget things on a daily basis unless I write them down immediately. Well, I will still forget them, but at least I'll have an indelible reminder. I remembered on multiple occasions yesterday to write a blog post, but I kept forgetting it a bit later and I didn't make a note of it, so&hellip;
I cannot wait for this to be over. Until then, I will try something new to help me specifically with #100DaysToOffload: I will leave a fully charged Thinkpad by my bedside in the evening, and first thing in the morning, I will write my blog post for that day.
Let's try that :)

+ 17
- 0
content/notes/2020-04-29--typography-ellipsis.md View File

@ -0,0 +1,17 @@
---
title: "Typography &middot; Ellipsis"
author: Yarmo Mackenbach
slug: typography-ellipsis
date: "2020-04-29 21:21:43"
published: true
---
`#100DaysToOffload >> 2020-04-29 >> 005/100`
I like typography and exploring the stories behind special characters. Today, I'd like to talk about one that many use frequently, myself included, but often not in the "digitally correct" way (IMHO).
Yes, I'm talking about the ellipsis. Symbolised by three consecutive dots, it signals that a sentence was cut short and the reader can finish it in his or her head by knowing the context. Surrounded by brackets, it is used to signal that a passage was omitted but the meaning of the remaining sentence is unaltered by that omission. Messaging apps use it to signal the other person is writing.
You may or may not know this, but both on our computers and on our phones, the ellipsis is actually a special character which can be used instead of writing three separate dots. On a phone, it's accessible under one of the keys by long-pressing on it. On the computer, I usually just copy-paste it, but on Ubuntu, it's inserted by pressing `ctrl+shift+u`, then typing `2026` followed by an `enter`. On Windows, it's inserted by pressing `alt + 0 1 3 3` on the numpad. In both HTML and markdown, it's inserted by writing `&hellip;`.
[Wikipedia](https://en.wikipedia.org/wiki/Ellipsis)

+ 19
- 0
content/notes/2020-05-01--icann-rejects-sale-org.md View File

@ -0,0 +1,19 @@
---
title: "A response to ICANN's refusal to sell .ORG"
author: Yarmo Mackenbach
slug: icann-rejects-sale-org
date: "2020-05-01 09:33:40"
published: true
---
`#100DaysToOffload >> 2020-05-01 >> 007/100`
A response to [ICANN's refusal to sell .ORG](https://www.icann.org/news/blog/icann-board-withholds-consent-for-a-change-of-control-of-the-public-interest-registry-pir) in 3 movements.
My first reaction was sarcastic when I saw the cheer on social media: "look at us celebrating like there's no tomorrow because a non-profit organisation chose to NOT sell a TLD made for non-profit organisations to a for-profit corporation".
But they indeed chose not to. They really chose not to. They didn't do it. The people spoke and the people won. The powers that be got greedy, misread the room and adjusted their path because and only because of the people. A great, great thanks to all who wrote letters to the California Attorney General and made their voices heard online. This is a victory for all.
Today, we celebrate. Unfortunately, tomorrow, we need to think about what happens next. The internet is still under threat. A group of people have full power over what the internet looks like and they have shown themselves to be untrustworthy. For each domain we buy, we pay an ICANN fee, yet ICANN has made it clear that it does not have our interests at heart. Stay safe outside and vigilant on the web.
If within your possibilities and beliefs, please support the [OpenNIC project](https://www.opennic.org/) (no affiliation, just a fan), a "user-owned and -controlled DNS root offering an alternative to ICANN and the traditional TLD registries".

+ 18
- 0
content/notes/2020-05-04--break-from-raid.md View File

@ -0,0 +1,18 @@
---
title: "Taking a break from raid"
author: Yarmo Mackenbach
slug: break-from-raid
date: "2020-05-04 18:43:31
"
published: true
---
`#100DaysToOffload >> 2020-05-04 >> 010/100`
I have three main hard drives in a [snapraid](http://www.snapraid.it/) setup in my NAS and a few extra drives for backup. All drives are connected to the server (NUC) via a JBOD USB drive case. I love snapraid, it has served me well and most certainly will in the future.
But now, I need the drive space more than I need a solution for my data to continue being served after a drive has died. As we all know, raid is not a backup; it's a solution to ensure the data stays available while one or more drives are not. Perfect for critical applications, but let's be honest, my homelab is not one, especially with me sitting next to it 24/7.
Thus soon, when I have saved a bit more, I will expand my homelab to a larger array of drives, all connected directly through SATA and all raided using snapraid with ample backup capacity. That time is unfortunately not now. So out goes snapraid and in goes the full capacity of my third drive.
They are WD Red 6TBs. Yes, I have checked, they are CMR. And yes, these are the last drives I'll ever buy from WD.

+ 15
- 0
content/notes/2020-05-05--varken.md View File

@ -0,0 +1,15 @@
---
title: "Varken: Plex monitoring solution"
author: Yarmo Mackenbach
slug: varken
date: "2020-05-05 21:49:58"
published: true
---
`#100DaysToOffload >> 2020-05-05 >> 011/100`
Today, I discovered [Varken](https://github.com/Boerderij/Varken), a neat solution to monitor your Plex ecosystem (including Sonarr, Radarr, etc.) and store the data in your InfluxDB instance. This is a great addition, as I can now make Grafana or Chronograf dashboards encompassing both server metrics and Plex metrics. The reason this is important is that I have a relatively low-power server (NUC) and a single Plex stream can have a noticeable impact on the CPU usage.
Varken requires a [Tautulli](https://tautulli.com/) instance to collect the data from as well as a [MaxMind](https://www.maxmind.com) API key which unfortunately isn't optional. I run all software mentioned in this post in separate docker containers.
Also, 11th post for #100DaysToOffload today and 11 is my lucky number :)

+ 17
- 0
content/notes/2020-05-06--search-engine-indexing.md View File

@ -0,0 +1,17 @@
---
title: "Search engine indexing: DDG vs Google"
author: Yarmo Mackenbach
slug: search-engine-indexing
date: "2020-05-06 10:10:45"
published: true
---
`#100DaysToOffload >> 2020-05-06 >> 012/100`
Having my own website means I get to control what happens on a tiny tiny part of the internet; it's my space. More importantly, I want to have a bit of control about what people see when they decide to put my name in a search engine. This is an important reason to have a website in the first place: I don't believe anyone would want their Facebook page to be their first impression, or anything the search engine decides to put first.
Months ago, I did a little test, searched my name in both Google and DuckDuckGo, didn't see my website which I just started, didn't think too much of it and went on with my life. Yesterday, I checked again. Let's compare the experiences.
Without any of my input, DuckDuckGo had found my website and it's the first thing anyone sees when searching for my name: mission accomplished. On Google, my website was not on the first page. Or the second. Or the third. After looking around in their "Webmaster Tools", I found out they had never figured out my website existed. I had to manually request the indexing which they say will be done at some point. Couldn't request an indexing without a good ol' game of finding crosswalks in a never-ending series of small images presented in a 3x3 grid.
In your opinion, what is the better experience?

+ 13
- 0
content/notes/2020-05-07--homelab-crashed.md View File

@ -0,0 +1,13 @@
---
title: "My homelab crashed, time for a break?"
author: Yarmo Mackenbach
slug: homelab-crashed
date: "2020-05-07 19:53:23"
published: true
---
`#100DaysToOffload >> 2020-05-07 >> 013/100`
It's not the first time my homelab has crashed and it won't be the last time. Something with the hard drives. I'll figure it out, no doubt. But despite needing it for various services throughout my daily routine, I have decided to let the homelab rest for a few days maybe.
It has been running almost non-stop since I started it about two years ago, I never made major changes, always gradually improved upon it. Now, the time may have come to take a hard look at what I started with, what I ended up with, learn a few valuable lessons and perhaps start over. My homelab could use a 2.0 moment.

+ 13
- 0
content/notes/2020-05-08--deletekeybase.md View File

@ -0,0 +1,13 @@
---
title: "Time to #DeleteKeybase"
author: Yarmo Mackenbach
slug: deletekeybase
date: "2020-05-08 11:54:54"
published: true
---
`#100DaysToOffload >> 2020-05-08 >> 014/100`
If you are reading this, there's a big chance you already heard the news: Zoom acquired Keybase. Whether you liked it from the beginning or not, I think most can agree that after the acquisition, there's no more reason to trust the platform and thus to use it. What happens to our keys now is anyone's guess.
Luckily, I had the precaution to never upload my private keys, so all I had to do was donate the remainder of my stellar coins to good causes (such as [Tails](https://tails.boum.org/donate/)), press the [big red button](https://keybase.io/account/delete_me) and remove any links to them from my website.

+ 31
- 0
content/notes/2020-05-12--notes-section.md View File

@ -0,0 +1,31 @@
---
title: "A place for notes"
author: Yarmo Mackenbach
slug: notes-section
date: "2020-05-12 22:57:59"
published: true
---
`#100DaysToOffload >> 2020-05-12 >> 017/100`
## The #100DaysToOffload challenge
Participating in #100DaysToOffload is fun and encourages one to think less and do more when it comes to blogging. That last part sounds both good and bad.
It's good because more content actually gets published: it discourages one from keeping a post in "draft" status for an indeterminate amount of time and, well, you know how that goes, the post never gets published. It teaches you a habit of working in a permanent cycle of thinking, writing, posting and moving on to the next cycle.
But the drawback is two-fold. Content quality can be diminished. I have noticed I'm not always content with the phrasing of certain sentences. I also regularly get reminded that a post lacks certain disclaimers or counter-arguments to the main rationale.
The other issue I'm currently facing is flooding. I see my personal website as having a professional utility as well: I'd like to point potential employers to my blog so that they can get a real sense of how I think and what I am good at. Administering a homelab, keeping DNS records, thinking about social structures on the internet, etc. I'd like for that "long-form" content not to be drowned out by waves of "short-form" posts because of a challenge.
## The solution
I considered tags and though I definitely need them, they are not the solution. The default view would still contain all the posts. Also, I'm not looking forward to making an RSS feed based on (excluding) tags.
Inspired by [Kev](https://fosstodon.org/@kev) and a discussion with [Ali Murteza Yesil](https://fosstodon.org/@murtezayesil) (thanks again :D), I've decided to implement a [notes](/notes) section meant to contain all the short-form posts. Random thoughts go in the [notes](/notes), elaborate thoughts go in the [blog](/blog). A separate RSS feed will be implemented very soon. A note could also be a link to a blog post.
## Continuing the challenge
I will continue the challenge with posts being either a blog post or a note. I will, however, refrain from posting every day. Some days are devoid of post-worthy thoughts, some days do not allow for proper writing time. I will not write notes in advance, that defeats the purpose of the challenge.
I'm already noticing benefits from participating: I take more time to write, I post more and that leads to me having more interesting discussions. I am thankful for its existence but will also adapt my participation to my lifestyle and schedule.

+ 26
- 0
content/notes/2020-05-18--mailvelope.md View File

@ -0,0 +1,26 @@
---
title: "Mailvelope: PGP for all"
author: Yarmo Mackenbach
slug: mailvelope
date: "2020-05-18 16:29:21"
published: true
---
`#100DaysToOffload >> 2020-05-18 >> 020/100`
[PGP](https://en.wikipedia.org/wiki/Pretty_Good_Privacy) is a "pretty good" way of encrypting messages and files, but it often gets criticised for being too cumbersome to work with, which is sadly true. To counter this, certain products and services use PGP internally and provide an easy-to-use interface. Take Protonmail, which uses [PGP to automatically encrypt emails between protonmail addresses](https://protonmail.com/support/knowledge-base/how-to-use-pgp/).
Handy, but we are forgetting something. If the PGP protocol is the lock, then the PGP keys are, well, the keys. Protonmail has both the lock and the key on their servers. That's not secure…
Luckily, there are more tools, like [Mailvelope](https://www.mailvelope.com/en/) ([source code](https://github.com/mailvelope/mailvelope)). It's nothing more than a browser add-on, meaning it will automatically work with any webmail service out there. Encrypting your emails becomes very simple (I also have a [more detailed guide](https://yarmo.eu/contact#mailvelope)).
- Load the recipient's public key in Mailvelope
- Open your webmail service
- Click the pink Mailvelope logo
- Choose the key of the recipient
- Write the email
- Click encrypt and send the email
That is actually quite easy and feasible even for less tech-savvy people.
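If you'd rather do the same thing from a script than through a browser add-on, the underlying idea is identical: encrypt locally against the recipient's public key and never hand over your private key. A minimal sketch using the python-gnupg package (my own choice for illustration, not something Mailvelope itself relies on; the file name and address are placeholders):
```python
# Rough sketch: encrypt a message to a recipient's public key using a local
# GnuPG install via the python-gnupg package (illustrative, not Mailvelope).
import gnupg

gpg = gnupg.GPG()  # uses the default ~/.gnupg keyring

# Load the recipient's public key (hypothetical file name).
with open("recipient_public_key.asc") as f:
    gpg.import_keys(f.read())

# Encrypt the message body; the private key never leaves this machine.
result = gpg.encrypt("Hello, this body will be encrypted.", "recipient@example.com",
                     always_trust=True)  # skip the web-of-trust check for this sketch
if result.ok:
    print(str(result))  # ASCII-armored ciphertext, ready to paste into an email body
else:
    print("Encryption failed:", result.status)
```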
But keep in mind (the usual email/PGP disclaimer): email is inherently insecure. Email metadata (including the subject line!) is not encrypted, only the body is. Information about your secret communication can be inferred from the metadata. Though PGP-encrypted emails are nice to have, truly private communication is achieved using [encrypted instant messengers](https://www.privacytools.io/software/real-time-communication/).

+ 21
- 0
content/notes/2020-05-19--bf1-revival.md View File

@ -0,0 +1,21 @@
---
title: "Battlefield 1 Revival"
author: Yarmo Mackenbach
slug: bf1-revival
date: "2020-05-18 23:59:59"
published: true
---
`#100DaysToOffload >> 2020-05-19 >> 021/100`
Since it was announced that Battlefield V will stop receiving updates earlier than expected, the general feeling in the Battlefield community has been to go back to the older titles in the series. After all, the game is still not fun to play and knowing there will be no brighter future, why bother?
I've played Battlefield 1 a few times lately, but I have noticed only today that there was a "Back To Basics" game mode loaded on many servers. And it is a game changer, no pun intended.
Battlefield games are crazy. Massive amounts of infantry, vehicles and planes, all at the same time. But recently, I've been enjoying a more tactical approach to the genre, best represented by Post Scriptum and Hell Let Loose. There's no running around in those games; it's all about teamplay, intelligence and tactical movement.
The new (reintroduced?) "Back To Basics" game mode in Battlefield 1 completely changes the game and almost turns it into a tactical shooter. Vehicles cannot be used and all infantry use the same rifle their faction historically used. Not only is this immersive, the lack of excessively powerful machine guns makes the game much more reliant on flanking and proper teamplay. It is hard for individuals to excel when they can't use their favorite weapon, optimised for clearing an entire room. You need your teammates now.
Only downside: the base game obviously wasn't designed for such a game mode and after playing a few rounds of Grand Operations, I've yet to see an attacking team win.
For some casual team-based shooting with a tactical twist, Battlefield 1 has become an excellent choice. Unlike Battlefield V, this is a game I just cannot see phasing out of popularity anytime soon.

+ 13
- 0
content/notes/2020-05-21--smh.md View File

@ -0,0 +1,13 @@
---
title: "SMH"
author: Yarmo Mackenbach
slug: smh
date: "2020-05-21 09:42:23"
published: true
---
`#100DaysToOffload >> 2020-05-21 >> 022/100`
SMH means "shaking my head".
You probably already know this, but one of my idiosyncrasies is that I just cannot remember the meaning of that acronym, no matter how hard I try.

+ 19
- 0
content/notes/2020-05-22--lunasea.md View File

@ -0,0 +1,19 @@
---
title: "LunaSea: FOSS FTW"
author: Yarmo Mackenbach
slug: lunasea
date: "2020-05-22 19:47:38"
published: true
---
`#100DaysToOffload >> 2020-05-22 >> 023/100`
## Out with the old
A couple of weeks ago, I finally discovered a FOSS alternative to nzb360, a great app for managing Plex, Radarr, Sonarr, etc. I wish I could have kept using nzb360, but unfortunately, the app relies too heavily on Google services and though I have paid for it, I can no longer use it, as my LineageOS phone can't process purchases made on official Google Android phones.
## In with the new
Named [LunaSea](https://www.lunasea.app), it does everything it should (manage Sonarr, Radarr, Lidarr and NZB clients), it looks fantastic, it's available for both Google Android and iPhone and, of course, it's [FOSS](https://github.com/LunaSeaApp/LunaSea).
Only thing I'm missing is a donation button. And a fediverse account :)

+ 15
- 0
content/notes/2020-05-23--projects-section.md View File

@ -0,0 +1,15 @@
---
title: "A new Projects section"
author: Yarmo Mackenbach
slug: projects-section
date: "2020-05-23 22:51:43"
published: true
---
`#100DaysToOffload >> 2020-05-23 >> 024/100`
I've added a new [Projects](/projects) section to my personal website, the new home for projects I'm either still thinking of doing or actually developing. As these projects will be open-source, so will my preparation for them.
The benefit of doing this is that if you look around and see a project you like or have experience with, I would love for you to [contact me](/contact) so we can work together.
As of today, there are only two projects listed; I have more in my head which I will write down over the coming days.

+ 41
- 0
content/notes/2020-05-25--ending-100-days-to-offload.md View File

@ -0,0 +1,41 @@
---
title: "Ending #100DaysToOffload"
author: Yarmo Mackenbach
slug: ending-100-days-to-offload
date: "2020-05-25 16:57:10"
published: true
---
`#100DaysToOffload >> 2020-05-25 >> 025/100`
Today, I'm ending my participation in the #100DaysToOffload challenge at precisely a quarter of the way. I'm happy to have been part of it as it has given me much.
I only had a handful of blog posts when I started my personal website, scattered over a period of multiple months. I didn't write much, I didn't take the time for it and, more importantly, I didn't see the point. It was an interesting experience, for sure, but what else? Was it just writing for writing's sake?
Along came the #100DaysToOffload challenge. I joined the minute I saw the first toot by Kev and immediately wrote about, well, participating in the challenge.
## Benefits
Twenty-five posts later, I have learned a great deal. Forcing myself to post something every day taught me writing doesn't have to be a long and tedious process. Quite the opposite: it forced my perfectionist brain to settle for "good enough" content.
Posting links to my blog posts (and later, notes) on the fediverse has on several occasions sparked interesting debates with interesting people holding very different views. This has been the most rewarding benefit of all.
I am grateful to have learned this and I will go forth on the path I am now walking, posting regularly about all things that interest me and having eye-opening conversations. I would also have continued the challenge, were it not for a few downsides.
## Drawbacks
First and foremost, the requirement to post every day. I know, I know, it didn't have to be every day. However, skipping every other day would drag this challenge out to 200 days and also go a bit against the whole idea behind it.
I am already mentally exhausted from my recent PhD experience. Although the experience of writing is freeing, there is definitely the possibility of having "too much of a good thing". Not writing every day also gives a feeling of failure as I'm letting myself down for not keeping up. And that is just something I could definitely do without right now.
Posting this much content also dilutes the pool of topics and results in slightly lower quality content. I've talked about this before and it is somewhat the purpose of the challenge: just write and publish; quality comes with experience, not from delaying posts for weeks while endlessly fine-tuning every word.
The thing is, I also have this blog for a more serious reason: to showcase my capacity for reasoning and my tech skills where my educational background is somewhat lacking. Sure, having done a PhD in Neuroscience is cool but that doesn't tell you (a future employer?) that I have experience with containers and networks and FOSS and… You get the point.
In an unexpected turn of events, the challenge is now holding me back in a way: I feel guilty when not writing, and when I do write, it's often about a simpler topic just to get something out there, leaving me with less time to dig into the stuff I now really want to write about.
## In the end
So there you have it. I would love to post every other day and I will. But with no obligations or reasoning. Just because I want to.
I will now dive deeper into the stuff I am passionate about and with more vigor and regularity. And that, I owe to the #100DaysToOffload challenge.

+ 33
- 0
content/notes/2020-06-01--invidious.md View File

@ -0,0 +1,33 @@
---
title: "Invidious"
author: Yarmo Mackenbach
slug: invidious
date: "2020-06-01 13:05:58"
published: true
---
Small acts of resistance are all we need. Together, we make change.
## Compliance: YouTube
Everyone knows YouTube. It contains more than enough content to keep you entertained for a couple of lifetimes.
The thing is, it's owned by Google and has enough privacy-invading trackers and ads to follow and pester you during all of these lifetimes.
## Resistance: Invidious
Please consider using Invidious ([github repo](https://github.com/omarroth/invidious)), a free and open source service that sits between you, the user, and the YouTube servers. It eliminates ads, does not use the YouTube APIs and has many features YouTube should always have had (audio-only mode? Yes please).
Several [instances](https://github.com/omarroth/invidious/wiki/Invidious-Instances) are hosted around the world; make sure to visit the one nearest to you for the best experience.
## Going beyond
But you can go further. If you use Firefox, install the [Invidition](https://codeberg.org/Booteille/Invidition/issues) addon to automagically redirect YouTube links to Invidious (again, make sure to select the closest instance). On Android, install [UntrackMe](https://www.f-droid.org/en/packages/app.fedilab.nitterizeme/) to do the exact same thing: YouTube links will be opened in Invidious-compatible apps such as [NewPipe](https://f-droid.org/en/packages/org.schabi.newpipe/).
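As an illustration of what these add-ons essentially do, redirecting a YouTube link to Invidious is little more than swapping the hostname. A minimal Python sketch (the instance name is only an example, pick whichever is closest to you):
```python
# Minimal sketch: rewrite YouTube links so they point at an Invidious instance.
from urllib.parse import urlparse, urlunparse

INVIDIOUS_HOST = "invidio.us"  # example instance; use the one nearest to you

def to_invidious(url: str) -> str:
    parts = urlparse(url)
    if parts.netloc.endswith("youtube.com"):
        # Keep the path and query (video id, timestamp) and only swap the host.
        return urlunparse(parts._replace(scheme="https", netloc=INVIDIOUS_HOST))
    if parts.netloc == "youtu.be":
        # Short links carry the video id in the path.
        return f"https://{INVIDIOUS_HOST}/watch?v={parts.path.lstrip('/')}"
    return url  # not a YouTube link, leave it alone

print(to_invidious("https://www.youtube.com/watch?v=jNQXAC9IVRw"))
```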
## Drawbacks
The main drawback is that you are no longer supporting the content creators, which is a big issue. It's easy to say "they shouldn't be relying on YouTube and ad revenue" and I agree with that statement to some degree, but you'll still be sad when your favorite content creator quits.
Try to get in contact with them: if they're small, this might be feasible; if they're big, you probably don't have to worry about them quitting anyway. Ask them and push them towards accepting other methods of donation.
And then donate.

+ 18
- 0
content/notes/2020-06-08--nuc-fan-cleaning.md View File

@ -0,0 +1,18 @@
---
title: Friendly reminder to clean your NUC's fan
author: Yarmo Mackenbach
slug: nuc-fan-cleaning
date: "2020-06-08 13:35:15"
published: true
---
[Intel NUCs](https://www.intel.com/content/www/us/en/products/boards-kits/nuc.html) make for some great low-entry-barrier, low-power-consumption servers and homelabs. I have three NUCs at home, two of which have played a server role. They span several generations: a 5i3, a 7i7 and an 8i5.
And they all have one thing in common: sooner or later, their fans clog up with dust, they heat up, they make more noise and perform worse.
If you haven't cleaned the fan in a while, your best bet is to open the NUC up and clean the fan and the exhaust.
To prevent having to open a NUC up too often, I bought a few cans of compressed air and regularly blow air through the device. I'm also looking into placing air filters near the air intake.
![NUC cools down when fan is cleaned](/content/img/nuc_temp_fan_cleaning.png)
*Can you tell when compressed air was applied to the NUC?*
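If you want to keep an eye on that temperature curve yourself, something like the minimal sketch below will do on Linux. It assumes the psutil package and an Intel "coretemp" sensor, so adjust the sensor name to whatever your machine reports:
```python
# Minimal sketch: log the hottest CPU core temperature once a minute so a
# clogged fan shows up as a slowly rising baseline. Assumes Linux + psutil
# and an Intel "coretemp" sensor; adjust the name for your hardware.
import time
import psutil

def hottest_core() -> float:
    readings = psutil.sensors_temperatures().get("coretemp", [])
    return max((r.current for r in readings), default=float("nan"))

while True:
    print(f"{time.strftime('%Y-%m-%d %H:%M:%S')}  {hottest_core():.1f} °C")
    time.sleep(60)  # one reading per minute is plenty for spotting dust buildup
```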

+ 15
- 0
content/notes/2020-06-10--avatar.md View File

@ -0,0 +1,15 @@
---
title: About my avatar
author: Yarmo Mackenbach
slug: avatar
date: "2020-06-10 22:18:19"
published: true
---
Every so often, I get asked about the origin and make of my avatar, as seen on my [website](/) and my [Fosstodon profile](https://fosstodon.org/@yarmo/). So, here it is.
Inspired by the avatars of [Kev@fosstodon.org](https://fosstodon.org/@kev) and [Mike@fosstodon.org](https://fosstodon.org/@mike), both drawn by Kev, I decided to draw my own in a similar style. The keen-eyed among you will indeed spot a few differences in design.
I used [Inkscape](https://inkscape.org/) to draw over a photo of mine, simple vectors only, no special brushes required.
Due to the similarity, I did ask Kev to confirm he had no objections to me using this avatar as my profile picture. Other than using their avatars as stylistic references, there are no other links between my avatar and theirs, their owners or the [Fosstodon instance](https://fosstodon.org).

+ 22
- 0
content/notes/2020-06-11--plausible-start.md View File

@ -0,0 +1,22 @@
---
title: Start of the Plausible experiment
author: Yarmo Mackenbach
slug: plausible-start
date: "2020-06-11 12:01:57"
published: true
---
During the roughly 6 months since I started this website, I have not been using any website statistics whatsoever. I did not see the point: this website was not designed to gather an audience in any fashion; it was primarily meant to be a permanently-updated online CV. But given that I am leaving academia, which I have been preparing for over the last nine years, I figured I could use any means of getting my name out there.
Recently, I have taken an interest in blogging about selfhosting, online privacy and related technical subjects. In an attempt to understand if people see these articles or any other section of my website, I will start an experiment gathering statistics using the privacy-friendly [Plausible](https://plausible.io).
## The Plausible experiment
In a month or so, I will look back at the data gathered and see if anything of interest can be learned. The danger is that once I observe that some articles perform better than others, I change my writing process to conform to what the statistics say performs best.
That is not my intention, for the simple reason that this blog is not made to target a specific audience but rather to serve as an outlet for things I learn and that interest me. If I notice my writing behavior change due to insights gained from the statistics, I will end the experiment.
## A comparative experiment
In the near-future, I will also compare what can be learned from a "client-side" statistics solution like [Plausible](https://plausible.io) with what can be learned from a "server-side" statistics solution like [GoAccess](https://goaccess.io).
The reason I am not performing this comparative experiment right now is that, somehow, the two solutions above don't support a single common log format between them. It seems it was decided a month or so ago that [GoAccess should conform to Caddy's format](https://github.com/allinurl/goaccess/issues/1768#issuecomment-629652452) ([separate issue on Caddy's side](https://github.com/caddyserver/caddy/issues/3417#issuecomment-629836804)). Until that happens (or until I figure out a way to parse Caddy's log format in GoAccess), this comparative experiment will have to wait.
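In the meantime, one possible workaround is converting Caddy's JSON access log into the Common Log Format that GoAccess already understands. A rough sketch (the JSON field names are my assumptions about Caddy v2's output, so verify them against your own logs before trusting the numbers):
```python
# Rough sketch: turn Caddy v2 JSON access log lines (read from stdin) into
# Common Log Format lines for GoAccess. The field names ("ts", "request",
# "remote_addr", "status", "size") are assumptions; verify against your logs.
import json
import sys
from datetime import datetime, timezone

for line in sys.stdin:
    try:
        entry = json.loads(line)
    except json.JSONDecodeError:
        continue  # skip anything that isn't a JSON access log entry
    req = entry.get("request", {})
    when = datetime.fromtimestamp(entry.get("ts", 0), tz=timezone.utc)
    host = req.get("remote_addr", "-").split(":")[0]  # drop the client port
    request_line = f"{req.get('method', '-')} {req.get('uri', '-')} {req.get('proto', '-')}"
    print(f'{host} - - [{when.strftime("%d/%b/%Y:%H:%M:%S %z")}] '
          f'"{request_line}" {entry.get("status", 0)} {entry.get("size", 0)}')
```
The converted lines could then be fed to GoAccess as a plain COMMON-format log. It remains a stopgap, not a substitute for proper support on either side.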

+ 27
- 0
content/projects/git4db.md View File

@ -0,0 +1,27 @@
---
title: "Git for databases"
status: idea
slug: git4db
date: "2020-05-23 00:22:25"
listed: true
---