Work-related tech I use privately
I use some of my work tools outside of work. Without really needing them.
FreeBSD and Linux
There is no better way to get to know a piece of software - especially one as big as an operating system - than by using it whenever one can, for various tasks and purposes. Fiddle, hack, go beyond what's necessary, achieve things that are not strictly required. Even if just for fun.
And when one learns an OS well, other OSes may start to feel less welcoming, sometimes even hostile.
Commercial desktop operating systems demand more and more attention by causing various distractions. I use FreeBSD, with Linux in virtual machines, as my daily driver.
Do I have all the nice stuff? Not all, but the necessary things - yes. Do I have anything I don't want? Nope - no ads, no unwelcome interface changes shipped with unrelated bug patches, no invasive phoning home, no unsolicited file scanning, no ongoing removal of configurability, nor other unpleasantries.
I don't need to apologize to my OS for not wanting something, pretend that I will do something later, avoid a question, or be forced to suggest that I'm willing to talk about it the next day, over and over. No means no. There's this mutual understanding and no need for little, temporary non-aggression pacts. There is no war going on, nor is there any feeling of defeat. My OS doesn't try to change my behavior for someone else's benefit.
Overall, it's somewhat crude but efficient and doesn't cause distractions. I don't need my driver to be fancy. I'm not Miss Daisy.
Fail2ban and blacklistd
Intrusion detection/prevention systems are the kind of software I would use even if none of my clients were using them, to protect both publicly exposed services and software, and my workstations.
On Linux I usually use Fail2ban with iptables or an external firewall; on FreeBSD it's the built-in blacklistd, which integrates nicely with the pf firewall.
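For reference, the blacklistd side takes only a few lines; this is a minimal sketch of the documented FreeBSD setup, not my exact configuration:

```conf
# /etc/blacklistd.conf (sketch): block an address on the ssh port
# after 3 failed attempts, for 24 hours
[local]
ssh             stream  *       *       *       3       24h
```

pf then only needs an anchor that blacklistd manages by itself, e.g. `anchor "blacklistd/*" in on $ext_if` in pf.conf.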
Often just one of my public systems is hit with an unsolicited vulnerability scan, a brute-force attempt or another kind of attack detectable by Fail2ban. Fail2ban sends info on particular attack attempts to a hub. The hub then generates firewall-ready rules and, separately, lists of addresses and networks that are large sources of abuse, for different kinds of services. Separate lists are created for ongoing web attacks, mail attacks, SSH attacks and so on.
Other systems apply those firewall rules, or use particular address lists to filter such unwanted traffic via access lists, or on their respective proxies or CDNs.
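The hub's aggregation step is simple enough to sketch in shell. The actual report format isn't covered here, so this assumes - hypothetically - that attack reports arrive as `<service> <address>` lines in a file:

```shell
#!/bin/sh
# Hypothetical hub aggregation: read "<service> <address>" report lines
# and produce one deduplicated blocklist file per service.
aggregate_reports() {
  # $1: file with report lines, e.g. "ssh 203.0.113.5"
  awk '{ print $2 >> ("blocklist-" $1 ".txt") }' "$1"
  # Deduplicate each per-service list in place.
  for list in blocklist-*.txt; do
    sort -u "$list" -o "$list"
  done
}
```

A consumer running pf could then load such a list into a table with `pfctl -t ssh-abuse -T replace -f blocklist-ssh.txt`.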
In short, on top of an IDS (Intrusion Detection System), I built a blocklist to prevent certain attacks before they hit more servers and services.
Terraform and Ansible
Two more pieces of software I would be using even if my customers weren't: Terraform for cloud deployments and Ansible for systems provisioning. Personal projects need disaster recovery, too.
In fact, I started using both Ansible and Terraform before having any paid duties involving them. Later, I introduced IaC (Infrastructure as Code) to several companies and organizations.
Prior to that, I was provisioning systems with shell scripts, and cloud resources with the provided web panels. That's no way to go with larger infrastructures, or in case of a disaster. One position I held required all systems to be manually "checklisted" by two engineers before going to production. Needless to say, this company provisioned systems rather rarely, and such a requirement would not fly with modern cloud setups.
I keep my Terraform cloud deployment definitions and Ansible provisioning roles and playbooks up-to-date to always be able to quickly spin up a fleet of VMs when needed for some tests. Not all customers have proper lab/testing environments, sometimes obtaining such access takes time, and that unnecessarily disrupts the flow.
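A sketch of the kind of throwaway fleet definition I mean - the provider, AMI variable and instance type here are placeholders, not a real setup:

```hcl
# Hypothetical test fleet: three small VMs, destroyed after the tests.
resource "aws_instance" "lab" {
  count         = 3
  ami           = var.lab_ami   # placeholder variable
  instance_type = "t3.micro"

  tags = {
    Name = "lab-${count.index}"
  }
}
```

`terraform apply` brings the fleet up in minutes; `terraform destroy` removes it once the tests are done.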
Apart from managing servers, I also find Ansible quite OK for provisioning software on my laptops. With this part automated, potentially problematic upgrades (like disk firmware) and OS reinstalls are less painful, and the recovery process requires little to no user attention.
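A minimal sketch of such a laptop playbook - the package names and dotfile path are examples, not my actual list:

```yaml
# Hypothetical laptop provisioning playbook.
- hosts: laptops
  become: true
  tasks:
    - name: Install base tooling (example packages)
      ansible.builtin.package:
        name:
          - vim
          - git
          - tmux
        state: present

    - name: Deploy dotfiles (example path)
      ansible.builtin.copy:
        src: dotfiles/vimrc
        dest: "/home/{{ ansible_user }}/.vimrc"
```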
A very valuable lesson I learned regarding Terraform - and this was when I already held HashiCorp's certification - is that Terraform is only as good as the providers it uses (the Terraform name for modules/plugins responsible for talking to particular APIs), or at the very least their ability to handle API errors. I'm looking at you, cloud vendor with liquid in its name.
Postfix and Dovecot
I've been handling my own mail for over 20 years now. Call me a madman, but that's my preferred way, for multiple reasons whose list doesn't fit in this post.
Postfix has been my go-to MTA (Mail Transfer Agent, the server-to-server mail software) since day one, even when I worked with Exim on customers' systems. It may just be the most reliable piece of software I have ever used, and you simply don't replace things that are this good.
As for IMAP, it's been Dovecot for years. It even handles my 465/587 traffic (submission(s) - clients sending mail) as a proxy to Postfix.
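The Dovecot side of that proxying fits in a few lines. A minimal sketch of the relevant settings (Dovecot 2.3+), assuming Postfix accepts relayed mail on localhost - this is not my exact config:

```conf
# dovecot.conf (sketch): serve IMAP and act as a submission proxy.
protocols = imap submission

# Hand authenticated submission traffic over to Postfix on localhost.
submission_relay_host = 127.0.0.1
submission_relay_port = 25
submission_relay_trusted = yes
```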
Kubernetes
To me, this is the most ridiculous thing on this list. I use Kubernetes only to keep up with its updates and to test stuff. It's my training ground and lab; I build and test things on it before proposing or shipping them to clients. And it's always on.
I host the "bare" Kubernetes version on a dedicated server instead of using a cloud provider's managed option, for a few reasons:
- to be able to test upgrades of Kubernetes itself, or existing workflows on new k8s versions, and to prepare for deprecations,
- to not be time-pressured by a cloud operator adding every hour of use to my bill. Working with a metered option tends to negatively impact my flow, and the dedicated box in question is already in use and has spare resources.
I host some projects for myself, both self-built and open-source self-hosted alternatives to other services. Just one of these projects is on Kubernetes, and only for monitoring purposes: liveness and readiness probes, 3rd-party monitoring, etc.
Total overkill for such a tiny project built for my own use. But it serves me well for developing DevOps-y stuff: as a target for monitoring and for automating disaster recovery actions, for scalability and high-availability tests, and so I can be a user sometimes and not lose the developer's perspective.
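The probes mentioned above are plain Kubernetes container-spec fields; a sketch with hypothetical endpoints and port:

```yaml
# Fragment of a container spec (hypothetical app): Kubernetes restarts the
# container when the liveness probe fails, and stops routing traffic to it
# when the readiness probe fails.
livenessProbe:
  httpGet:
    path: /healthz        # hypothetical endpoint
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /ready          # hypothetical endpoint
    port: 8080
  periodSeconds: 5
```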
I mis-use Kubernetes
Yep - I said it. I mis-use k8s: all its nodes reside as virtual machines on one physical server. That's bad because if this single server went offline, my whole Kubernetes cluster would go down as well. A major no-no for production stuff, or even busy dev work. But as it hosts no prod, nothing crucial - I don't mind.
Nodes from outside this single dedicated server are attached only for HA/failover tests, multi-cloud performance tests and the like.
And so, I use k8s wrong but this kind of wrong suits my needs.
Jenkins and other CI/CD tools
Currently I self-host Jenkins, but sometimes it's GitLab or ArgoCD or another CI/CD tool, if a client uses it. Or several solutions at once, when I'm tasked with rewriting pipelines from one tool to another.
As I'm not a developer, I have little use for CI/CD pipelines in my private life. But, just as with Kubernetes, I keep the CI/CD tools my clients currently use - to stay up to date, test updates, plugins and various solutions, do basic stuff, and experience them as a user instead of just as an admin.
Even though my Vim text editor is equipped with appropriate linters and other inspection tools, I also run several of them in pipelines, in the currently used CI/CD tool.
I publish this blog from Git repos via a CI/CD pipeline. I wrote about the process, but the crucial part is this: there are easier ways. One is quite similar - using a public Git service provider with its automation tools. The other - a dedicated web panel for SSG administration - offers an interface friendlier to non-technical users, much like WordPress or another Content Management System.
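Such a publishing pipeline can be sketched in Jenkins declarative syntax - the linter, SSG command and deploy target below are placeholders, not my real setup:

```groovy
// Hypothetical Jenkinsfile: lint the sources, build the static site,
// deploy the output over SSH.
pipeline {
    agent any
    stages {
        stage('Lint') {
            steps {
                sh 'markdownlint content/'   // example linter
            }
        }
        stage('Build') {
            steps {
                sh 'hugo --minify'           // placeholder SSG command
            }
        }
        stage('Deploy') {
            steps {
                sh 'rsync -a --delete public/ deploy@www.example.com:/var/www/blog/'
            }
        }
    }
}
```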
I intentionally don't use such solutions, preferring the rather convoluted CI/CD approach instead. Even out of the box, this approach appears to offer stronger authentication security - I have more trust in SSH and Git's HTTPS auth than in WordPress' authentication mechanisms.
It's all... forced and overcomplicated, but for me a single advantage outweighs the fact that I'm doing seemingly unnecessary work: the training, and staying up to date.
Could I spend this time better? Perhaps. On learning new stuff, maybe unrelated to work. Or on having fun. But come to think of it... this is fun for me :)
Git
I manage several Git servers due to technical and per-client requirements, and to separate parts of my work. And a private one, which holds culinary recipes, fermentation logs, ideas, various notes, unfinished texts, etc.
I usually use plain Git but as I mentioned - sometimes it's GitLab. And sometimes Gogs or Gitea, when - in addition to SSH-backed Git access - a simple web interface is suitable.
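Plain Git needs nothing more than SSH access and a bare repository on the server side; a sketch, with hypothetical paths:

```shell
#!/bin/sh
# Create a bare repository (no working tree) that clients push to over SSH.
new_repo() {
  # $1: path for the new bare repository, e.g. /srv/git/notes.git
  git init --bare "$1"
}
```

Clients then clone it with e.g. `git clone ssh://user@host/srv/git/notes.git` - no daemon or web interface required.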
nginx
nginx is my go-to HTTP server and proxy, rarely replaced by something else. I've used it since before it shipped with English documentation. Even then, barely educated trial-and-error config changes were less painful than dealing with some changes between Apache httpd versions. And nginx allowed game-changing configuration reloading.
There are some exceptions to my "just use nginx" rule, though. For example, the Traefik ingress controller was my choice for the aforementioned Kubernetes cluster, over the nginx ingress controller.
nginx is yet another example of software which I would be using even if my clients wouldn't. But they mostly do.
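My typical nginx use is exactly the proxy role; a minimal sketch, with hypothetical names, paths and ports:

```nginx
# Hypothetical vhost: terminate TLS, proxy everything to a local backend.
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /usr/local/etc/ssl/app.pem;
    ssl_certificate_key /usr/local/etc/ssl/app.key;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The configuration reloading praised above is `nginx -s reload` (or a SIGHUP to the master process), which re-reads files like this one without dropping live connections.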
Do I need all this?
Do I have any need for a CI/CD tool and pipelines in my private life? Not really.
A Git server? Well, I do store a lot of things, even tech-unrelated ones, in Markdown files, for which Git is perfect storage. But I could use GitLab, GitHub or another public offering, even free of charge.
Do I need Postfix and Dovecot? I want to keep them, but I don't really need them; they could be replaced with Proton Mail, Tutanota or a shared-hosting solution.
Shared hosting could also replace the need to run nginx, and some offerings include Git servers as well.
I don't need most of this tech in my private life. In fact, I don't need any of it - nothing on this list is essential for me to live. But I like having it for training, as I believe it makes me a better engineer.
At some points in my career, this approach was far from cost-effective. But it helped me move forward.