<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Sayak's Blog]]></title><description><![CDATA[Overengineering everything]]></description><link>https://sayakm.me/</link><image><url>https://sayakm.me/favicon.png</url><title>Sayak&apos;s Blog</title><link>https://sayakm.me/</link></image><generator>Ghost 5.85</generator><lastBuildDate>Sun, 19 Apr 2026 20:11:53 GMT</lastBuildDate><atom:link href="https://sayakm.me/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Getting started with developing Kubernetes on Windows]]></title><description><![CDATA[<p>Kubernetes, often referred to as K8s, is an open-source container orchestration platform that helps users deploy, scale and manage containerised workloads. It has become ubiquitous with cloud-native technologies and after over a decade of development has one of the most thriving open source communities. 
Contributions to the project are deployed</p>]]></description><link>https://sayakm.me/getting-started-with-developing-kubernetes-on-windows/</link><guid isPermaLink="false">6705487f0e35df6431a5991c</guid><category><![CDATA[Sysadmin]]></category><category><![CDATA[Programming]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Getting Started]]></category><dc:creator><![CDATA[Sayak Mukhopadhyay]]></dc:creator><pubDate>Tue, 08 Oct 2024 18:54:19 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1667372459510-55b5e2087cd0?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDZ8fGt1YmVybmV0ZXN8ZW58MHx8fHwxNzI4MzIyNDI2fDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1667372459510-55b5e2087cd0?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDZ8fGt1YmVybmV0ZXN8ZW58MHx8fHwxNzI4MzIyNDI2fDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Getting started with developing Kubernetes on Windows"><p>Kubernetes, often referred to as K8s, is an open-source container orchestration platform that helps users deploy, scale and manage containerised workloads. It has become ubiquitous with cloud-native technologies and after over a decade of development has one of the most thriving open source communities. Contributions to the project are deployed on thousands of systems around the world and the first step to contributing is setting up the local development environment.</p><p>Now, Kubernetes is a pretty large project and setting up the development environment might not be trivial. Still, the Kubernetes community has provided a <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/development.md?ref=sayakm.me">helpful resource</a> which walks through the steps to get a development environment setup. 
But like many other projects in the cloud native space, it&apos;s very much Linux centric. Windows users like me might feel a bit lost.</p><p>But it&apos;s not too complicated if you have WSL2. Developing on Windows directly is not worth it as the project relies on a lot of tooling that Windows doesn&apos;t have. Yes, one could use MinGW or an alternative, but it&apos;s not worth the hassle that might crop up later. Hence, developing on WSL2 is the best option. If you are already using WSL2, you might already have a distribution installed. Although one could set up the development environment in that distribution, my advice is to create a separate distribution for K8s development. This will ensure that the development environment is isolated and is not impacted by other uses of WSL2.</p><h2 id="setup-the-wsl2-distro">Setup the WSL2 Distro</h2><p>The first step is to create the WSL2 distribution that will contain the development environment. This step can be performed in multiple ways depending on your existing setup.</p><ol><li>If you have never used WSL2 before, see the <a href="https://learn.microsoft.com/en-us/windows/wsl/install?ref=sayakm.me">documentation</a> for instructions on how to get started.</li><li>If you have used WSL2 before, you probably have an existing distribution already installed. In that case, install a new distribution from the store.<ol><li>If you have previously installed the distribution you want, you will not be able to create another instance. In that case, you will need to download the distribution and install it manually.</li><li>If you haven&apos;t already installed the distribution, my suggestion is to install the distribution and export it. Then re-import the distribution with a new name and uninstall the distribution that was installed from the store. 
This way, this distribution can be re-installed in the future.</li></ol></li><li>Once the distribution is up and running, ensure you are logged in as a non-root user.</li></ol><p>Going forward, we will be using Ubuntu 24.04.1.</p><h2 id="setup-docker-desktop">Setup Docker Desktop</h2><p>K8s also needs Docker to be present inside the WSL2 instance. The easiest way to do this is to install Docker Desktop on Windows and enable integration with the WSL distribution. See the <a href="https://docs.docker.com/desktop/wsl/?ref=sayakm.me#enabling-docker-support-in-wsl-2-distros">documentation</a> for instructions.</p><h2 id="setup-the-wsl2-environment">Setup the WSL2 Environment</h2><p>Now it&apos;s time to log in to the WSL2 distribution and set it up.</p><ol>
<li>
<p>We will start by installing the GNU build tools.</p>
<pre><code class="language-sh">sudo apt update
sudo apt install build-essential
</code></pre>
</li>
<li>
<p>Next, we will install <code>jq</code>.</p>
<pre><code class="language-sh">sudo apt install jq
</code></pre>
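<p>A quick sanity check that <code>jq</code> works; the JSON here is just a made-up sample:</p>

```sh
# Pull a field out of an inline JSON document with jq
echo '{"component": "kubelet", "healthy": true}' | jq -r .component
# prints: kubelet
```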
</li>
<li>
<p>Since Kubernetes is written in Go, we need to install it. It&apos;s important that the correct version of Go is installed depending on the version of K8s that is going to be built. See the <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/development.md?ref=sayakm.me#go">notes in the community resource</a> on how to find that out. Install the required version of Go by following the <a href="https://go.dev/doc/install?ref=sayakm.me">instructions</a>.</p>
</li>
<li>
<p>K8s uses <code>pyyaml</code> for some verification tests. It needs to be installed with <code>pip</code>, but before that, the pre-installed Python needs to be configured.</p>
<pre><code class="language-sh">sudo apt install python3-pip
sudo apt install python3-venv

python3 -m venv .k8s-venv
source .k8s-venv/bin/activate
</code></pre>
<p>This installs <code>pip</code> and <code>venv</code> for the Python that comes pre-installed. It also creates and activates a virtual environment. This is needed because the pre-installed Python recommends installing all packages in a virtual environment instead of globally. Once that&apos;s done, we can go ahead and install <code>pyyaml</code>.</p>
<pre><code class="language-sh">pip install pyyaml
</code></pre>
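<p>To confirm that <code>pyyaml</code> is importable from the active virtual environment, a one-liner like this should work; the YAML snippet is just an example:</p>

```sh
# Parse a tiny YAML document to verify the pyyaml installation
python3 -c 'import yaml; print(yaml.safe_load("replicas: 3")["replicas"])'
# prints: 3
```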
</li>
<li>
<p>Now it&apos;s time to clone the Kubernetes Git repository. Make sure to fork the repository before cloning it.</p>
<pre><code class="language-sh">git clone git@github.com:&lt;username&gt;/kubernetes.git
cd kubernetes
</code></pre>
<p>Make sure to replace <code>&lt;username&gt;</code> with the GitHub username where the fork is located.</p>
</li>
<li>
<p>Finally, we install <code>etcd</code>.</p>
<pre><code class="language-sh">./hack/install-etcd.sh
</code></pre>
<p>This script will instruct you to make a change to your <code>PATH</code>. To make<br>
this permanent, add this to your <code>.bashrc</code> or login script:</p>
<pre><code class="language-sh">export PATH=&quot;$PATH:/home/&lt;username&gt;/kubernetes/third_party/etcd&quot;
</code></pre>
<p>Again replace <code>&lt;username&gt;</code> with your current WSL username.</p>
</li>
</ol>
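<p>Before running the build, it can be worth confirming that everything installed above actually resolves from the shell. A small sketch; the tool list simply mirrors the steps in this post:</p>

```sh
# Report any prerequisite that the shell cannot resolve
for tool in gcc make jq go python3 etcd; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
```

<p>If nothing is printed, all the tools are on the <code>PATH</code>.</p>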
<p>And that&apos;s it! You should have your development environment all setup. Try running the command <code>make</code> from the terminal and it should build the project.</p>]]></content:encoded></item><item><title><![CDATA[Exporting landscapes from Gaea to Unreal Engine 5]]></title><description><![CDATA[<p>Recently, while working on a game development project, I needed to create a landscape. I found the landscape tools in Unreal Engine 5 to be unsatisfactory for what I wanted to create. That is when I came across <a href="https://quadspinner.com/?ref=sayakm.me" rel="noreferrer">Gaea</a>, a node based terrain design tool. It had a free plan</p>]]></description><link>https://sayakm.me/gaea-exporting-landscapes-unreal/</link><guid isPermaLink="false">65d1b8ca977ef504896cc601</guid><category><![CDATA[GameDev]]></category><category><![CDATA[Unreal Engine]]></category><category><![CDATA[Gaea]]></category><dc:creator><![CDATA[Sayak Mukhopadhyay]]></dc:creator><pubDate>Sun, 18 Feb 2024 11:47:09 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1508592931388-95bc7b61033d?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDEyfHxub3J3YXl8ZW58MHx8fHwxNzA4MjU2NzI0fDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1508592931388-95bc7b61033d?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDEyfHxub3J3YXl8ZW58MHx8fHwxNzA4MjU2NzI0fDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Exporting landscapes from Gaea to Unreal Engine 5"><p>Recently, while working on a game development project, I needed to create a landscape. I found the landscape tools in Unreal Engine 5 to be unsatisfactory for what I wanted to create. That is when I came across <a href="https://quadspinner.com/?ref=sayakm.me" rel="noreferrer">Gaea</a>, a node based terrain design tool. 
It had a free plan that was enough for me so I went ahead and downloaded it. I generated the landscape I was happy with, thanks to the help of the Gaea community, and then came across probably one of the most common questions people have with Gaea and Unreal Engine 5: how do you export the terrain from Gaea into Unreal Engine 5 with the correct scale? I believe I have found a nice workflow so here&apos;s a short post detailing it.</p><h2 id="version-of-software-used">Version of software used</h2><ol><li>Gaea Version 1.3.2 Community</li><li>Unreal Engine 5.3.2</li></ol><h2 id="basic-points-to-keep-in-mind">Basic points to keep in mind</h2><p>Before we start, there are a <a href="https://docs.unrealengine.com/5.3/en-US/landscape-technical-guide-in-unreal-engine/?ref=sayakm.me" rel="noreferrer">few things</a> to keep in mind about Unreal Engine.</p><ol><li>The heightmap&apos;s height is calculated using values between -256 and 255.992, stored with 16-bit precision. So, roughly, there is a range of 512 in the Z-axis.</li><li>The default scale is 100. This is because Unreal Engine&apos;s default units are centimetres, while heightmap values are expected to be in metres. For example, a value of 0 in the heightmap is -256cm when the scale is 1. Most heightmaps are made in metres and so, for things to make sense, the default is set to 100.</li><li>Unreal Engine has some pre-defined landscape sizes/resolutions that are not exactly 1024 or 2048. The list of recommended landscape sizes can be found <a href="https://docs.unrealengine.com/5.3/en-US/landscape-technical-guide-in-unreal-engine/?ref=sayakm.me" rel="noreferrer">here</a>. For our purpose in the rest of the blog, we will use 1009 x 1009.</li></ol><h2 id="basic-configuration-in-gaea">Basic configuration in Gaea</h2><p>Before you start working on a new Gaea project, set up the &quot;Terrain Definition&quot; parameters. These can be modified in an existing project too but that will change the terrain. 
The values should be as follows:</p><p><strong>Scale</strong>: Use the landscape size that is selected. Since we selected 1009 x 1009 as the landscape size, we will use that as the scale.<br><strong>Height</strong>: For our height, we will use 512.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://sayakm.me/content/images/2024/02/image.png" class="kg-image" alt="Exporting landscapes from Gaea to Unreal Engine 5" loading="lazy" width="351" height="152"><figcaption><span style="white-space: pre-wrap;">Terrain Definition setup</span></figcaption></figure><h2 id="build-configuration-in-gaea">Build configuration in Gaea</h2><p>Now, once we have built our terrain, it&apos;s time to build and export the heightmap. We will use the following configuration:</p><p><strong>File format</strong>: .png<br><strong>Resolution</strong>: 1009<br><strong>Range</strong>: Raw</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://sayakm.me/content/images/2024/02/image-1.png" class="kg-image" alt="Exporting landscapes from Gaea to Unreal Engine 5" loading="lazy" width="351" height="959"><figcaption><span style="white-space: pre-wrap;">Build settings</span></figcaption></figure><p>Now we can build our heightmap. The built heightmap will be written to our file system. 
We will import the <code>.png</code> file in Unreal Engine 5.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://sayakm.me/content/images/2024/02/image-2.png" class="kg-image" alt="Exporting landscapes from Gaea to Unreal Engine 5" loading="lazy" width="767" height="134" srcset="https://sayakm.me/content/images/size/w600/2024/02/image-2.png 600w, https://sayakm.me/content/images/2024/02/image-2.png 767w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">File system containing the heightmap</span></figcaption></figure><h2 id="landscape-import-configuration-in-unreal-engine-5">Landscape import configuration in Unreal Engine 5</h2><p>Finally, time to import the heightmap into Unreal Engine 5 and generate the landscape. Assuming we already have a level created, we will need to go to the Landscape mode. In the &quot;Manage&quot; mode, we can create a New landscape and select the &quot;Import from File&quot; mode.</p><p>We will use the following options in the fields:</p><p><strong>Heightmap File</strong>: The built <code>.png</code> file.<br><strong>Location</strong>: 0.0, 0.0, 25600.0<br><strong>Scale</strong>: 100.0, 100.0, 100.0<br><strong>Section Size</strong>: 63x63 Quads<br><strong>Sections Per Component</strong>: 1x1 Section<br><strong>Number of Components</strong>: 16, 16<br><strong>Overall Resolutions</strong>: 1009, 1009</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://sayakm.me/content/images/2024/02/image-4.png" class="kg-image" alt="Exporting landscapes from Gaea to Unreal Engine 5" loading="lazy" width="384" height="729"><figcaption><span style="white-space: pre-wrap;">Import options in Unreal Engine 5</span></figcaption></figure><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">&#x1F4A1;</div><div class="kg-callout-text">Alternatively, one can also use <code spellcheck="false" style="white-space: pre-wrap;">2x2 Sections</code> in 
the &quot;Sections Per Component&quot; options while making the &quot;Number of Components&quot; as <code spellcheck="false" style="white-space: pre-wrap;">8, 8</code> to keep the &quot;Overall Resolution&quot; at <code spellcheck="false" style="white-space: pre-wrap;">1009, 1009</code>. This can reduce the number of &quot;Total Components&quot; and depends on the use case.</div></div><p>Click on &quot;Import&quot; to start the import. We should see the terrain imported with the correct scale and the bottom-most point of the terrain right on the XY plane.</p><p>And with that, we have what we wanted.</p><h2 id="i-want-a-hiiiigh-landscape">I want a hiiiigh landscape!</h2><p>So, in the above steps, I have mentioned keeping the &quot;Height&quot; in Gaea&apos;s &quot;Terrain Definition&quot; at 512. That means that the highest you can build is 512m. But what if you want to build higher? That should not be a problem. One can technically use whatever height Gaea supports; I have used 512m to keep things simple. If you want to use another height, just multiply the height by 100, divide that by 512, and use the result as the Z-axis scale. For example, if we set the height in Gaea to 1000, we should set the scale to 195.3125. This calculation comes from the fact that 512m in Gaea corresponds to a scale value of 100 in Unreal Engine. Moreover, the Z-axis value of the location needs to be updated in a similar ratio. To get the value, multiply the Z-axis scale by 256. 
In this case, we get 50000.0 so that&apos;s what we put to locate the bottom of the landscape at z=0.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://sayakm.me/content/images/2024/02/image-6.png" class="kg-image" alt="Exporting landscapes from Gaea to Unreal Engine 5" loading="lazy" width="748" height="261" srcset="https://sayakm.me/content/images/size/w600/2024/02/image-6.png 600w, https://sayakm.me/content/images/2024/02/image-6.png 748w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Height in Gaea to Scale in Unreal Engine</span></figcaption></figure><h2 id="using-a-sea-level">Using a sea level</h2><p>If the landscape in Gaea uses the Sea node to simulate a sea level, then one can import the landscape into Unreal Engine keeping the sea level at z=0. After getting the sea level percentage from Gaea, convert it to a ratio, subtract it from 1, and multiply the result by the Z-axis location value of the bottom of the landscape. For example, in the previous case, if the sea level was 10%, then the location on the Z-axis would have been 50000 multiplied by 0.9 to get 45000.0. This would ensure that the sea level is at z=0.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://sayakm.me/content/images/2024/02/image-7.png" class="kg-image" alt="Exporting landscapes from Gaea to Unreal Engine 5" loading="lazy" width="1151" height="254" srcset="https://sayakm.me/content/images/size/w600/2024/02/image-7.png 600w, https://sayakm.me/content/images/size/w1000/2024/02/image-7.png 1000w, https://sayakm.me/content/images/2024/02/image-7.png 1151w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">How a sea node&apos;s value relates to Unreal Engine&apos;s location co-ordinates</span></figcaption></figure><p>That should be all. I hope it helps someone looking for a way to get the two working together nicely. 
Let me know if something can be improved.</p>]]></content:encoded></item><item><title><![CDATA[Upgrading major versions of a multi instance Ghost setup]]></title><description><![CDATA[<p>I run a couple of blogs using Ghost and running on a Digital Ocean droplet and today I had to go through the arduous task of upgrading it. I am not exaggerating when I say arduous as it was anything but trivial. Initially, I started by upgrading the Ghost instances</p>]]></description><link>https://sayakm.me/upgrading-a-multi-instance-ghost-setup/</link><guid isPermaLink="false">65d0bc8b977ef504896cc3b4</guid><category><![CDATA[Sysadmin]]></category><category><![CDATA[Ghost]]></category><dc:creator><![CDATA[Sayak Mukhopadhyay]]></dc:creator><pubDate>Sat, 17 Feb 2024 18:48:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1583680599407-f73ab374fff4?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDE0fHxnaG9zdHxlbnwwfHx8fDE3MDgxMzY1Mzl8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1583680599407-f73ab374fff4?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDE0fHxnaG9zdHxlbnwwfHx8fDE3MDgxMzY1Mzl8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Upgrading major versions of a multi instance Ghost setup"><p>I run a couple of blogs using Ghost and running on a Digital Ocean droplet and today I had to go through the arduous task of upgrading it. I am not exaggerating when I say arduous as it was anything but trivial. Initially, I started by upgrading the Ghost instances in place but soon I realised that it was going to be more of a pain than I wanted. You see, I had set up my blog more than 5 years back using the formerly 1-click installation of Ghost (now called marketplace image installation) and in my infinite wisdom neglected to upgrade the installation. 
And thus I found that it was still running on Ubuntu 18.04 and Node 14. So, I decided that instead of spending hours trying to upgrade from the latest v4 to v5 of Ghost and potentially ruining the installation and crying in a corner in misery, I could just spin up a droplet, install Ghost afresh, and restore a backup. A genius idea I must say!</p><p>But first some system information:</p><p>Source system:</p><ol><li>Ubuntu 18.04.6 with 1 vCPU and 1GB RAM</li><li>Ghost CLI 1.25.3 and Ghost 4.48.9</li><li>Themed with Casper 4.7.4</li></ol><p>Destination system:</p><ol><li>Ubuntu 22.04.3 with 1 vCPU and 2GB RAM</li><li>Ghost CLI 1.25.3 and Ghost 5.79.3</li><li>Themed with Casper 5.7.0</li></ol><div class="kg-card kg-callout-card kg-callout-card-grey"><div class="kg-callout-emoji">&#x2754;</div><div class="kg-callout-text">I have noticed that I could not start a Ghost instance with 1GB RAM. My experience tells me that I should be able to run 2 blogs with 1GB RAM but I think that initialisation is a resource intensive process and needs a burst of memory. So, I created a droplet of 1GB RAM and scaled it to 2GB without scaling storage, so that once I completed the migration, I can scale it back down to 1GB.</div></div><p>I expected things to be an easy affair but soon I was proved wrong when I realised that the Digital Ocean marketplace image itself doesn&apos;t support multi instance Ghost installations.</p><p>After spending over 24 hours researching and bringing up and destroying droplets over and over again to get the results I wanted, I finally found out the perfect steps to create a multi instance setup when starting from a marketplace image and how to migrate an existing setup to the new one. 
Hopefully, this will save some time for someone.</p><div class="kg-card kg-callout-card kg-callout-card-grey"><div class="kg-callout-emoji">&#x2757;</div><div class="kg-callout-text">For consistency, I will use the term &quot;source blog&quot; for the blog that content is being migrated from and &quot;destination blog&quot; for the blog that content is being migrated to.</div></div><h2 id="backup-source-blogs">Backup Source Blogs</h2><p>Perform the following steps in the source droplet.</p><ol>
<li>Switch to the <code>ghost-mgr</code> user.<pre><code class="language-sh">sudo -i -u ghost-mgr
</code></pre>
</li>
<li>Go to each directory where Ghost is installed. For our purposes, we will assume <code>/var/www/sayak</code>. Change into each directory with the following command.<pre><code class="language-sh">cd /var/www/sayak
</code></pre>
</li>
<li>Run the backup command using the Ghost CLI in each directory.<pre><code class="language-sh">ghost backup
</code></pre>
This will generate a <code>.zip</code> file for each backed up directory.</li>
<li>Download the backups to a local system. We will need them shortly.</li>
<li>Turn off the droplet from the Digital Ocean control panel or run the following in the terminal to shut down the droplet<pre><code class="language-sh">poweroff
</code></pre>
</li>
<li>Take a snapshot of the droplet from Digital Ocean. Taking a snapshot means that in case something horrible happens, we can create a fresh droplet from the snapshot.</li>
<li>Turn on the droplet from the Digital Ocean control panel once the snapshot is done.</li>
</ol>
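<p>To download the backups (step 4), <code>scp</code> from your local machine is one option; the IP, path and archive name pattern below are examples, so adjust them to your setup:</p>

```sh
# Run on your local machine, not on the droplet
scp "ghost-mgr@<droplet-ip>:/var/www/sayak/backup*.zip" ./ghost-backups/
```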
<div class="kg-card kg-callout-card kg-callout-card-grey"><div class="kg-callout-emoji">&#x2757;</div><div class="kg-callout-text">Do note that snapshots have their costs.</div></div><h2 id="change-the-url-of-the-source-blogs">Change the URL of the source blogs</h2><p>Once we start setting up the destination droplet, we will need to provide the URLs of the blogs. This means changing the DNS of our domains to the destination droplet&apos;s IP address, which in turn means we will no longer be able to access the source blogs.</p><p>However, it is helpful to have access to the source blogs even after the destination blogs are up and running. One can use both of them to compare differences and make small changes in the destination blogs as necessary. This is especially useful when the themes have breaking changes.</p><ol>
<li>Create a new DNS entry for the source blogs. In our case, we will make it <code>old.sayakm.me</code>.</li>
<li>SSH into the source droplet and switch to the <code>ghost-mgr</code> user.<pre><code class="language-sh">sudo -i -u ghost-mgr
</code></pre>
</li>
<li>Go to each directory where Ghost is installed. For our purposes, we will assume <code>/var/www/sayak</code>. Change into each directory with the following command.<pre><code class="language-sh">cd /var/www/sayak
</code></pre>
</li>
<li>Run the ghost config command in each directory to update the URL in the configuration.<pre><code class="language-sh">ghost config url https://old.sayakm.me
</code></pre>
This will generate an output confirming the success.<pre><code>Successfully set &apos;url&apos; to &apos;https://old.sayakm.me&apos;
</code></pre>
</li>
<li>Next, run the setup command to update the Nginx and SSL configuration in each directory<pre><code class="language-sh">ghost setup nginx
</code></pre>
This will give output like the following<pre><code>&#x2714; Setting up Nginx
</code></pre>
And<pre><code class="language-sh">ghost setup ssl
</code></pre>
This will ask for your email, which will be used only to generate the SSL certificate. Provide it, and you will see this output<pre><code>? Enter your email (For SSL Certificate) youremail@gmail.com
&#x2714; Setting up SSL
</code></pre>
</li>
<li>Finally, restart each Ghost installation using the following command<pre><code class="language-sh">ghost restart
</code></pre>
On success, we will get<pre><code>&#x2714; Restarting Ghost
</code></pre>
</li>
</ol>
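<p>At this point, the source blogs should respond on the new URL. A quick check from any machine; the domain is the one configured above:</p>

```sh
# Expect an HTTP status line such as "HTTP/2 200"
curl -sI https://old.sayakm.me | head -n 1
```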
<h2 id="setup-the-destination-blog">Setup the destination blog</h2><p>Now, it&apos;s time to set up the destination droplet and blogs.</p><ol>
<li>Create the destination droplet in Digital Ocean with the Ghost Marketplace image.</li>
<li>Once the droplet gets created, point the DNS records of the existing blog domains to the new droplet&apos;s IP address.</li>
<li>Now access the droplet using SSH. The Ghost CLI will start initialising the droplet with the default Ghost installation. We don&apos;t want this installation but we do want some of the bootstrapping it does. So, we will provide our options carefully.</li>
<li>The configuration will pause to ask for the blog URL and SSL certificate email. Provide a generic URL that you don&apos;t control, as we don&apos;t need an SSL certificate to be generated. Here&apos;s what the output will look like. Of course, the SSL step will fail; that&apos;s expected and fine.<pre><code class="language-sh">&#x2714; Checking system Node.js version - found v18.17.1
&#x2714; Checking current folder permissions
&#x2714; Checking memory availability
&#x2714; Checking free space
&#x2714; Checking for latest Ghost version
&#x2714; Setting up install directory
&#x2714; Downloading and installing Ghost v5.79.3
&#x2714; Finishing install process
? Enter your blog URL: https://example.com
&#x2714; Configuring Ghost
&#x2714; Setting up instance
+ sudo useradd --system --user-group ghost
+ sudo chown -R ghost:ghost /var/www/ghost/content
&#x2714; Setting up &quot;ghost&quot; system user
&#x2714; Setting up &quot;ghost&quot; mysql user
+ sudo mv /tmp/example-com/example.com.conf /etc/nginx/sites-available/example.com.conf
+ sudo ln -sf /etc/nginx/sites-available/example.com.conf /etc/nginx/sites-enabled/example.com.conf
+ sudo nginx -s reload
&#x2714; Setting up Nginx
? Enter your email (For SSL Certificate) youremail@gmail.com
+ sudo mkdir -p /etc/letsencrypt
+ sudo ./acme.sh --install --home /etc/letsencrypt
+ sudo /etc/letsencrypt/acme.sh --issue --home /etc/letsencrypt --server letsencrypt --domain example.com --webroot /var/www/ghost/system/nginx-root --reloadcmd &quot;nginx -s reload&quot; --accountemail youremail@gmail.com --keylength 2048
&#x2716; Setting up SSL
+ sudo mv /tmp/example-com/ghost_example-com.service /lib/systemd/system/ghost_example-com.service
+ sudo systemctl daemon-reload
&#x2714; Setting up Systemd
+ sudo systemctl is-active ghost_example-com
+ sudo systemctl start ghost_example-com
+ sudo systemctl is-enabled ghost_example-com
+ sudo systemctl enable ghost_example-com --quiet
&#x2714; Starting Ghost
One or more errors occurred.
</code></pre>
</li>
<li>A directory <code>/var/www/ghost</code> will be created. Get into the directory using<pre><code class="language-sh">cd /var/www/ghost
</code></pre>
</li>
<li>We get the configuration details of this installation using<pre><code class="language-sh">cat config.production.json
</code></pre>
The output will look like the following<pre><code class="language-json">{
  &quot;url&quot;: &quot;https://example.com&quot;,
  &quot;server&quot;: {
    &quot;port&quot;: 2368,
    &quot;host&quot;: &quot;127.0.0.1&quot;
  },
  &quot;database&quot;: {
    &quot;client&quot;: &quot;mysql&quot;,
    &quot;connection&quot;: {
      &quot;host&quot;: &quot;127.0.0.1&quot;,
      &quot;user&quot;: &quot;ghost-69&quot;,
      &quot;password&quot;: &quot;thepassword&quot;,
      &quot;port&quot;: 3306,
      &quot;database&quot;: &quot;ghost_production&quot;
    }
  },
  &quot;mail&quot;: {
    &quot;transport&quot;: &quot;Direct&quot;
  },
  &quot;logging&quot;: {
    &quot;transports&quot;: [
      &quot;file&quot;,
      &quot;stdout&quot;
    ]
  },
  &quot;process&quot;: &quot;systemd&quot;,
  &quot;paths&quot;: {
    &quot;contentPath&quot;: &quot;/var/www/ghost/content&quot;
  }
}
</code></pre>
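<p>If <code>jq</code> happens to be installed, the relevant values can be pulled out directly instead of reading the JSON by eye:</p>

```sh
# Print just the database connection details from the Ghost config
jq '.database.connection | {host, user, password}' config.production.json
```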
Note the <code>database.connection.host</code>, <code>database.connection.user</code> and <code>database.connection.password</code> properties.</li>
<li>Switch to the <code>ghost-mgr</code> user.<pre><code class="language-sh">sudo -i -u ghost-mgr
</code></pre>
</li>
<li>Go to the directory where Ghost is installed using the following command.<pre><code class="language-sh">cd /var/www/ghost
</code></pre>
</li>
<li>Next, uninstall this Ghost instance using<pre><code class="language-sh">ghost uninstall
</code></pre>
This will result in the following output<pre><code>? Are you sure you want to do this? Yes
+ sudo systemctl is-active ghost_example-com
+ sudo systemctl stop ghost_example-com
+ sudo systemctl is-enabled ghost_example-com
+ sudo systemctl disable ghost_example-com --quiet
&#x2714; Stopping Ghost
+ sudo rm -rf /var/www/ghost/content
&#x2714; Removing content folder
+ sudo rm -f /etc/nginx/sites-available/example.com.conf
+ sudo rm -f /etc/nginx/sites-enabled/example.com.conf
+ sudo nginx -s reload
+ sudo rm /lib/systemd/system/ghost_example-com.service
&#x2714; Removing related configuration
&#x2714; Removing Ghost installation
</code></pre>
This will also mean that all contents of the <code>/var/www/ghost</code> directory will be removed.</li>
</ol>
<p>Next, we will need to create directories for each of our blogs. The following steps need to be repeated for each blog instance; I will show only the first one.</p><ol>
<li>
<p>Ensure you are logged in as the <code>root</code> user.</p>
</li>
<li>
<p>Create a directory for the Ghost instance that&apos;s going to be created.</p>
<pre><code class="language-sh">cd /var/www
mkdir sayak
</code></pre>
</li>
<li>
<p>Modify the permissions of this directory to ensure that the <code>ghost-mgr</code> user can access it.</p>
<pre><code class="language-sh">chown ghost-mgr:ghost-mgr /var/www/sayak
chmod 775 /var/www/sayak
</code></pre>
</li>
<li>
<p>Before we install Ghost, we will need to ensure that the database it needs is already present. The initial Ghost setup creates a database called <code>ghost_production</code> for the pre-created instance. For new instances, the database is not created automatically, but we will use the existing one as a template for ours. We will need to log in to MySQL as the <code>root</code> user, and for that, we need its password.</p>
<p>Thankfully, it&apos;s already present in the file <code>/root/.digitalocean_password</code>. Once we get the password from the file, we will log in to MySQL as the <code>root</code> user.</p>
<pre><code class="language-sh">cat /root/.digitalocean_password
</code></pre>
<p>This will display the password. Copy it to the clipboard.</p>
<pre><code class="language-sh">mysql -p
</code></pre>
<p>This will prompt for the password, so paste it from the clipboard.<br>
This should get you logged into MySQL&apos;s console.</p>
</li>
<li>
<p>First, we will check what databases are present</p>
<pre><code class="language-sql">SHOW DATABASES;
</code></pre>
<p>We should get an output like</p>
<pre><code>+--------------------+
| Database           |
+--------------------+
| ghost_production   |
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
</code></pre>
</li>
<li>
<p>We will also check what users are present and what kind of grants are provided to the <code>ghost-69</code> user that we found from the <code>database.connection.user</code> property of the <code>config.production.json</code> file.</p>
<pre><code class="language-sql">SELECT user FROM mysql.user;
</code></pre>
<p>We should get an output like</p>
<pre><code>+------------------+
| user             |
+------------------+
| ghost-69         |
| root             |
| debian-sys-maint |
| mysql.infoschema |
| mysql.session    |
| mysql.sys        |
| root             |
+------------------+
</code></pre>
<p>Run the following command to check the grants provided to the user <code>ghost-69</code> using the host found in the <code>database.connection.host</code> property of the <code>config.production.json</code> file.</p>
<pre><code class="language-sql">SHOW GRANTS FOR &apos;ghost-69&apos;@&apos;127.0.0.1&apos;;
</code></pre>
<p>which will result in an output of</p>
<pre><code>+------------------------------------------------------------------------+
| Grants for ghost-69@127.0.0.1                                          |
+------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO `ghost-69`@`127.0.0.1`                           |
| GRANT ALL PRIVILEGES ON `ghost_production`.* TO `ghost-69`@`127.0.0.1` |
+------------------------------------------------------------------------+
</code></pre>
</li>
<li>
<p>Since the directory we created in <code>/var/www</code> was <code>sayak</code>, we will create a database named <code>sayak_prod</code> as that is the default naming convention of Ghost. We will also grant privileges to the user <code>ghost-69</code>.</p>
<pre><code class="language-sql">CREATE DATABASE sayak_prod;
GRANT ALL PRIVILEGES ON sayak_prod.* TO &apos;ghost-69&apos;@&apos;127.0.0.1&apos;;
</code></pre>
<p>We will verify by running</p>
<pre><code class="language-sql">SHOW GRANTS FOR &apos;ghost-69&apos;@&apos;127.0.0.1&apos;;
</code></pre>
<p>which will result in an output of</p>
<pre><code>+------------------------------------------------------------------------+
| Grants for ghost-69@127.0.0.1                                          |
+------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO `ghost-69`@`127.0.0.1`                           |
| GRANT ALL PRIVILEGES ON `ghost_production`.* TO `ghost-69`@`127.0.0.1` |
| GRANT ALL PRIVILEGES ON `sayak_prod`.* TO `ghost-69`@`127.0.0.1`       |
+------------------------------------------------------------------------+
</code></pre>
</li>
<li>
<p>Finally, we can exit MySQL</p>
<pre><code class="language-sql">exit
</code></pre>
</li>
</ol>
<p>With the database ready, we can get around to installing Ghost.</p><ol>
<li>Switch to the <code>ghost-mgr</code> user.<pre><code class="language-sh">sudo -i -u ghost-mgr
</code></pre>
</li>
<li>Go to the directory where Ghost is to be installed.<pre><code class="language-sh">cd /var/www/sayak
</code></pre>
</li>
<li>Finally, install Ghost<pre><code class="language-sh">ghost install
</code></pre>
There will be prompts for multiple questions<pre><code class="language-sh">&#x2714; Checking system Node.js version - found v18.17.1
&#x2714; Checking current folder permissions
&#x2714; Checking memory availability
&#x2714; Checking free space
&#x2714; Checking for latest Ghost version
&#x2714; Setting up install directory
&#x2714; Downloading and installing Ghost v5.79.3
&#x2714; Finishing install process
? Enter your blog URL: https://sayakm.me
? Enter your MySQL hostname: 127.0.0.1
? Enter your MySQL username: ghost-69
? Enter your MySQL password: [hidden]
? Enter your Ghost database name: sayak_prod
&#x2714; Configuring Ghost
&#x2714; Setting up instance
+ sudo chown -R ghost:ghost /var/www/sayak/content
&#x2714; Setting up &quot;ghost&quot; system user
&#x2139; Setting up &quot;ghost&quot; mysql user [skipped]
? Do you wish to set up Nginx? Yes
+ sudo mv /tmp/sayakm-me/sayakm.me.conf /etc/nginx/sites-available/sayakm.me.conf
+ sudo ln -sf /etc/nginx/sites-available/sayakm.me.conf /etc/nginx/sites-enabled/sayakm.me.conf
+ sudo nginx -s reload
&#x2714; Setting up Nginx
? Do you wish to set up SSL? Yes
? Enter your email (For SSL Certificate) mukhopadhyaysayak@gmail.com
+ sudo /etc/letsencrypt/acme.sh --upgrade --home /etc/letsencrypt
+ sudo /etc/letsencrypt/acme.sh --issue --home /etc/letsencrypt --server letsencrypt --domain sayakm.me --webroot /var/www/sayak/system/nginx-root --reloadcmd &quot;nginx -s reload&quot; --accountemail mukhopadhyaysayak@gmail.com --keylength 2048
+ sudo mv /tmp/sayakm-me/sayakm.me-ssl.conf /etc/nginx/sites-available/sayakm.me-ssl.conf
+ sudo ln -sf /etc/nginx/sites-available/sayakm.me-ssl.conf /etc/nginx/sites-enabled/sayakm.me-ssl.conf
+ sudo nginx -s reload
&#x2714; Setting up SSL
? Do you wish to set up Systemd? Yes
+ sudo mv /tmp/sayakm-me/ghost_sayakm-me.service /lib/systemd/system/ghost_sayakm-me.service
+ sudo systemctl daemon-reload
&#x2714; Setting up Systemd
+ sudo systemctl is-active ghost_sayakm-me
? Do you want to start Ghost? Yes
+ sudo systemctl start ghost_sayakm-me
+ sudo systemctl is-enabled ghost_sayakm-me
+ sudo systemctl enable ghost_sayakm-me --quiet
&#x2714; Starting Ghost

Ghost was installed successfully! To complete setup of your publication, visit:

    https://sayakm.me/ghost/
</code></pre>
</li>
<li>Finally, verify that the blog is running<pre><code class="language-sh">ghost ls
</code></pre>
which should give an output of<pre><code>&#x250C;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x252C;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x252C;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x252C;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x252C;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x252C;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x252C;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2510;
&#x2502; Name           &#x2502; Location           &#x2502; Version &#x2502; Status               &#x2502; URL                    &#x2502; Port &#x2502; Process Manager &#x2502;
&#x251C;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x253C;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x253C;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x253C;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x253C;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x253C;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x253C;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2524;
&#x2502; sayakm-me      &#x2502; /var/www/sayak     &#x2502; 5.79.3  &#x2502; running (production) &#x2502; https://sayakm.me      &#x2502; 2369 &#x2502; systemd         &#x2502;
&#x2514;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2534;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2534;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2534;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2534;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2534;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2534;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2500;&#x2518;
</code></pre>
</li>
</ol>
<p>And with that, the destination Ghost blog is up and running. Now to follow the above steps for each blog you need to migrate!</p><h2 id="importing-from-backup">Importing from backup</h2><p>Now wait! We have the destination blog running but we are yet to transfer the data. That&apos;s the simpler part and is already documented by the folks at Ghost, but for the sake of completeness, I will reiterate the steps.</p><ol>
<li>Copy the backups present in the local system to a location in the destination droplet.</li>
<li>Ensure you are in the Ghost installation directory, e.g. <code>/var/www/sayak</code>. If not, head to it using<pre><code class="language-sh">cd /var/www/sayak
</code></pre>
</li>
<li>Unzip the backups to the content directory and give the <code>ghost</code> user ownership of all the contents within it.<pre><code class="language-sh">sudo unzip /location-of-backup-zip-file.zip -d content
sudo chown -R ghost:ghost content
</code></pre>
</li>
<li>Switch to the <code>ghost-mgr</code> user<pre><code class="language-sh">sudo -i -u ghost-mgr
</code></pre>
</li>
<li>Go to the directory where the Ghost instance is installed.<pre><code class="language-sh">cd /var/www/sayak
</code></pre>
</li>
<li>Restart the Ghost instance to load all the changes<pre><code class="language-sh">ghost restart
</code></pre>
</li>
<li>Now, we will need to restore the data to the database. We can find a <code>json</code> file in <code>/var/www/sayak/content/data</code>; this will be the source of the restore. To restore, run<pre><code class="language-sh">ghost import content/data/the-json-file.json
</code></pre>
This will ask for an admin password. Provide the password that was used for the admin account in the source blog.</li>
<li>Restart the Ghost instance again to ensure database changes are loaded.<pre><code class="language-sh">ghost restart
</code></pre>
</li>
<li>Open the Ghost web admin portal and you should see all your content and settings.</li>
</ol>
<h2 id="updating-profile-images">Updating profile images</h2><p>This step should not be needed, but it looks like, for some reason, the admin profile image and the cover image aren&apos;t getting restored. The trivial way to fix it is to upload the images again in the web admin portal, but I am nitpicky and don&apos;t want to re-upload images already present on the server. And yes, I checked: the files are there but the database entries are empty. So, we are going to fix that!</p><p>For this, we need a few pieces of information.</p><ol><li>The MySQL ghost username. We have been using <code>ghost-69</code> so that&apos;s what it will be.</li><li>The password for the above username. We have used <code>thepassword</code> as an example.</li><li>The MySQL hostname. It&apos;s <code>127.0.0.1</code> as noted earlier.</li><li>In the <code>json</code> file present in <code>/var/www/sayak/content/data</code>, we need to get the values of <code>profile_image</code> and <code>cover_image</code> from the <code>db.data.users</code> key. The values should start with <code>https://sayakm.me/content/images</code>. Either or both can even be <code>null</code>. If both are <code>null</code> then the following steps don&apos;t need to be followed and you are done!</li></ol><p>Once we have them, we can go ahead with updating the database.</p><ol>
<li>Login to MySQL as the <code>ghost-69</code> user.<pre><code class="language-sh">mysql -u &apos;ghost-69&apos; -h &apos;127.0.0.1&apos; -p
</code></pre>
When prompted, provide the password.</li>
<li>Switch to the <code>sayak_prod</code> database.<pre><code class="language-sql">USE sayak_prod;
</code></pre>
</li>
<li>We will check the <code>users</code> table to verify the problem<pre><code class="language-sql">SELECT id, name, profile_image, cover_image FROM users;
</code></pre>
This should give an output like<pre><code>+----+-------+---------------+-------------+
| id | name  | profile_image | cover_image |
+----+-------+---------------+-------------+
| 1  | Sayak | NULL          | NULL        |
+----+-------+---------------+-------------+
</code></pre>
Those 2 fields should not be <code>NULL</code> if you had profile and cover images earlier.</li>
<li>To fix it, we will run the following command<pre><code class="language-sql">UPDATE users SET profile_image = &apos;value-from-the-profile_image-key&apos;, cover_image = &apos;value-from-the-cover_image-key&apos; WHERE id=1;
</code></pre>
If one of <code>profile_image</code> or <code>cover_image</code> in the <code>JSON</code> file is <code>null</code>, don&apos;t include that in the above SQL. For example, if <code>profile_image</code> is not <code>null</code> but <code>cover_image</code> is <code>null</code> in the <code>JSON</code> file, the SQL command should instead be<pre><code class="language-sql">UPDATE users SET profile_image = &apos;value-from-the-profile_image-key&apos; WHERE id=1;
</code></pre>
</li>
</ol>
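<p>As an aside, rather than scrolling through the export by hand, the two image values can be pulled out with a one-liner. This is a sketch: it assumes the standard Ghost export layout, with users under <code>db[0].data.users</code>, and uses the placeholder file name from earlier:</p>

```shell
# print profile_image and cover_image for the first user in the export
python3 -c '
import json, sys
user = json.load(open(sys.argv[1]))["db"][0]["data"]["users"][0]
print("profile_image:", user.get("profile_image"))
print("cover_image:", user.get("cover_image"))
' content/data/the-json-file.json
```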
<p>And with that, we are finally done! Your blog should be ready to use with everything as it was before, with a few caveats and oddities.</p><h2 id="caveats-and-oddities">Caveats and oddities</h2><ol><li>Your theme might reset to whatever Ghost considers default. Just changing back to the earlier theme should be enough to fix that.</li><li>The admin panel might go back to light mode if you were a dark mode user.</li><li>If the theme has undergone a major change, prepare for things to break, especially if you have injected scripts and styles.</li></ol><p>And well, that&apos;s all folks! I hope this saves someone some time in the future. Remember, this is just my experience; Ghost moves fast, and things mentioned here might not work in the future. Still, I would be happy to improve this, so just tweet me, or was it X me&#x1F914;.</p>]]></content:encoded></item><item><title><![CDATA[Trying my hand at game dev]]></title><description><![CDATA[<p>The last few years have been draining. The constant lockdowns along with the persistent existential threat sucked all my enthusiasm out of me. Nothing felt interesting anymore, not even gaming, one of the few joys of my life. 
I tried my hand at a lot of things, but nothing</p>]]></description><link>https://sayakm.me/trying-my-hand-at-game-dev/</link><guid isPermaLink="false">65d0b550bc9457aff61c8ecb</guid><category><![CDATA[GameDev]]></category><category><![CDATA[Programming]]></category><category><![CDATA[Unreal Engine]]></category><category><![CDATA[Blender]]></category><dc:creator><![CDATA[Sayak Mukhopadhyay]]></dc:creator><pubDate>Sun, 18 Sep 2022 13:09:06 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1556438064-2d7646166914?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fGdhbWVkZXZ8ZW58MHx8fHwxNzA4MjU3OTk1fDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1556438064-2d7646166914?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fGdhbWVkZXZ8ZW58MHx8fHwxNzA4MjU3OTk1fDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Trying my hand at game dev"><p>The last few years have been draining. The constant lockdowns along with the persistent existential threat sucked all my enthusiasm out of me. Nothing felt interesting anymore, not even gaming, one of the few joys of my life. I tried my hand at a lot of things, but nothing was interesting enough to keep me at it for long. </p><figure class="kg-card kg-image-card"><img src="https://sayakm.me/content/images/2022/09/Screenshot_12.png" class="kg-image" alt="Trying my hand at game dev" loading="lazy" width="237" height="236"></figure><p>That is until I decided to try my hand at game development.</p><p>Now, I am a professional developer, so writing code is no big thing for me. And thinking that, I dove right into the murky waters. A few years back, I had tried using Unity and even made a couple of prototypes, but nothing fancy. This time, I decided that one way of keeping things interesting is to have a goal. 
And the goal has to be good enough to keep working towards it. So, I decided to work on a game idea that I had, slightly complicated but interesting nonetheless. And I felt Unreal Engine would be a better choice for something serious.</p><p>First, I started by researching the tools I would need. I downloaded Unreal Engine 5 and Blender. I had some experience with Blender but that was around 9 years ago. I also downloaded Krita for designing the props. With the tools in place, I started with my first Blueprint. I imported a mannequin from the UE store and realised that if I wanted to animate them, I needed a gun.</p><p>So, off I went to design a carbine! And that is when I came to know about Hard Surface workflows in Blender. And it brought me such happiness! Learning about the non-destructive workflows for modelling reminded me of all the CAD software I had used in the past (I have a background in Mechanical Engineering).</p><figure class="kg-card kg-image-card"><img src="https://sayakm.me/content/images/2022/09/Untitled2-2.png" class="kg-image" alt="Trying my hand at game dev" loading="lazy" width="229" height="235"></figure><p>But that joy was short-lived as I soon found all the difficulties around using only booleans. The biggest pain is bevelling. Applying a bevel to an edge that underwent a boolean operation is possible, but I can&apos;t modify the edge weight of such an edge. And it has been making me mad. 
I keep thinking about how easy it would have been in a CAD program, like an ex I can&apos;t forget about.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://sayakm.me/content/images/2022/09/Screenshot-2022-09-18-181539.png" class="kg-image" alt="Trying my hand at game dev" loading="lazy" width="1922" height="1033" srcset="https://sayakm.me/content/images/size/w600/2022/09/Screenshot-2022-09-18-181539.png 600w, https://sayakm.me/content/images/size/w1000/2022/09/Screenshot-2022-09-18-181539.png 1000w, https://sayakm.me/content/images/size/w1600/2022/09/Screenshot-2022-09-18-181539.png 1600w, https://sayakm.me/content/images/2022/09/Screenshot-2022-09-18-181539.png 1922w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">The edges cut with a boolean cutter can&apos;t be given weights without applying the boolean</span></figcaption></figure><p>But, I really don&apos;t want to give up and start modelling in a CAD right now. I really want to do this the &quot;right&quot; way and it seems like no one uses a CAD for game development. So, once I find the resolve to apply the cutter, I will do it and then bevel manually.</p><p>I don&apos;t think programming the game is going to be as challenging for me as the other tasks like modelling. This gun itself has already taken me over 2 weeks of modelling work and 2 weeks of designing in Krita. It seems like it&apos;s going to take at least 2 weeks more before I am anywhere close to finishing it. Moreover, I am mostly working on this in my free time and weekends (I have a full-time job!).</p><p>I am planning to blog about my progress in this game without giving too much away. 
This is the first post, so expect more of them in the future.</p><p>Now back to figuring out if I can bevel that edge non-destructively!</p>]]></content:encoded></item><item><title><![CDATA[Deploying a HA Kubernetes cluster on Raspberry Pi using Kubeadm]]></title><description><![CDATA[<div class="kg-card kg-callout-card kg-callout-card-grey"><div class="kg-callout-emoji">&#x2753;</div><div class="kg-callout-text">Before I even begin, please note that things change fast in the Kubernetes space and unless you are using the exact same versions as mentioned in this post, this post might not be suitable for you.</div></div><p>I have been working on Kubernetes at work for a little less than</p>]]></description><link>https://sayakm.me/deploying-a-ha-kubernetes-cluster-on-raspberry-pi-using-kubeadm/</link><guid isPermaLink="false">65d0b550bc9457aff61c8eca</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Sysadmin]]></category><dc:creator><![CDATA[Sayak Mukhopadhyay]]></dc:creator><pubDate>Thu, 23 Jun 2022 15:04:23 GMT</pubDate><media:content url="https://sayakm.me/content/images/2022/06/IMG_20211020_182351_copy-1.jpg" medium="image"/><content:encoded><![CDATA[<div class="kg-card kg-callout-card kg-callout-card-grey"><div class="kg-callout-emoji">&#x2753;</div><div class="kg-callout-text">Before I even begin, please note that things change fast in the Kubernetes space and unless you are using the exact same versions as mentioned in this post, this post might not be suitable for you.</div></div><img src="https://sayakm.me/content/images/2022/06/IMG_20211020_182351_copy-1.jpg" alt="Deploying a HA Kubernetes cluster on Raspberry Pi using Kubeadm"><p>I have been working on Kubernetes at work for a little less than a year, but my job mostly involved setting up managed clusters from a cloud provider and deploying applications and tools on said clusters. I hadn&apos;t had the opportunity to set up a cluster by myself. 
But, I always wanted to set up a bare metal cluster one day.</p><p>Why bare metal, you say? Well, using VMs to set up a cluster didn&apos;t seem right as a lot of the networking would still be managed. I wanted to get my hands dirty with the difficult bits. This is where the Raspberry Pis come in. I had been eyeing some Pis for a project for some time but never got around to it, and this seemed like the perfect opportunity.</p><h2 id="getting-started">Getting Started</h2><h3 id="hardware-needed">Hardware needed</h3><ol>
<li>Raspberry Pi 4B 8GB - 6 units - <a href="#nr1">note 1</a></li>
<li>Power adapters for Raspberry Pis - 6 units - <a href="#nr2">note 2</a></li>
<li>Unmanaged switch with minimum 7 ports - <a href="#nr3">note 3</a></li>
<li>Ethernet cables - 6 units - <a href="#nr4">note 4</a></li>
<li>Ethernet cable - 1 unit - <a href="#nr5">note 5</a></li>
<li>6 layer Raspberry Pi rack - <a href="#nr6">note 6</a></li>
<li>32 GB microSD cards - 6 units</li>
<li>Keyboard - <a href="#nr7">note 7</a></li>
<li>Monitor - <a href="#nr7">note 7</a></li>
<li>HDMI to micro HDMI cable - <a href="#nr7">note 7</a></li>
<li>PC with microSD slot or microSD adapter</li>
</ol>
<h3 id="software-needed">Software needed</h3><ol>
<li>Raspberry Pi Imager - <a href="https://www.raspberrypi.com/software/?ref=sayakm.me">https://www.raspberrypi.com/software/</a></li>
<li>Windows Susbsystem for Linux (WSL2) - <a href="#nr8">note 8</a></li>
<li>Ubuntu 20.04.3-raspberry pi distribution  - <a href="#nr9">note 9</a></li>
</ol>
<h3 id="notes-regarding-the-above-requirements">Notes regarding the above requirements</h3><p><span id="nr1" class="link">1.</span> You can use models with 4GB of RAM too but anything below that is cutting it close.<br>
<span id="nr2" class="link">2.</span> You can instead use a 6 port USB power supply but ensure that it can provide a minimum of 91.8W, i.e. six times the official 15.3W (5.1V, 3A) per-Pi supply. You will also need 6 units of USB cables with Type-C port for the Raspberry Pi side.<br>
<span id="nr3" class="link">3.</span> You can also use a managed switch but I can&apos;t guarantee the below steps will work.<br>
<span id="nr4" class="link">4.</span> Try to get short cables to have a compact solution.<br>
<span id="nr5" class="link">5.</span> This cable will connect your router to the switch so get a size that works for you.<br>
<span id="nr6" class="link">6.</span> Or use multiple cases or keep them lying on the desk, doesn&apos;t really matter but having a cluster case can make the setup very compact.<br>
<span id="nr7" class="link">7.</span> You might not need to connect your Pi with a keyboard and monitor if you can directly SSH into it at first boot. You would need to have a PC anyway.<br>
<span id="nr8" class="link">8.</span> Not really needed if your primary system is not Windows, or if you are comfortable using ssh in Windows or want to use a better terminal.<br>
<span id="nr9" class="link">9.</span> Not needed to be downloaded manually.</p>
<h2 id="setup">Setup</h2><p>The 6 Raspberry Pis will be used for creating 3 control plane nodes and 3 worker nodes. All of them will be running <a href="https://releases.ubuntu.com/20.04/?ref=sayakm.me">Ubuntu 20.04.3</a> and will be using <a href="https://containerd.io/?ref=sayakm.me">containerd</a> as the container runtime. We will be using <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/?ref=sayakm.me">Kubeadm</a> to install Kubernetes in a <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/?ref=sayakm.me#stacked-etcd-topology">Highly Available, stacked etcd topology</a>.</p><h3 id="initialise-microsd-cards">Initialise microSD cards</h3><p>First, we will need to setup our microSD cards. For this, we will use the <a href="https://www.raspberrypi.com/software/?ref=sayakm.me">Raspberry PI Imager</a> and we will install Ubuntu 20.04.3 on all our microSD cards.</p><p>Insert each microSD card into your computer&apos;s card reader and do the following for each card.</p><p>Open the Raspberry Pi Imager software and click the <code>CHOOSE OS</code> button.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://sayakm.me/content/images/2021/11/New-Bitmap-image.jpg" class="kg-image" alt="Deploying a HA Kubernetes cluster on Raspberry Pi using Kubeadm" loading="lazy" width="677" height="447" srcset="https://sayakm.me/content/images/size/w600/2021/11/New-Bitmap-image.jpg 600w, https://sayakm.me/content/images/2021/11/New-Bitmap-image.jpg 677w"><figcaption><span style="white-space: pre-wrap;">Raspberry Pi Imager main screen</span></figcaption></figure><p>Then in the list, click on <code>Other General Purpose OS</code>.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://sayakm.me/content/images/2021/11/Screenshot_3.png" class="kg-image" alt="Deploying a HA Kubernetes cluster on Raspberry Pi using Kubeadm" loading="lazy" width="676" height="445" 
srcset="https://sayakm.me/content/images/size/w600/2021/11/Screenshot_3.png 600w, https://sayakm.me/content/images/2021/11/Screenshot_3.png 676w"><figcaption><span style="white-space: pre-wrap;">OS selection screen</span></figcaption></figure><p>Select <code>Ubuntu</code> in the next screen.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://sayakm.me/content/images/2021/11/Screenshot_4.png" class="kg-image" alt="Deploying a HA Kubernetes cluster on Raspberry Pi using Kubeadm" loading="lazy" width="584" height="364"><figcaption><span style="white-space: pre-wrap;">List of General Purpose OSes</span></figcaption></figure><p>Then select <code>Ubuntu Server 20.04.3 LTS (RPI 3/4/400)</code>. Ensure that you are selecting the one marked <code>64-bit</code> with support for the <code>arm64</code> architecture.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://sayakm.me/content/images/2021/11/Screenshot_5.png" class="kg-image" alt="Deploying a HA Kubernetes cluster on Raspberry Pi using Kubeadm" loading="lazy" width="445" height="285"><figcaption><span style="white-space: pre-wrap;">List of Ubuntu builds for Raspberry Pi</span></figcaption></figure><p>Then click on <code>CHOOSE SD CARD</code>. A window should pop up listing the inserted removable drives. Carefully choose the drive which maps to your microSD card. Then click on <code>WRITE</code>. Once the writing is completed, you can eject the microSD card and proceed with writing the next card.</p><div class="kg-card kg-callout-card kg-callout-card-grey"><div class="kg-callout-emoji">&#x2757;</div><div class="kg-callout-text">If you want to hot plug a monitor to your Pis, you will need to update some configuration on the microSD card. 
With the microSD card still connected to the PC, go to the drive (<code spellcheck="false" style="white-space: pre-wrap;">F:</code> in my case) and edit the file <code spellcheck="false" style="white-space: pre-wrap;">usercfg.txt</code> (so, in my case that would be <code spellcheck="false" style="white-space: pre-wrap;">F:\usercfg.txt</code>). This file can also be changed from the OS itself. It&apos;s located at <code spellcheck="false" style="white-space: pre-wrap;">/boot/firmware/usercfg.txt</code>. Ensure that it&apos;s as follows:</div></div><figure class="kg-card kg-code-card"><pre><code class="language-configuration"># Place &quot;config.txt&quot; changes (dtparam, dtoverlay, disable_overscan, etc.) in
# this file. Please refer to the README file for a description of the various
# configuration files on the boot partition.

hdmi_force_hotplug=1
hdmi_group=2
hdmi_mode=82
</code></pre><figcaption><p><span style="white-space: pre-wrap;">usercfg.txt</span></p></figcaption></figure><p>The values for <code>hdmi_group</code> and <code>hdmi_mode</code> might differ based on your needs. If you are connecting a monitor, use <code>hdmi_group=2</code>. If you are connecting a TV instead, use <code>hdmi_group=1</code>. To find out the <code>hdmi_mode</code> for yourself, check <a href="https://www.raspberrypi.com/documentation/computers/config_txt.html?ref=sayakm.me#hdmi_mode">https://www.raspberrypi.com/documentation/computers/config_txt.html#hdmi_mode</a></p><p>When you have written all the cards you can pop them into each of your Pis. Don&apos;t power on the Pis just yet. Connect the Pis to the switch using the ethernet cables and ensure that the switch is connected to the router. Then, turn on each Pi and proceed with the Network setup.</p><h3 id="network-setup">Network Setup</h3><p>Once the Pis are turned on, connect the monitor and keyboard to each Pi in turn and proceed as follows:</p><div class="kg-card kg-callout-card kg-callout-card-grey"><div class="kg-callout-emoji">&#x2757;</div><div class="kg-callout-text">Change the hostnames to what you prefer. Don&apos;t copy the ones in here as they won&apos;t work.</div></div><ol>
<li>
<p>Change the hostname with <code>sudo hostnamectl set-hostname clstr-01-cp-01</code>. This will update the file <code>/etc/hostname</code> with the new hostname.<br>
The <code>hostname</code> file only keeps track of the system hostname and should not contain an FQDN.</p>
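<p>Before applying a hostname, it can be worth sanity-checking it. As a minimal sketch (the <code>is_valid_hostname</code> helper is hypothetical, not part of any tool used here), a valid RFC 1123 hostname label is lowercase alphanumerics and hyphens, at most 63 characters, with no leading or trailing hyphen:</p>

```shell
# Hypothetical helper: validate a proposed hostname label against the
# RFC 1123 rules before passing it to hostnamectl.
is_valid_hostname() {
    echo "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$'
}

is_valid_hostname "clstr-01-cp-01" && echo "valid"  # prints "valid"
```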
</li>
<li>
<p>Open <code>/etc/hosts</code> and add the following:</p>
<pre><code>127.0.1.1 clstr-01-cp-01 clstr-01-cp-01.sayakm.me
</code></pre>
<p>So the complete file looks like</p>
<pre><code>127.0.0.1 localhost
127.0.1.1 clstr-01-cp-01 clstr-01-cp-01.sayakm.me

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
</code></pre>
<p>This will ensure that calling by the hostname or the FQDN from within the system will loopback.</p>
</li>
<li>
<p>The remaining network changes are made with <code>netplan</code>. This tool manages <code>resolv.conf</code> via a symlink, so <strong>DON&apos;T</strong> edit <code>resolv.conf</code> directly. Moreover, <code>dhcpcd.conf</code> doesn&apos;t exist on Ubuntu, so DHCP settings also have to be changed through <code>netplan</code>, which writes its generated config to <code>/run/systemd/network</code> so that it is flushed and recreated at boot (which is why <code>/etc/systemd/network/</code> is empty). To configure <code>netplan</code>, create a new file <code>/etc/netplan/60-static-ip.yaml</code>; because of its higher prefix it takes precedence over the existing file, which we leave untouched. Add the following to the file</p>
<pre><code class="language-yaml">network:
    version: 2
    ethernets:
        eth0:
            dhcp4: no
            addresses: [192.168.0.50/24]
            gateway4: 192.168.0.1
            nameservers:
                addresses: [8.8.8.8,8.8.4.4,1.1.1.1,1.0.0.1]
</code></pre>
<p>The above turns off DHCP, which means we must manually configure the gateway and nameservers that were previously provided by the DHCP server. The <code>addresses</code> entry sets not only the static IP but also the subnet mask. <code>gateway4</code> is the router&apos;s address, and the <code>nameservers</code> are Google&apos;s and Cloudflare&apos;s public resolvers. Finally, run <code>sudo netplan apply</code> to commit these changes.</p>
</li>
<li>
<p>Finally we will update the <code>/etc/hosts</code> file so that the FQDNs are resolved properly when called from node to node. Add the following to the file on each node</p>
<pre><code>192.168.0.50 clstr-01-cp-01 clstr-01-cp-01.sayakm.me
192.168.0.51 clstr-01-cp-02 clstr-01-cp-02.sayakm.me
192.168.0.52 clstr-01-cp-03 clstr-01-cp-03.sayakm.me
192.168.0.100 clstr-01-nd-01 clstr-01-nd-01.sayakm.me
192.168.0.101 clstr-01-nd-02 clstr-01-nd-02.sayakm.me
192.168.0.102 clstr-01-nd-03 clstr-01-nd-03.sayakm.me
</code></pre>
<p>so that the file finally looks like this</p>
<pre><code>127.0.0.1 localhost
127.0.1.1 clstr-01-cp-01 clstr-01-cp-01.sayakm.me

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

192.168.0.50 clstr-01-cp-01 clstr-01-cp-01.sayakm.me
192.168.0.51 clstr-01-cp-02 clstr-01-cp-02.sayakm.me
192.168.0.52 clstr-01-cp-03 clstr-01-cp-03.sayakm.me
192.168.0.100 clstr-01-nd-01 clstr-01-nd-01.sayakm.me
192.168.0.101 clstr-01-nd-02 clstr-01-nd-02.sayakm.me
192.168.0.102 clstr-01-nd-03 clstr-01-nd-03.sayakm.me
</code></pre>
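<p>Since every line follows the same <code>IP hostname hostname.domain</code> pattern, the block above can also be generated with a small loop, which avoids copy-paste slips in the FQDN column. A minimal sketch (the <code>gen_hosts</code> function is hypothetical; adjust the hostnames, IPs and domain to your setup):</p>

```shell
# Hypothetical generator: print one /etc/hosts line per "IP hostname"
# pair, deriving the FQDN from the hostname so they can never disagree.
gen_hosts() {
    for pair in \
        "192.168.0.50 clstr-01-cp-01" \
        "192.168.0.51 clstr-01-cp-02" \
        "192.168.0.52 clstr-01-cp-03" \
        "192.168.0.100 clstr-01-nd-01" \
        "192.168.0.101 clstr-01-nd-02" \
        "192.168.0.102 clstr-01-nd-03"
    do
        set -- $pair
        printf '%s %s %s.sayakm.me\n' "$1" "$2" "$2"
    done
}

# Append the generated lines to /etc/hosts with e.g.:
#   gen_hosts | sudo tee -a /etc/hosts
gen_hosts
```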
</li>
<li>
<p>Now we start installing the Kubernetes-related components, beginning with the <code>containerd</code> container runtime. Run</p>
<pre><code class="language-sh">sudo apt-get update
sudo apt-get install containerd
</code></pre>
<p>Next, we have to initialise the default config if not present. Check if the folder <code>/etc/containerd</code> exists. If not, create it.</p>
<pre><code class="language-sh">sudo mkdir -p /etc/containerd
</code></pre>
<p>Then create the <code>config.toml</code> file.</p>
<pre><code class="language-sh">containerd config default | sudo tee /etc/containerd/config.toml
</code></pre>
<p>Next, we need to configure <code>runc</code> to use the <code>systemd</code> cgroup driver. In the above <code>config.toml</code> file, update the following section:</p>
<pre><code>[plugins.&quot;io.containerd.grpc.v1.cri&quot;.containerd.runtimes.runc]
...
[plugins.&quot;io.containerd.grpc.v1.cri&quot;.containerd.runtimes.runc.options]
    SystemdCgroup = true
</code></pre>
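<p>Instead of editing <code>config.toml</code> by hand, the flag can be flipped with <code>sed</code>. This is a sketch that assumes the file contains exactly one <code>SystemdCgroup = false</code> line, as containerd&apos;s generated default config does on recent versions (verify on your system); the <code>enable_systemd_cgroup</code> helper is hypothetical:</p>

```shell
# Hypothetical helper: switch SystemdCgroup from false to true in a
# containerd config file. Run against the real file with sudo, e.g.:
#   sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
enable_systemd_cgroup() {
    sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$1"
}
```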
<p>Then restart the <code>containerd</code> service.</p>
<pre><code class="language-sh">sudo systemctl restart containerd
</code></pre>
</li>
<li>
<p>We need to load some networking modules for iptables to work. Load the <code>br_netfilter</code> module.</p>
<pre><code class="language-sh">sudo modprobe br_netfilter
</code></pre>
<p>We also need to ensure that this module is always loaded on boot. So add it to a file in the <code>modules-load.d</code> folder.</p>
<pre><code class="language-sh">cat &lt;&lt;EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
</code></pre>
<p>Then, to ensure that the node&apos;s iptables correctly sees bridged traffic, add the following to <code>/etc/sysctl.d/k8s.conf</code> and reload sysctl.</p>
<pre><code class="language-sh">cat &lt;&lt;EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-iptables  = 1
EOF
sudo sysctl --system
</code></pre>
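<p>To double-check that the fragment was written correctly before relying on it, you can parse it and confirm every key is set to <code>1</code>. A minimal sketch (the <code>check_k8s_sysctl</code> helper is hypothetical; you can equally verify the live values with <code>sudo sysctl net.ipv4.ip_forward</code>):</p>

```shell
# Hypothetical check: exit non-zero if any "key = value" line in the
# given sysctl fragment has a value other than 1.
check_k8s_sysctl() {
    awk -F= 'NF==2 { gsub(/ /,""); if ($2 != "1") bad=1 } END { exit bad }' "$1"
}

# Usage on a real node:
#   check_k8s_sysctl /etc/sysctl.d/k8s.conf && echo "all keys set to 1"
```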
<p>Restart the <code>containerd</code> service.</p>
<pre><code class="language-sh">sudo systemctl restart containerd
</code></pre>
</li>
<li>
<p>Now we are going to install Kubernetes and related packages. Start by<br>
updating the packages and installing <code>apt-transport-https</code> and <code>curl</code>. Then add the GPG key of the repo.</p>
<pre><code class="language-sh">sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
</code></pre>
<p>Then add the Kubernetes <code>apt</code> repository. This repo still doesn&apos;t have a folder newer than <code>xenial</code>.</p>
<pre><code class="language-sh">echo &quot;deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main&quot; | sudo tee /etc/apt/sources.list.d/kubernetes.list
</code></pre>
<p>Then install <code>kubelet</code>, <code>kubeadm</code>, and <code>kubectl</code> and pin their versions.</p>
<pre><code class="language-sh">sudo apt-get update
sudo apt-get install kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
</code></pre>
<p>The last command is used to ensure that upgrades don&apos;t change their versions.</p>
</li>
<li>
<p>It is also important to have swap disabled permanently. It may already be disabled if you installed on an 8GB model. If it&apos;s not disabled, disable it using the following</p>
<pre><code>sudo sed -i &apos;/ swap / s/^\(.*\)$/#\1/g&apos; /etc/fstab
sudo swapoff -a
</code></pre>
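<p>To see what that <code>sed</code> expression actually does before pointing it at the real <code>/etc/fstab</code>, here is a sketch run against a throwaway copy (the file contents below are hypothetical examples):</p>

```shell
# Demonstrate the fstab edit on a temporary file: every line containing
# " swap " gets a "#" prepended, which disables that mount at boot.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
LABEL=writable / ext4 defaults 0 1
/swapfile none swap sw 0 0
EOF

sed -i '/ swap / s/^\(.*\)$/#\1/g' "$tmp"
cat "$tmp"
# The root mount is untouched; the swap line now reads:
# #/swapfile none swap sw 0 0
```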
</li>
<li>
<p>Next we need to ensure that the memory cgroup is enabled. To check, run <code>cat /proc/cgroups</code> and see whether the value of <code>enabled</code> for <code>memory</code> is <code>1</code>. If not, we have to edit the file <code>/boot/firmware/cmdline.txt</code> and add the following to the end of the line</p>
<pre><code>cgroup_enable=memory
</code></pre>
<p>Then reboot the system. After the reboot, check again if the value of <code>enabled</code> for <code>memory</code> is <code>1</code> or not.</p>
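<p>The manual check of <code>/proc/cgroups</code> can be scripted with <code>awk</code>, since the file is a whitespace-separated table with columns <code>subsys_name</code>, <code>hierarchy</code>, <code>num_cgroups</code> and <code>enabled</code>. A minimal sketch (the <code>memory_cgroup_enabled</code> helper is hypothetical):</p>

```shell
# Hypothetical helper: print the "enabled" column (field 4) for the
# memory controller from a /proc/cgroups-style table.
memory_cgroup_enabled() {
    awk '$1 == "memory" { print $4 }' "$1"
}

# Usage on a real node:
#   [ "$(memory_cgroup_enabled /proc/cgroups)" = "1" ] && echo "memory cgroup enabled"
```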
</li>
<li>
<p>(<strong>FIRST MASTER ONLY</strong>) The next steps set the system up for a Highly Available control plane. This necessitates a load balancer, and in this case we are going to use a software-based load balancer called <code>kube-vip</code>. To install <code>kube-vip</code>, we first create a config file that is converted into a manifest, which <code>kubeadm</code> uses during initialisation to create a static pod for <code>kube-vip</code>. Start by pulling the <code>kube-vip</code> image and creating an alias to run the container.</p>
<pre><code class="language-sh">sudo ctr images pull ghcr.io/kube-vip/kube-vip:v0.4.0
alias kube-vip=&quot;sudo ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:v0.4.0 vip /kube-vip&quot;
</code></pre>
<p>You can optionally store the above alias permanently in <code>~/.bash_aliases</code>.</p>
<pre><code class="language-sh">echo alias kube-vip=\&quot;sudo ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:v0.4.0 vip /kube-vip\&quot; | tee -a ~/.bash_aliases

. ~/.bashrc
</code></pre>
</li>
<li>
<p>(<strong>FIRST MASTER ONLY</strong>) Next generate the manifest for the static pod. We are turning on HA for the control plane, and load balancers for both the control plane and the worker nodes.</p>
<pre><code class="language-sh">kube-vip manifest pod \
    --interface eth0 \
    --vip 192.168.0.150 \
    --controlplane \
    --services \
    --arp \
    --leaderElection \
    --enableLoadBalancer | sudo tee /etc/kubernetes/manifests/kube-vip.yaml
</code></pre>
</li>
<li>
<p>(<strong>FIRST MASTER ONLY</strong>) Finally, we initialise the cluster using <code>kubeadm init</code>. We need to ensure that the cluster is initialised with the same Kubernetes version as the installed <code>kubeadm</code>, so first capture it:</p>
<pre><code class="language-sh">KUBE_VERSION=$(sudo kubeadm version -o short)
</code></pre>
<p>Then, initialise the cluster</p>
<pre><code class="language-sh">sudo kubeadm init --control-plane-endpoint &quot;192.168.0.150:6443&quot; --upload-certs --kubernetes-version=$KUBE_VERSION --pod-network-cidr=10.244.0.0/16
</code></pre>
<p><code>--control-plane-endpoint</code> is the IP address of the load balancer as set earlier in <code>kube-vip</code>.</p>
<p><code>--upload-certs</code> is used to upload the certificates to the kubernetes cluster automatically without us needing to supply them.</p>
<p><code>--kubernetes-version</code> is the version of Kubernetes that you are using, so that newer versions are not automatically used.</p>
<p><code>--pod-network-cidr</code> is the CIDR block for the pod network. This is necessary for Flannel to work.</p>
<p>The output should look something like this:</p>
<pre><code>[init] Using Kubernetes version: v1.22.2
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: missing optional cgroups: hugetlb
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using &apos;kubeadm config images pull&apos;
[certs] Using certificateDir folder &quot;/etc/kubernetes/pki&quot;
[certs] Generating &quot;ca&quot; certificate and key
[certs] Generating &quot;apiserver&quot; certificate and key
[certs] apiserver serving cert is signed for DNS names [clstr-01-cp-01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.50 192.168.0.150]
[certs] Generating &quot;apiserver-kubelet-client&quot; certificate and key
[certs] Generating &quot;front-proxy-ca&quot; certificate and key
[certs] Generating &quot;front-proxy-client&quot; certificate and key
[certs] Generating &quot;etcd/ca&quot; certificate and key
[certs] Generating &quot;etcd/server&quot; certificate and key
[certs] etcd/server serving cert is signed for DNS names [clstr-01-cp-01 localhost] and IPs [192.168.0.50 127.0.0.1 ::1]
[certs] Generating &quot;etcd/peer&quot; certificate and key
[certs] etcd/peer serving cert is signed for DNS names [clstr-01-cp-01 localhost] and IPs [192.168.0.50 127.0.0.1 ::1]
[certs] Generating &quot;etcd/healthcheck-client&quot; certificate and key
[certs] Generating &quot;apiserver-etcd-client&quot; certificate and key
[certs] Generating &quot;sa&quot; key and public key
[kubeconfig] Using kubeconfig folder &quot;/etc/kubernetes&quot;
[kubeconfig] Writing &quot;admin.conf&quot; kubeconfig file
[kubeconfig] Writing &quot;kubelet.conf&quot; kubeconfig file
[kubeconfig] Writing &quot;controller-manager.conf&quot; kubeconfig file
[kubeconfig] Writing &quot;scheduler.conf&quot; kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file &quot;/var/lib/kubelet/kubeadm-flags.env&quot;
[kubelet-start] Writing kubelet configuration to file &quot;/var/lib/kubelet/config.yaml&quot;
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder &quot;/etc/kubernetes/manifests&quot;
[control-plane] Creating static Pod manifest for &quot;kube-apiserver&quot;
[control-plane] Creating static Pod manifest for &quot;kube-controller-manager&quot;
[control-plane] Creating static Pod manifest for &quot;kube-scheduler&quot;
[etcd] Creating static Pod manifest for local etcd in &quot;/etc/kubernetes/manifests&quot;
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory &quot;/etc/kubernetes/manifests&quot;. This can take up to 4m0s
[apiclient] All control plane components are healthy after 29.562791 seconds
[upload-config] Storing the configuration used in ConfigMap &quot;kubeadm-config&quot; in the &quot;kube-system&quot; Namespace
[kubelet] Creating a ConfigMap &quot;kubelet-config-1.22&quot; in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret &quot;kubeadm-certs&quot; in the &quot;kube-system&quot; Namespace
[upload-certs] Using certificate key:
feb5064c88b7e3a154b5deb1d6fb379036e7a4b76862fcf08c742db7031624d9
[mark-control-plane] Marking the node clstr-01-cp-01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node clstr-01-cp-01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 39w134.5w0n8s3ktz63rv47
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the &quot;cluster-info&quot; ConfigMap in the &quot;kube-public&quot; namespace
[kubelet-finalize] Updating &quot;/etc/kubernetes/kubelet.conf&quot; to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run &quot;kubectl apply -f [podnetwork].yaml&quot; with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

kubeadm join 192.168.0.150:6443 --token REDACTED \
        --discovery-token-ca-cert-hash sha256:REDACTED \
        --control-plane --certificate-key REDACTED

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
&quot;kubeadm init phase upload-certs --upload-certs&quot; to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.150:6443 --token REDACTED \
        --discovery-token-ca-cert-hash sha256:REDACTED
</code></pre>
<p>The <code>token</code> has a TTL of 24 hours and the uploaded certs will be deleted in two hours. So, if the other nodes are to be joined later than that, we need to re-generate the token and upload the certs again.</p>
<p>a. First, check if tokens are available</p>
<pre><code class="language-sh">sudo kubeadm token list
</code></pre>
<p>If nothing is shown, proceed with generating the token. Otherwise, we can use an existing token whose usage includes <code>authentication,signing</code></p>
<pre><code class="language-sh">sudo kubeadm token create
</code></pre>
<p>b. If during <code>kubeadm join</code>, any certificate errors come up, re-upload the certificates using</p>
<pre><code class="language-sh">sudo kubeadm init phase upload-certs --upload-certs
</code></pre>
</li>
<li>
<p>(<strong>OTHER MASTER ONLY</strong>) To add additional control plane nodes, use the <code>kubeadm join</code> command. As before, first capture the version:</p>
<pre><code class="language-sh">KUBE_VERSION=$(sudo kubeadm version -o short)
</code></pre>
<p>Then, join the cluster</p>
<pre><code class="language-sh">sudo kubeadm join 192.168.0.150:6443 --token REDACTED --discovery-token-ca-cert-hash sha256:REDACTED --control-plane --certificate-key feb5064c88b7e3a154b5deb1d6fb379036e7a4b76862fcf08c742db7031624d9
</code></pre>
<p>The output should look something like this:</p>
<pre><code>[preflight] Running pre-flight checks
    [WARNING SystemVerification]: missing optional cgroups: hugetlb
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with &apos;kubectl -n kube-system get cm kubeadm-config -o yaml&apos;
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using &apos;kubeadm config images pull&apos;
[download-certs] Downloading the certificates in Secret &quot;kubeadm-certs&quot; in the &quot;kube-system&quot; Namespace
[certs] Using certificateDir folder &quot;/etc/kubernetes/pki&quot;
[certs] Generating &quot;front-proxy-client&quot; certificate and key
[certs] Generating &quot;etcd/server&quot; certificate and key
[certs] etcd/server serving cert is signed for DNS names [clstr-01-cp-03 localhost] and IPs [192.168.0.52 127.0.0.1 ::1]
[certs] Generating &quot;etcd/peer&quot; certificate and key
[certs] etcd/peer serving cert is signed for DNS names [clstr-01-cp-03 localhost] and IPs [192.168.0.52 127.0.0.1 ::1]
[certs] Generating &quot;etcd/healthcheck-client&quot; certificate and key
[certs] Generating &quot;apiserver-etcd-client&quot; certificate and key
[certs] Generating &quot;apiserver&quot; certificate and key
[certs] apiserver serving cert is signed for DNS names [clstr-01-cp-03 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.52 192.168.0.150]
[certs] Generating &quot;apiserver-kubelet-client&quot; certificate and key
[certs] Valid certificates and keys now exist in &quot;/etc/kubernetes/pki&quot;
[certs] Using the existing &quot;sa&quot; key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder &quot;/etc/kubernetes&quot;
[kubeconfig] Writing &quot;admin.conf&quot; kubeconfig file
[kubeconfig] Writing &quot;controller-manager.conf&quot; kubeconfig file
[kubeconfig] Writing &quot;scheduler.conf&quot; kubeconfig file
[control-plane] Using manifest folder &quot;/etc/kubernetes/manifests&quot;
[control-plane] Creating static Pod manifest for &quot;kube-apiserver&quot;
[control-plane] Creating static Pod manifest for &quot;kube-controller-manager&quot;
[control-plane] Creating static Pod manifest for &quot;kube-scheduler&quot;
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file &quot;/var/lib/kubelet/config.yaml&quot;
[kubelet-start] Writing kubelet environment file with flags to file &quot;/var/lib/kubelet/kubeadm-flags.env&quot;
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for &quot;etcd&quot;
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The &apos;update-status&apos; phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node clstr-01-cp-03 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node clstr-01-cp-03 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run &apos;kubectl get nodes&apos; to see this node join the cluster.
</code></pre>
</li>
<li>
<p>(<strong>OTHER MASTER ONLY</strong>) Then, perform step 11 on the other master nodes. This is because <code>kubeadm</code> doesn&apos;t like a non-empty <code>/etc/kubernetes/manifests</code> folder.</p>
</li>
<li>
<p>(<strong>WORKER ONLY</strong>) Then, we join the worker nodes into the cluster using <code>kubeadm join</code>. We need to ensure that the Kubernetes version used matches the installed <code>kubeadm</code> version. So, first do</p>
<pre><code class="language-sh">KUBE_VERSION=$(sudo kubeadm version -o short)
</code></pre>
<p>Then, join the cluster</p>
<pre><code class="language-sh">sudo kubeadm join 192.168.0.150:6443 --token REDACTED --discovery-token-ca-cert-hash sha256:REDACTED
</code></pre>
<p>The output should look something like this</p>
<pre><code>[preflight] Running pre-flight checks
    [WARNING SystemVerification]: missing optional cgroups: hugetlb
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with &apos;kubectl -n kube-system get cm kubeadm-config -o yaml&apos;
[kubelet-start] Writing kubelet configuration to file &quot;/var/lib/kubelet/config.yaml&quot;
[kubelet-start] Writing kubelet environment file with flags to file &quot;/var/lib/kubelet/kubeadm-flags.env&quot;
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run &apos;kubectl get nodes&apos; on the control-plane to see this node join the cluster.
</code></pre>
</li>
<li>
<p>(<strong>MASTER ONLY</strong>) Next follow the instructions in the above output to create the kubectl config file in the home directory.</p>
<pre><code class="language-sh">mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
</code></pre>
</li>
<li>
<p>(<strong>FIRST MASTER ONLY</strong>) Next we will deploy the pod network using Flannel, pinned to version 0.15.0; it&apos;s always better to anchor the version.</p>
<pre><code class="language-sh">kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/v0.15.0/Documentation/kube-flannel.yml
</code></pre>
</li>
<li>
<p>(<strong>WORKER ONLY</strong>) Next copy the kubectl config file to the worker nodes. We can&apos;t use <code>sudo</code> when copying from the remote system, and Ubuntu disables the root user, so we copy the config already present in the <code>$HOME/.kube/config</code> file on the master instead.</p>
<pre><code class="language-sh">mkdir -p $HOME/.kube
scp ubuntu@clstr-01-cp-01:~/.kube/config $HOME/.kube/config
</code></pre>
</li>
</ol>
<h2 id="accessing-the-cluster">Accessing the Cluster</h2><p>Now that the cluster is all set up, we can start connecting to it from outside the cluster. We do this by copying the config file from any one of the nodes. Run the following in PowerShell.</p><div class="kg-card kg-callout-card kg-callout-card-grey"><div class="kg-callout-emoji">&#x2757;</div><div class="kg-callout-text">Only running the <code spellcheck="false" style="white-space: pre-wrap;">scp</code> command will result in any existing <code spellcheck="false" style="white-space: pre-wrap;">config</code> file being overwritten, so ensure you back up that file with the <code spellcheck="false" style="white-space: pre-wrap;">Copy-Item</code> cmdlet as shown below.</div></div><pre><code class="language-powershell"># Back up any existing config file since the scp will overwrite it. This will fail if no config file is present
Copy-Item $HOME\.kube\config $HOME\.kube\config-bk

# We are using the IP directly as the hostname is not configured in the Windows system
scp ubuntu@192.168.0.50:~/.kube/config $HOME\.kube\config</code></pre><p>Now open a <a href="https://apps.microsoft.com/store/detail/windows-terminal/9N0DX20HK701?hl=en-in&amp;gl=IN&amp;ref=sayakm.me">terminal</a> and run <code>kubectl get nodes</code>. This should fetch all 6 nodes as follows.</p><pre><code class="language-powershell">NAME             STATUS   ROLES                  AGE    VERSION
clstr-01-cp-01   Ready    control-plane,master   224d   v1.22.2
clstr-01-cp-02   Ready    control-plane,master   222d   v1.22.2
clstr-01-cp-03   Ready    control-plane,master   222d   v1.22.2
clstr-01-nd-01   Ready    &lt;none&gt;                 224d   v1.22.2
clstr-01-nd-02   Ready    &lt;none&gt;                 224d   v1.22.2
clstr-01-nd-03   Ready    &lt;none&gt;                 224d   v1.22.2</code></pre><h3 id="thats-all-folks">That&apos;s all folks!</h3><p>And with that, you have a fully functional Kubernetes cluster ready to roll. There are some other challenges like persistent volumes but that&apos;s a blog post for another day.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://sayakm.me/content/images/2022/06/IMG_20211112_234951.jpg" class="kg-image" alt="Deploying a HA Kubernetes cluster on Raspberry Pi using Kubeadm" loading="lazy" width="2000" height="2667" srcset="https://sayakm.me/content/images/size/w600/2022/06/IMG_20211112_234951.jpg 600w, https://sayakm.me/content/images/size/w1000/2022/06/IMG_20211112_234951.jpg 1000w, https://sayakm.me/content/images/size/w1600/2022/06/IMG_20211112_234951.jpg 1600w, https://sayakm.me/content/images/2022/06/IMG_20211112_234951.jpg 2000w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">The pi cluster in its glory</span></figcaption></figure><p>If you have any questions or have spotted a mistake, feel free to <a href="https://twitter.com/defineSayak?ref=sayakm.me">tweet me</a>.</p>]]></content:encoded></item><item><title><![CDATA[Elite Dangerous Will NOT Be a Dead Game by 2024]]></title><description><![CDATA[<p>I recently saw a tweet about a dead horse getting beaten up one more time.</p>
<!--kg-card-begin: html-->
<blockquote class="twitter-tweet" data-theme="dark"><p lang="en" dir="ltr">Only just read this, and it&apos;s a couple of months old now, but it expresses the concerns and fears of many about the future of <a href="https://twitter.com/hashtag/EliteDangerous?src=hash&amp;ref_src=twsrc%5Etfw&amp;ref=sayakm.me">#EliteDangerous</a> and <a href="https://twitter.com/hashtag/elitedangerousodyssey?src=hash&amp;ref_src=twsrc%5Etfw&amp;ref=sayakm.me">#elitedangerousodyssey</a> worth a read. <a href="https://t.co/eikrb2KF31?ref=sayakm.me">https://t.</a></p></blockquote>]]></description><link>https://sayakm.me/elite-dangerous-will-not-be-a-dead-game-by-2024/</link><guid isPermaLink="false">65d0b550bc9457aff61c8ec9</guid><category><![CDATA[Elite Dangerous]]></category><dc:creator><![CDATA[Sayak Mukhopadhyay]]></dc:creator><pubDate>Wed, 13 Oct 2021 16:52:37 GMT</pubDate><media:content url="https://sayakm.me/content/images/2021/10/359320_20210925215900_1.png" medium="image"/><content:encoded><![CDATA[<img src="https://sayakm.me/content/images/2021/10/359320_20210925215900_1.png" alt="Elite Dangerous Will NOT Be a Dead Game by 2024"><p>I recently saw a tweet about a dead horse getting beaten up one more time.</p>
<!--kg-card-begin: html-->
<blockquote class="twitter-tweet" data-theme="dark"><p lang="en" dir="ltr">Only just read this, and it&apos;s a couple of months old now, but it expresses the concerns and fears of many about the future of <a href="https://twitter.com/hashtag/EliteDangerous?src=hash&amp;ref_src=twsrc%5Etfw&amp;ref=sayakm.me">#EliteDangerous</a> and <a href="https://twitter.com/hashtag/elitedangerousodyssey?src=hash&amp;ref_src=twsrc%5Etfw&amp;ref=sayakm.me">#elitedangerousodyssey</a> worth a read. <a href="https://t.co/eikrb2KF31?ref=sayakm.me">https://t.co/eikrb2KF31</a></p>&#x2014; Drew Wagar (@drewwagar) <a href="https://twitter.com/drewwagar/status/1447982388034916354?ref_src=twsrc%5Etfw&amp;ref=sayakm.me">October 12, 2021</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script> 
<!--kg-card-end: html-->
<p>I am on a holiday right now so I have plenty of time to rage about people on the internet sharing their negative opinion on something I am positive about and to write a 2200-word blog post on why I think they are wrong. So here goes. I will first dissect the above blog post and then provide my own opinion on Elite Dangerous.</p><p>Before that, let me introduce myself. I am a long time Elite Dangerous player and go by <a href="https://cmdrgarud.blog/?ref=sayakm.me">CMDR Garud</a> in game. I decided to put this out in my personal blog as these thoughts are not a great fit for my RP blog. Moreover, it involves discussing other games, which I wouldn&apos;t do while role playing as CMDR Garud. Consider these thoughts to be from Sayak and not from Garud.</p><h2 id="the-appeal-of-elite-dangerous">The Appeal of Elite Dangerous</h2><p>Here the writer is pretty much spot on. The fact that you can do pretty much whatever you want and go wherever you want with your ship in Elite Dangerous is a core appeal of the game. But the writer makes some assumptions about Elite Dangerous that are just not true.</p><p>In Elite Dangerous you are not truly alone. Sure, if you want to be alone, there is a whole galaxy out there to be away from everybody. But you need not be. Elite Dangerous is a multiplayer game too. You can easily group up with people and do the same stuff that you would do alone, with a group. Elite Dangerous not only shines by hitting home the loneliness and the vastness of the galaxy but also the joys of social interaction.</p><p>And although Elite Dangerous falls in the same genre as No Man&apos;s Sky and Star Citizen, it&apos;s very difficult to compare them. This is not like comparing the many generic AAA first-person shooters to each other. 
The above 3 games each have their own focus and are not trying to be like the others, and this is something gamers need to understand.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://sayakm.me/content/images/2021/10/359320_20210909014012_1.png" class="kg-image" alt="Elite Dangerous Will NOT Be a Dead Game by 2024" loading="lazy" width="1920" height="1080" srcset="https://sayakm.me/content/images/size/w600/2021/10/359320_20210909014012_1.png 600w, https://sayakm.me/content/images/size/w1000/2021/10/359320_20210909014012_1.png 1000w, https://sayakm.me/content/images/size/w1600/2021/10/359320_20210909014012_1.png 1600w, https://sayakm.me/content/images/2021/10/359320_20210909014012_1.png 1920w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">A world to come</span></figcaption></figure><h2 id="wheres-the-flaw">Where&apos;s the flaw?</h2><p>It has been said time and again that Elite Dangerous is repetitive and grindy. Have those people seen the gaming market? Do they think that playing the same game with different guns and maps is not repetitive? COD is not repetitive? Rocket League is not repetitive? I know I will get hundreds of people saying they are not but what is the difference? </p><p>Let me tell you about an experience I had with another MMO, Elder Scrolls Online. Initially I was having a great experience, doing quests and playing the story. Some people asked me if I actually read through NPC dialogues and I said &quot;Yes, of course!&quot;. Then once the story finished, I decided to complete the maps. This started a grind of doing all the side quests and dungeon runs, and this made me realize what those people were talking about. So, I will change my style. 
I will stop trying to complete the map and take on the story more.</p><p>Honestly, if someone says that a game was repetitive after playing it for over 1500 hours, I would suggest taking that opinion with a pinch of salt.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://sayakm.me/content/images/2021/10/359320_20210829192034_1.png" class="kg-image" alt="Elite Dangerous Will NOT Be a Dead Game by 2024" loading="lazy" width="1920" height="1080" srcset="https://sayakm.me/content/images/size/w600/2021/10/359320_20210829192034_1.png 600w, https://sayakm.me/content/images/size/w1000/2021/10/359320_20210829192034_1.png 1000w, https://sayakm.me/content/images/size/w1600/2021/10/359320_20210829192034_1.png 1600w, https://sayakm.me/content/images/2021/10/359320_20210829192034_1.png 1920w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">A world to come</span></figcaption></figure><h2 id="nothing-except-self-motivation">Nothing, except self motivation</h2><p>I have almost no problem with this segment, although it is written in a negative tone. So, let me reiterate it in a positive one.</p><p>Elite Dangerous is a galaxy-sized sandbox where the game doesn&apos;t railroad you into playing the way it wants. You play the game you want with the tools available to you. It focuses on realism and, like any realistic scenario, you are not the Hero. You are nothing in the beginning. Your purpose in Elite Dangerous is to create your own path, blaze your own trail and have a journey from nothing to influencing your own region of space and even the future of the galaxy!</p><h2 id="what-keeps-elite-dangerous-running">What keeps Elite Dangerous running</h2><p>I agree that the community plays a big part in shaping Elite Dangerous. The tools developed by the community have become an indispensable part of the playing experience (I am <a href="https://elitebgs.app/?ref=sayakm.me">one of them</a>, so I know). 
So have the many community groups focused on a particular activity, like the <a href="https://fuelrats.com/?ref=sayakm.me">Fuel Rats</a>. For that matter, I would like to see a game which doesn&apos;t require a community to survive.</p><p>The community is a huge part of the Elite Dangerous experience. You might be a solo player, but being a part of the community can greatly increase the satisfaction that you can draw from the game.</p><p>At this stage I would like to stress the fact that it would be unfair to compare Elite Dangerous to any random game which doesn&apos;t need a player to be a part of the community to enjoy it. If you don&apos;t want to be a part of the community, you will probably not enjoy Elite Dangerous for long. And that is fine! Just like if you don&apos;t like horror, you won&apos;t enjoy a horror game for long no matter how well it is made!</p><h2 id="elite-dangerous-demise">Elite Dangerous&apos; demise</h2><p>Demise? Seriously? The author makes some pretty wild and incorrect claims in this part. Let me go through them. EVA repairs in spacesuits, ship interiors, landable atmospheric planets and capturing other ships were ideas. They were never promised as such. They were discussed as part of their vision, and visions are never promises, because visions change over the course of time.</p><p>Moreover, ship interiors were never cancelled. 
They weren&apos;t promised in the first place!</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://sayakm.me/content/images/2021/10/359320_20210822012725_1.png" class="kg-image" alt="Elite Dangerous Will NOT Be a Dead Game by 2024" loading="lazy" width="1920" height="1080" srcset="https://sayakm.me/content/images/size/w600/2021/10/359320_20210822012725_1.png 600w, https://sayakm.me/content/images/size/w1000/2021/10/359320_20210822012725_1.png 1000w, https://sayakm.me/content/images/size/w1600/2021/10/359320_20210822012725_1.png 1600w, https://sayakm.me/content/images/2021/10/359320_20210822012725_1.png 1920w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">This is not the wreckage you are looking for</span></figcaption></figure><h2 id="half-baked-deliveries-everywhere">Half-baked deliveries everywhere</h2><p>This is a frequent complaint. And this I kind of agree with. Elite Dangerous has the tendency to release a core gameplay loop first and then improve upon it later. Would it have been better to release a gameplay loop fully finished? Maybe? But there is a risk that the community won&apos;t like the loop, all that effort will have been wasted, the developers will double down, and so on. I don&apos;t see a great outcome from this.</p><p>Live service gaming is here to stay and I personally don&apos;t hate this system. It keeps things fresh, or at least keeps our hopes up for fresh things in the future.</p><p>My feelings end at &quot;something is missing&quot; and I can&apos;t make a cohesive argument as to exactly what it is. I have heard people say that Elite Dangerous lacks &quot;inter-connectivity&quot; between different parts of the gameplay. But if someone asks me to explain, I won&apos;t be able to. As such, I keep my opinions to myself. Because it&apos;s possible that this &quot;feeling&quot; is just a result of the community repeating a sentiment and me subconsciously echoing it. 
Maybe the majority of the community just echoes this sentiment without understanding where it comes from.</p><h2 id="you-wouldnt-talk-you-wouldnt-listen">You wouldn&apos;t talk, you wouldn&apos;t listen</h2><p>Ahh! The age-old &quot;no communication&quot; argument. This rhetoric comes not only from the Elite Dangerous community but from all major gaming communities. It seems like every individual wants to be heard. They argue that since they have bought a game, they have the right to be heard. But guess what? Over the years I have seen people on both sides of an argument. Almost every time a new feature is released, people get divided on either side. This means that the side whose arguments are not implemented will feel that they have not been heard.</p><p>The author talks about the developers not playing the game. Well, they certainly don&apos;t play it for 10 hours a week because they are working on making it. But do they not play it at all? Do people really want me to believe that a developer doesn&apos;t use the software they have developed?</p><p>Since we are talking from a developer&apos;s perspective, let me put in another revelation. A developer need not be using the software they built regularly. That includes games. I might be developing a CRM suite, but at the end of the day, I have no need to use one in my day-to-day life.</p><p>Also, gamers need to understand that games are just software. Gamers are just another consumer of software.</p><p><em>Sigh, </em>I got sidetracked. As you might know, this part is a personal pet peeve of mine. Having the perfect amount of communication is next to impossible. The more you communicate, the more communication will be demanded. And any discussion of future ideas will immediately be converted into a promise. Case in point, the blog in question, as pointed out earlier.</p><p>I am not the kind of person who complains without knowing a way to fix the thing I am complaining about. 
If I don&apos;t know how to fix something, I find someone who does. The community managers of Elite Dangerous are doing more than ever to keep the community updated with the latest happenings and honestly I don&apos;t see what more could be done.</p><h2 id="can-elite-dangerous-be-saved">Can Elite Dangerous be saved?</h2><p>It doesn&apos;t need saving in the first place. Elite Dangerous was always a niche game. NMS and Star Citizen always got more hype and, as much as I would like to see Elite Dangerous become more popular, I can understand that its huge learning curve can be a struggle for some. Hence, Elite Dangerous has perpetually been a low-traffic (comparatively) game. </p><p>The Odyssey was disastrous. Nobody disagrees on that. But is that enough to kill the game? Not really. Player numbers have certainly dwindled, but that should not be a death blow. Many games have gone through rough patches and, up until Odyssey, the game was improving in many ways, networking being one of the major ones. I am pretty sure that Odyssey has left its worst days behind and the latest updates have been promising. But I am not holding this against the author, as the blog post in question was posted a couple of months ago.</p><p>As for the things the developers need to do, I cannot recollect anything that they have promised and not delivered. It&apos;s a very different thing if the author and the community assume some ideas to be promises, and if they do so, it&apos;s none of the developers&apos; fault. And if I understand this right, if the developers don&apos;t deliver these made-up promises, they won&apos;t gain any trust. Well, that&apos;s a strange situation to be in.</p><p>Now, regarding the technicalities. I don&apos;t know how the author assumes anything about the COBRA engine and its capabilities, but it&apos;s correct that bad tools can cause severe headaches for the developers. But can developers just ignore a requirement because it would be a pain to develop? 
I don&apos;t understand how the author makes this assumption. Since the author calls themselves a computer scientist, I presume that wherever they work they get this kind of flexibility. I have not seen something like that happen in my immediate work network.</p><p>Regarding the concept of generating a minimum viable product (MVP): honestly, it seems like people have not seen what an MVP looks like. This is much more complete than any MVP will ever be. I just wish people would stop echoing the same rhetoric without having any understanding of what an MVP is.</p><p>Also, a note to the author. Elite Dangerous is <u>not</u> your game. In this &quot;purely capitalist&quot; world, it is the game of Frontier Developments. And no manager would want to lose out on a profit margin, because no one likes to lose their job. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://sayakm.me/content/images/2021/10/359320_20210520162949_1.png" class="kg-image" alt="Elite Dangerous Will NOT Be a Dead Game by 2024" loading="lazy" width="1920" height="1080" srcset="https://sayakm.me/content/images/size/w600/2021/10/359320_20210520162949_1.png 600w, https://sayakm.me/content/images/size/w1000/2021/10/359320_20210520162949_1.png 1000w, https://sayakm.me/content/images/size/w1600/2021/10/359320_20210520162949_1.png 1600w, https://sayakm.me/content/images/2021/10/359320_20210520162949_1.png 1920w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">I don&apos;t look this depressed in real life I swear</span></figcaption></figure><h2 id="my-personal-thoughts-on-elite-dangerous">My personal thoughts on Elite Dangerous</h2><p>I started playing Elite Dangerous in summer 2016. To date, I have around 1850 hours in it. It&apos;s a small number considering I have played for over 5 years, the major reason being that I took things slow. I didn&apos;t rush to my Anaconda. I got my Imperial Clipper and Federal Corvette a few months back. 
I didn&apos;t play Elite Dangerous like my life depended on it. Sometimes I would play it a lot and get close to burnout, and every time I would take a break. The author mentions a love-hate relationship with Elite Dangerous. For me, my relationship with Elite Dangerous has been more of a healthy one. I gave the game space and in return the game stayed enjoyable.</p><p>Elite Dangerous is not your average FPS and it is best not to treat it as such. Get engaged with the community, take breaks regularly and try the various things that the game has to offer.</p><p>I don&apos;t consider myself a gamer. And if I can enjoy Elite Dangerous, so can you.</p><p>Fly safe. o7</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://sayakm.me/content/images/2021/10/359320_20210809232217_1.png" class="kg-image" alt="Elite Dangerous Will NOT Be a Dead Game by 2024" loading="lazy" width="1920" height="1080" srcset="https://sayakm.me/content/images/size/w600/2021/10/359320_20210809232217_1.png 600w, https://sayakm.me/content/images/size/w1000/2021/10/359320_20210809232217_1.png 1000w, https://sayakm.me/content/images/size/w1600/2021/10/359320_20210809232217_1.png 1600w, https://sayakm.me/content/images/2021/10/359320_20210809232217_1.png 1920w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">o7</span></figcaption></figure>]]></content:encoded></item><item><title><![CDATA[Banking applications, please listen to your security experts!]]></title><description><![CDATA[<p>Honestly, I have had it with Banking applications and their God-awful &quot;security&quot; measures. It&apos;s almost as if they <em>want</em> malicious actors to get access to their customers&apos; accounts. 
And it&apos;s not just any one bank in particular; I have seen the same thing over and</p>]]></description><link>https://sayakm.me/banking-applications-please-listen-to-your-security-experts/</link><guid isPermaLink="false">65d0b550bc9457aff61c8ec7</guid><category><![CDATA[Rant]]></category><category><![CDATA[Security]]></category><dc:creator><![CDATA[Sayak Mukhopadhyay]]></dc:creator><pubDate>Sat, 09 Oct 2021 16:33:34 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1565515636369-57f6e9f5fe79?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDUxfHw1MDAlMjBydXBlZSUyMGJhbmtub3RlJTIwcnVwaXhlbi5jb218ZW58MHx8fHwxNzA4MTk1ODY1fDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1565515636369-57f6e9f5fe79?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDUxfHw1MDAlMjBydXBlZSUyMGJhbmtub3RlJTIwcnVwaXhlbi5jb218ZW58MHx8fHwxNzA4MTk1ODY1fDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Banking applications, please listen to your security experts!"><p>Honestly, I have had it with Banking applications and their God-awful &quot;security&quot; measures. It&apos;s almost as if they <em>want</em> malicious actors to get access to their customers&apos; accounts. And it&apos;s not just any one bank in particular; I have seen the same thing over and over again with multiple ones, at least those in India. I can&apos;t comment on other international banks, but I have a hunch they would be pretty much the same.</p><p>Now, where should I start? 
How about something that not only banks but even big-name tech companies are guilty of?</p><h2 id="expiring-passwords">Expiring passwords</h2><p>Microsoft <a href="https://docs.microsoft.com/en-gb/archive/blogs/secguide/security-baseline-final-for-windows-10-v1903-and-windows-server-v1903?ref=sayakm.me">released</a> a lengthy explainer on why password expiration is a bad practice (scroll down a bit in the link). In short:</p><ol><li>When humans are forced to change their passwords, too often they&#x2019;ll make a small and predictable alteration to their existing passwords, and/or forget their new passwords.</li><li>Periodic password expiration is a defense <em>only</em> against the probability that a password (or hash) will be stolen during its validity interval and will be used by an unauthorized entity. If a password is never stolen, there&#x2019;s no need to expire it. And if you have evidence that a password has been stolen, you would presumably act immediately rather than wait for expiration to fix the problem.</li><li>What should the recommended expiration period be? If an organization has successfully implemented banned-password lists, multi-factor authentication, detection of password-guessing attacks, and detection of anomalous logon attempts, do they need <em>any</em> periodic password expiration? And if they haven&#x2019;t implemented modern mitigations, how much protection will they <em>really</em> gain from password expiration?</li></ol><p>The above points are directly copied from the linked article. I suggest everyone have a read through it. Password expiry is an archaic practice, and I have no idea why anyone came up with it. It gives a false sense of security to organizations and causes unnecessary friction for end users. 
Just STOP.</p><figure class="kg-card kg-embed-card"><iframe src="https://tenor.com/embed/5001372" width="600" height="400" frameborder="0"></iframe></figure><h2 id="disabling-right-clickcontext-menu">Disabling right click/context menu</h2><p><strong>Me:</strong> Okayyyyy, so I have an extremely complex password that has been generated by a Hardware Random Number Generator, which is stored in an encrypted vault that I need biometric access to, and I will copy the password since there is no way to remember it and paste it onto the password fie...</p><p><strong>Banking Portal:</strong> NOOO! You can&apos;t go around pasting text into the password field. What if, what if a <em>hacker </em>ummm does something. Yeah, like uh, copy your password from this field, yeahh, that&apos;s it!</p><p><strong>Me:</strong> But I have already copied it onto my clipboard. If someone is sniffing my clipboard, then it&apos;s already gone and it&apos;s my fault for not protecting my system. Moreover, why disable pastes? What will a &quot;hacker&quot; do by pasting on behalf of me?</p><p><strong>Banking Portal:</strong> Rrreeeeee...</p><p>Thankfully, the above is not a conversation I had to have in real life. Why on earth anyone would disable context menus in login forms is beyond me. What kind of attack surface are they trying to protect? Is it worth it to prevent a user from using a complex password and push them towards &quot;Name@123&quot;? Is the attack surface caused by bad passwords smaller than the one supposedly being protected here? I am lost on this. I dare not even google it. If anyone has any insights, please tweet me...</p><h2 id="sending-otp-over-sms">Sending OTP over SMS</h2><p>When this one is looked at in conjunction with the others, it makes the whole thing criminal negligence. Sending OTPs over SMS is an extremely insecure practice. Full Stop. 
You might have all the security in the world, but if the OTP is jacked (and there are ways it can be), you will instantly lose access to your account. I am yet to see a single bank implementing any reliable 2FA of any sort. Organisations need to realise how insecure this method is and stop implementing it on new projects, especially for such security-critical tasks.</p><h2 id="frequent-session-expiration">Frequent session expiration</h2><p>Ok, so this one is not the fault of the banks. And it might be a bit of a controversial one too. This is because of a <a href="https://www.pcisecuritystandards.org/document_library?category=pcidss&amp;document=pci_dss&amp;ref=sayakm.me">certain requirement</a> of PCI DSS (Payment Card Industry Data Security Standard). Requirement 8.1.8 states that:</p><blockquote>If  a session has been idle for more than 15 minutes, require the user to re-authenticate to re-activate the terminal or session.</blockquote><p>Now, I get it, I get it. The intention of this requirement is to ensure that someone else isn&apos;t able to access the account from the same system when the user walks away. It is even mentioned in the above doc:</p><blockquote>When users walk away from an open machine with access to critical system components or cardholder data, that machine may be  used by others in the user&#x2019;s absence, resulting in unauthorized account access and/or misuse. </blockquote><p>Isn&apos;t there a discrepancy between the intention and the implementation? The intention surely applies when the user &quot;walks away&quot; or isn&apos;t in front of the system. So, why does the session still time out when I am looking at my bank balance and dwelling on my pain? Should switching to a different tab be considered &quot;walking away&quot;? Should banks start monitoring users via a webcam to see if the user is &quot;walking away&quot;? In my opinion, this is a poorly worded and an even more poorly implemented requirement. 
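</p><p>To make the discrepancy concrete, the pattern this requirement nudges sites toward looks roughly like the sketch below: reset a timer on explicit user actions, log the user out when it fires. All names here are mine, not taken from any real banking app:</p>

```javascript
// Sketch of the usual client-side idle-timeout pattern. The clock is
// injectable so the logic can be exercised without real timers or a browser.
function createIdleTimer({ timeoutMs = 15 * 60 * 1000, onExpire, now = Date.now }) {
  let lastActivity = now();
  let expired = false;

  return {
    // Typically wired to click/keypress handlers (or, better, mousemove)
    recordActivity() {
      if (!expired) lastActivity = now();
    },
    // Typically polled from setInterval; fires onExpire once when idle too long
    tick() {
      if (!expired && now() - lastActivity >= timeoutMs) {
        expired = true;
        onExpire(); // e.g. drop the session token and redirect to login
      }
      return expired;
    },
  };
}

// Simulate a user who clicks once and then just reads the page
let t = 0;
let loggedOut = false;
const timer = createIdleTimer({ timeoutMs: 1000, onExpire: () => { loggedOut = true; }, now: () => t });

t = 900;  timer.recordActivity(); // a click at 900ms resets the idle clock
t = 1800; timer.tick();           // only 900ms idle: still logged in
t = 1901; timer.tick();           // 1001ms idle: session expires
console.log(loggedOut);           // → true
```

<p>Nothing in that sketch distinguishes &quot;walked away&quot; from &quot;reading the balance&quot;, which is exactly the problem: the timer only knows about the events it happens to be wired to.</p><p>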
Because the only way to keep the session running is to click something (which I guess sends a request to the backend that refreshes the stay-alive lifetime) or to click on the annoying popup which asks me if I am there 1 minute before the session expires. Here is a better (allegedly, in my head) idea that I cooked up in a couple of seconds: how about sending a stay-alive pulse on mouse movement? That won&apos;t solve my staring at my transactions while I reconsider my life choices, but it should provide a much better experience for most other users. All in all, this needs better implementation and I wish OS developers and web developers came together to find a better way to implement this.</p><p>In the end, these are some of the great UX problems in banking applications that banks just parade around as &quot;security features&quot;. They are not! I am pretty sure that this is just the tip of the iceberg, as there is a vast ocean of banking applications that I certainly haven&apos;t used. Let me know via a tweet if you have faced something similar or anything new.</p>]]></content:encoded></item><item><title><![CDATA[I tried using ES Modules, I really did...]]></title><description><![CDATA[<p>I am an early adopter of technology, as long as I don&apos;t have to pay, and what better way to do that than using the latest and the greatest of the frameworks or tools while developing. 
This is my experience of trying to use ES Modules in my TypeScript</p>]]></description><link>https://sayakm.me/i-tried-using-esmodules-i-really-did/</link><guid isPermaLink="false">65d0b550bc9457aff61c8ec8</guid><category><![CDATA[Programming]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[JavaScript]]></category><dc:creator><![CDATA[Sayak Mukhopadhyay]]></dc:creator><pubDate>Tue, 05 Oct 2021 17:20:18 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1555949963-ff9fe0c870eb?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fHdlYmRlc2lnbiUyMGN5YmVyc3BhY2UlMjBzdHJlYW0lMjBzdHJpbmclMjBjbG9zZXVwJTIwY29kZXIlMjBjcnlwdGljJTIwbnVtYmVycyUyMEtleWJvYXJkJTIwYmFja2dyb3VuZHMlMjBkaWdpdHMlMjBkZXZlbG9wbWVudHxlbnwwfHx8fDE3MDgxOTYwOTl8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1555949963-ff9fe0c870eb?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fHdlYmRlc2lnbiUyMGN5YmVyc3BhY2UlMjBzdHJlYW0lMjBzdHJpbmclMjBjbG9zZXVwJTIwY29kZXIlMjBjcnlwdGljJTIwbnVtYmVycyUyMEtleWJvYXJkJTIwYmFja2dyb3VuZHMlMjBkaWdpdHMlMjBkZXZlbG9wbWVudHxlbnwwfHx8fDE3MDgxOTYwOTl8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="I tried using ES Modules, I really did..."><p>I am an early adopter of technology, as long as I don&apos;t have to pay, and what better way to do that than using the latest and the greatest of the frameworks or tools while developing. This is my experience of trying to use ES Modules in my TypeScript project and my thoughts on it.</p><p>I am working on a library to scaffold discord bots (I maintain a couple of them and a scaffold would decrease a lot of the redundant work), and I use TypeScript for that. 
Now, I started my project using <a href="https://tsdx.io/?ref=sayakm.me">tsdx</a> but I soon <a href="https://github.com/jaredpalmer/tsdx/issues/1065?ref=sayakm.me">found out</a> that it is no longer maintained. So, instead of using a fork, I decided to roll my own TypeScript starter, for myself and for the community. Enter <a href="https://github.com/SayakMukhopadhyay/ts-boot?ref=sayakm.me">ts-boot</a>, which is still a WIP (contributions welcome).</p><p>Since I had a clean slate in front of me, I was thinking of using the latest standards, and what could be newer than using ES Modules instead of CommonJS? TypeScript with ES2020 targeting Node 14+ and ES Modules would be bleeding edge and a dream to develop in, right?</p><p>Wrong!</p><p>Now before the internet people start judging me, I should state that this is not my first rodeo in TypeScript. I have a love-hate relationship with TypeScript. I keep coming back to TypeScript only to get frustrated again. I love typed languages, yet the disjoint way that most libraries have their code and typings separated (via <a href="https://github.com/DefinitelyTyped/DefinitelyTyped?ref=sayakm.me">DefinitelyTyped</a>) makes things more cumbersome than they need to be. Quite a few times I have faced the issue wherein the <code>@types</code> package has not been updated for a package that has been. This causes friction that I could have avoided if I had just stuck with JavaScript. But those sweet sweet types brought me back again and again.</p><p>Now don&apos;t get me wrong. I hardly face any issues nowadays. Many large packages provide their own types, which makes things much smoother, and for those that still have broken types, I can use <code>any</code> casting to force things to work. Now you might ask why I don&apos;t contribute by fixing the types. I do. But working with the DefinitelyTyped monorepo with its bazillion files is not exactly a great experience. 
I can do a bare clone in git but still, it&apos;s not exactly a great experience.</p><p>So, knowing all these pains, I still decided to not only use TypeScript but also use ES Modules in my latest project. This is what my <code>package.json</code> and <code>tsconfig.json</code> look like:</p><figure class="kg-card kg-code-card"><pre><code class="language-json">{
  &quot;name&quot;: &quot;ts-boot&quot;,
  &quot;version&quot;: &quot;0.0.1&quot;,
  &quot;description&quot;: &quot;Typescript project bootstrapper&quot;,
  &quot;keywords&quot;: [
    &quot;typescript&quot;,
    &quot;boot&quot;
  ],
  &quot;type&quot;: &quot;module&quot;,
  &quot;homepage&quot;: &quot;https://github.com/SayakMukhopadhyay/ts-boot#readme&quot;,
  &quot;bugs&quot;: {
    &quot;url&quot;: &quot;https://github.com/SayakMukhopadhyay/ts-boot/issues&quot;
  },
  &quot;repository&quot;: {
    &quot;type&quot;: &quot;git&quot;,
    &quot;url&quot;: &quot;git+https://github.com/SayakMukhopadhyay/ts-boot.git&quot;
  },
  &quot;license&quot;: &quot;Apache-2.0&quot;,
  &quot;author&quot;: &quot;Sayak Mukhopadhyay &lt;mukhopadhyaysayak@gmail.com&gt;&quot;,
  &quot;main&quot;: &quot;index.js&quot;,
  &quot;bin&quot;: {
    &quot;tsboot&quot;: &quot;./dist/index.js&quot;
  },
  &quot;files&quot;: [
    &quot;dist&quot;,
    &quot;templates&quot;
  ],
  &quot;scripts&quot;: {
    &quot;start:create&quot;: &quot;ts-node src/index.ts create myproject&quot;,
    &quot;start:create:basic&quot;: &quot;ts-node src/index.ts create myproject -t basic&quot;,
    &quot;start:build&quot;: &quot;ts-node src/index.ts build lmao&quot;,
    &quot;build&quot;: &quot;tsc&quot;,
    &quot;build:watch&quot;: &quot;tsc --watch&quot;,
    &quot;prepare&quot;: &quot;husky install&quot;,
    &quot;lint&quot;: &quot;eslint \&quot;{src,templates,test}/**/*.ts\&quot;&quot;,
    &quot;lint:fix&quot;: &quot;eslint \&quot;{src,templates,test}/**/*.ts\&quot; --fix&quot;,
    &quot;test&quot;: &quot;jest&quot;,
    &quot;test:watch&quot;: &quot;jest --watch&quot;,
    &quot;test:cov&quot;: &quot;jest --coverage&quot;,
    &quot;test:debug&quot;: &quot;node --inspect-brk -r tsconfig-paths/register -r ts-node/register node_modules/.bin/jest --runInBand&quot;,
    &quot;test:e2e&quot;: &quot;jest --config ./test/jest-e2e.json&quot;
  },
  &quot;dependencies&quot;: {
    &quot;commander&quot;: &quot;^8.2.0&quot;,
    &quot;fs-extra&quot;: &quot;^10.0.0&quot;,
    &quot;inquirer&quot;: &quot;^8.1.5&quot;,
    &quot;mustache&quot;: &quot;^4.2.0&quot;,
    &quot;typescript&quot;: &quot;^4.4.3&quot;
  },
  &quot;engines&quot;: {
    &quot;node&quot;: &quot;&gt;=14&quot;
  },
  &quot;devDependencies&quot;: {
    &quot;@types/fs-extra&quot;: &quot;^9.0.13&quot;,
    &quot;@types/inquirer&quot;: &quot;^8.1.3&quot;,
    &quot;@types/jest&quot;: &quot;^27.0.2&quot;,
    &quot;@types/mustache&quot;: &quot;^4.1.2&quot;,
    &quot;@typescript-eslint/eslint-plugin&quot;: &quot;^4.32.0&quot;,
    &quot;@typescript-eslint/parser&quot;: &quot;^4.32.0&quot;,
    &quot;eslint&quot;: &quot;^7.32.0&quot;,
    &quot;eslint-config-prettier&quot;: &quot;^8.3.0&quot;,
    &quot;eslint-plugin-prettier&quot;: &quot;^4.0.0&quot;,
    &quot;husky&quot;: &quot;^7.0.2&quot;,
    &quot;jest&quot;: &quot;^27.2.4&quot;,
    &quot;prettier&quot;: &quot;^2.4.1&quot;,
    &quot;ts-jest&quot;: &quot;^27.0.5&quot;,
    &quot;ts-node&quot;: &quot;^10.2.1&quot;
  }
}</code></pre><figcaption><p><span style="white-space: pre-wrap;">package.json</span></p></figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-json">{
  &quot;include&quot;: [&quot;src/**/*&quot;],
  &quot;compilerOptions&quot;: {
    // type checking
    &quot;exactOptionalPropertyTypes&quot;: true,
    &quot;noFallthroughCasesInSwitch&quot;: true,
    &quot;noImplicitOverride&quot;: true,
    &quot;noImplicitReturns&quot;: true,
    &quot;noPropertyAccessFromIndexSignature&quot;: true,
    &quot;noUnusedLocals&quot;: true,
    &quot;noUnusedParameters&quot;: true,
    &quot;strict&quot;: true,
    // module
    &quot;module&quot;: &quot;ES2020&quot;,
    &quot;moduleResolution&quot;: &quot;node&quot;,
    &quot;rootDir&quot;: &quot;src&quot;,
    &quot;resolveJsonModule&quot;: true,
    // emit
    &quot;outDir&quot;: &quot;dist&quot;,
    // interop constraints
    &quot;allowSyntheticDefaultImports&quot;: true,
    &quot;forceConsistentCasingInFileNames&quot;: true,
    // language and environment
    &quot;lib&quot;: [&quot;ES2020&quot;],
    &quot;target&quot;: &quot;ES2020&quot;,
    // completeness
    &quot;skipLibCheck&quot;: true
  }
}
</code></pre><figcaption><p><span style="white-space: pre-wrap;">tsconfig.json</span></p></figcaption></figure><p>My problems started immediately. I was importing from <code>fs-extra</code> as such:</p><figure class="kg-card kg-code-card"><pre><code class="language-typescript">import { copySync, readdirSync, readFile, readFileSync, writeFile } from &apos;fs-extra&apos;;
import { render } from &apos;mustache&apos;;
...</code></pre><figcaption><p><span style="white-space: pre-wrap;">src/commands.ts</span></p></figcaption></figure><p>This immediately started failing. The reason is that I need to write my imports exactly the way I want them to be in the compiled JavaScript files. So, the correct way to do it would be:</p><figure class="kg-card kg-code-card"><pre><code class="language-typescript">import mustache from &apos;mustache&apos;;
import fs from &apos;fs-extra&apos;;

const { copySync, readdirSync, readFile, readFileSync, writeFile } = fs;
const { render } = mustache;</code></pre><figcaption><p><span style="white-space: pre-wrap;">src/commands.ts</span></p></figcaption></figure><p>Moreover, when importing another module, one needs to use the <code>.js</code> extension. For example:</p><figure class="kg-card kg-code-card"><pre><code class="language-typescript">import { Commands } from &apos;./commands.js&apos;;</code></pre><figcaption><p><span style="white-space: pre-wrap;">src/index.ts</span></p></figcaption></figure><p>This makes my skin crawl. JavaScript is already a meme and this doesn&apos;t help the JS/TS ecosystem in any way. Moreover, <code>.eslintrc.js</code> files no longer work because ESLint still uses CommonJS to load them, and since we are using <code>&quot;type&quot;: &quot;module&quot;</code> in our <code>package.json</code>, the whole project is now ES Module&apos;d. And although ES Module projects can call CommonJS modules, the other way round is not possible. So, you either gotta rename the ESLint config to <code>.eslintrc.cjs</code> or change the format to JSON. Similar actions may need to be taken for other libraries that use a JS config file.</p><p>It was OK till now, but the final nail in the coffin came when I found out that Jest would not work. No matter whether I used a JSON or a CJS file, my tests wouldn&apos;t run at all and would give a module error. I was planning to use <a href="https://github.com/kulshekhar/ts-jest?ref=sayakm.me">ts-jest</a> and my configuration looked like:</p><figure class="kg-card kg-code-card"><pre><code class="language-javascript">module.exports = {
  preset: &apos;ts-jest&apos;,
  testEnvironment: &apos;node&apos;
};
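// ts-jest&apos;s docs do describe an ESM setup — roughly preset: &apos;ts-jest/presets/default-esm&apos;,
// extensionsToTreatAsEsm: [&apos;.ts&apos;], and running Jest with
// NODE_OPTIONS=--experimental-vm-modules — which might have been the piece I was missing.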
</code></pre><figcaption><p><span style="white-space: pre-wrap;">jest.config.js</span></p></figcaption></figure><p>Now, I don&apos;t know if I missed something or was messing something up on my part, but I had already spent around 2 days scaffolding the library that would scaffold a TypeScript project so that I could scaffold my project, which scaffolds a Discord project. I threw my arms in the air, changed the config to use CommonJS and started writing this blog. Maybe I will have an easier time in a year or so.</p><p>Look forward to more posts from me regarding my experiences in working with my projects.</p><p>This blog post is not meant to be a tutorial on how to (or how not to) use ES Modules with TypeScript. It&apos;s meant to document my experiences; take it as such. But it might resonate with someone who has gone through a similar situation.</p>]]></content:encoded></item><item><title><![CDATA[Start of a new blog]]></title><description><![CDATA[<p>Well, so here I am. After working professionally for over 5 years and 4 more as an engineer, I have finally decided to put in some effort into maintaining a blog.
I have been mulling over this for quite a while and finally bit the bullet.</p><p>I have tried starting</p>]]></description><link>https://sayakm.me/start-of-a-new-blog/</link><guid isPermaLink="false">65d0b550bc9457aff61c8ec6</guid><category><![CDATA[Getting Started]]></category><dc:creator><![CDATA[Sayak Mukhopadhyay]]></dc:creator><pubDate>Fri, 01 Oct 2021 08:47:46 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1505682634904-d7c8d95cdc50?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDJ8fHR5cGV3cml0ZXIlMjBwaWN0fGVufDB8fHx8MTcwODE5NjE4NXww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1505682634904-d7c8d95cdc50?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDJ8fHR5cGV3cml0ZXIlMjBwaWN0fGVufDB8fHx8MTcwODE5NjE4NXww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Start of a new blog"><p>Well, so here I am. After working professionally for over 5 years and 4 more as an engineer, I have finally decided to put in some effort into maintaining a blog. I have been mulling over this for quite a while and finally bit the bullet.</p><p>I have tried starting a blog multiple times. Most of the time, I focused on writing one type of blog, like a software development blog, but I soon realised that I wanted to write more of my thoughts than just programming tips. So, I purchased a non-bloggy domain and set up another blog. Yeah, I have another blog at <a href="https://cmdrgarud.blog/?ref=sayakm.me">https://cmdrgarud.blog/</a>. It&apos;s a bit of a role-playing blog, so it might not be for everyone.</p><p>Also, I had to choose a CMS for writing my blog. Initially I had pretty much decided to go with WordPress, but my years of great experience with <a href="https://ghost.org/?ref=sayakm.me">Ghost</a> on my other blog made me make the switch.
Things are much simpler, and it is easier for me to focus on the content instead of making endless tweaks as I did with WordPress. Give Ghost a try if you are looking for a no-frills CMS.</p><p>I think I have rambled along for long enough. It&apos;s time for me to get back to work. Do look forward to more blog posts. Until then, o7.</p>]]></content:encoded></item></channel></rss>