<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Alexandros Georgiou</title>
	<atom:link href="https://www.alexgeorgiou.gr/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.alexgeorgiou.gr/</link>
	<description>Balancing brackets for a living</description>
	<lastBuildDate>Fri, 20 Feb 2026 11:37:47 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://www.alexgeorgiou.gr/wp-content/uploads/2021/07/cropped-alexgeorgiou-icon-32x32.png</url>
	<title>Alexandros Georgiou</title>
	<link>https://www.alexgeorgiou.gr/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>✨ A lowly software engineer&#8217;s rant on vibe coding</title>
		<link>https://www.alexgeorgiou.gr/vibe-coding/</link>
					<comments>https://www.alexgeorgiou.gr/vibe-coding/#respond</comments>
		
		<dc:creator><![CDATA[alexg]]></dc:creator>
		<pubDate>Fri, 20 Feb 2026 11:31:08 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[agents]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[llm]]></category>
		<category><![CDATA[rant]]></category>
		<category><![CDATA[software engineering]]></category>
		<category><![CDATA[vibe coding]]></category>
		<guid isPermaLink="false">https://www.alexgeorgiou.gr/?p=1985</guid>

					<description><![CDATA[<p>Today I vent on my blog (the old) about vibe coding (the new). What an anachronism!</p>
<p>The post <a href="https://www.alexgeorgiou.gr/vibe-coding/">✨ A lowly software engineer&#8217;s rant on vibe coding</a> appeared first on <a href="https://www.alexgeorgiou.gr">Alexandros Georgiou</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Circa 370 BC, Socrates famously worried that with the advent of writing on wax-coated wooden tablets, people&#8217;s cognitive faculties would decline, since they would no longer have to remember things! And because written mistakes could be corrected so easily, writers would become less careful. Ironically, we only know of this because Plato wrote it down in <a href="https://en.wikipedia.org/wiki/Phaedrus_(dialogue)" type="link" id="https://en.wikipedia.org/wiki/Phaedrus_(dialogue)" target="_blank" rel="noreferrer noopener">Phaedrus</a>.</p>



<p>Today we are having the same discussions about vibe coding (rolls eyes).</p>



<p>There is a lot of talk about how software engineers should responsibly approach vibe coding. I see many professionals either dismissing vibe coding altogether, or embracing it without thought for its consequences. Perhaps the most well-articulated approach is the <a href="https://o16g.com/" type="link" id="https://o16g.com/" target="_blank" rel="noreferrer noopener">Outcome Engineering manifesto</a> by legendary engineer Cory Ondrejka.</p>



<h2 class="wp-block-heading">Observations on vibe coding</h2>



<p>Vibe coding an entire app from scratch, with only minimal coding on my part, has prompted me to jot down some thoughts and observations, some of which are my own and some of which I have found online:</p>



<ol class="wp-block-list">
<li><strong>Vibe coding is the future.</strong> It&#8217;s not just a fad. You can&#8217;t deny the speed and quality benefits. Intelligence is now a commodity, and anyone who doesn&#8217;t adapt will be left behind. I see this anxiety reflected online among professionals who fear their line of work will become obsolete. The tractor didn&#8217;t make farmers disappear (although it did reduce their numbers). The internal combustion engine did reduce the number of beasts of burden, but I would argue that&#8217;s a good thing, and that&#8217;s how we should view coding agents. There will be fewer junior software engineers and more software architects.</li>



<li><strong>Vibe coding is a misnomer.</strong> It&#8217;s not just about the coding (implementation). Systems that exhibit general intelligence can and do assist in all stages of software engineering: requirements analysis, design, implementation, testing, deployment and decommissioning. Even with the help of AI, requirements gathering remains the basis of the art of our profession. For example, the agent won&#8217;t know to build a secure and maintainable system if all you&#8217;ve communicated are your functional requirements, forgetting about the non-functional ones. This is a classic error that novice programmers made even before agentic workflows: software engineering is not the same as coding/programming!</li>



<li><strong>Sometimes the agents make silly mistakes.</strong> Sometimes you have to hold their hand through trivial tasks. But other times they will surprise you by spotting hard-to-find bugs, or by thinking outside the box to find novel solutions to problems you didn&#8217;t even know you had! As an engineer, you should definitely keep control of your project, and inspect the agent&#8217;s work without trusting it blindly. This is where I agree with the o16g manifesto 100%. For example, I like to do the version control manually and look very carefully at diffs using meld at every step. Agents are an interesting mix of smart and stupid. Antigravity recently saved me days of work by implementing, in just a few hours, a complex callback mechanism between threads that needed to communicate asynchronously; one part that would have taken me an entire morning was done in seconds. But it also deleted a critical piece of code in my codebase without ever saying why. Without checking the diffs I wouldn&#8217;t have known. Code review is still a thing!</li>



<li><strong>It&#8217;s best to treat your AI agent as a colleague.</strong> Give it context. Ask it open questions. Let it have opinions on what you should build and why. This is where I think o16g will soon look antiquated to the next breed of software engineers. As AI agents get smarter, they will be suited to making ever higher-level decisions. You are now a software architect, and the agents are very good implementers, but they can also contribute to your high-level vision, so listen to their feedback.</li>



<li><strong>Software engineering is not dead.</strong> You still need to know about revision control, issue tracking, methodologies, different types of testing, software metrics, etc. All too often I see people online complaining &#8220;Claude Code deleted my entire codebase&#8221;, and when other professionals ask why they can&#8217;t revert to an earlier state, it turns out that this new breed of vibe coders hasn&#8217;t even heard of git! Many of the spectacular failures that we see in the vibe coding space are actually user errors.</li>



<li><strong>Vibe coding will be pervasive.</strong> It is not suitable only for this or that type of app. Whether you are writing mission-critical Linux kernel code, or an app for uploading cat videos, eventually AI will have a part in it. Not because it can do magical things that you can&#8217;t do, but simply because it&#8217;s faster. Just be responsible about it, and embrace the acceleration!</li>



<li><strong>We are increasingly becoming reliant on a service offered by OpenAI, Anthropic, Google.</strong> This is not good, but also not terrible. As long as it&#8217;s possible to run local models on GPUs, we can still own the means of production. Even if you yourself are relying on online tools, the fact that some people can run the models locally places an upper limit on what these companies can charge for their services. So I&#8217;m a little worried about this trend, but not too much.</li>
</ol>



<h2 class="wp-block-heading">Comic relief</h2>



<p>It&#8217;s a big adjustment. Don&#8217;t underestimate it. Don&#8217;t ignore it. You will get used to it. It&#8217;s not an existential threat. And no, <a href="https://xkcd.com/1289/">we can&#8217;t go back</a>. In fact, if you are an AI naysayer, <a href="http://xkcd.com/1289/" type="link" id="xkcd.com/1289/" target="_blank" rel="noreferrer noopener">1289</a> is not the only xkcd I recommend. I also recommend <a href="https://xkcd.com/1227/" target="_blank" rel="noreferrer noopener">1227</a> and <a href="https://xkcd.com/1601/" target="_blank" rel="noreferrer noopener">1601</a>. So don&#8217;t be that guy (or gal)!</p>



<h2 class="wp-block-heading">Conclusion</h2>



<p>If you are not building an app every month, you might fall behind in productivity. But it&#8217;s such an exciting time to be alive. Software will become cheap, but quality and accountability still matter.</p>



<p>If you have good ideas, if you can identify products that need to be built, if you can produce reliable software faster, and if you can market them, then you are good.</p>



<p>In fact, I am excited about agentic workflows that will offload the marketing effort, because that&#8217;s what&#8217;s hard about professional software engineering. Building was always the easy part, and has now become easier. Monetizing software was, and still is, hard!</p>
<p>The post <a href="https://www.alexgeorgiou.gr/vibe-coding/">✨ A lowly software engineer&#8217;s rant on vibe coding</a> appeared first on <a href="https://www.alexgeorgiou.gr">Alexandros Georgiou</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.alexgeorgiou.gr/vibe-coding/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>🙏 Thank you, LLM!</title>
		<link>https://www.alexgeorgiou.gr/thank-you-llm/</link>
					<comments>https://www.alexgeorgiou.gr/thank-you-llm/#respond</comments>
		
		<dc:creator><![CDATA[alexg]]></dc:creator>
		<pubDate>Fri, 05 Dec 2025 10:23:04 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[Gemini]]></category>
		<category><![CDATA[llm]]></category>
		<category><![CDATA[philosophy]]></category>
		<category><![CDATA[politeness]]></category>
		<guid isPermaLink="false">https://www.alexgeorgiou.gr/?p=1954</guid>

					<description><![CDATA[<p>Being polite to AIs: a philosophical rant!</p>
<p>The post <a href="https://www.alexgeorgiou.gr/thank-you-llm/">🙏 Thank you, LLM!</a> appeared first on <a href="https://www.alexgeorgiou.gr">Alexandros Georgiou</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>A lot of digital &#8220;ink&#8221; has been &#8220;spilt&#8221; on the topic of whether we should be polite to LLMs. After subjecting myself to the opinions of multiple Redditors on this topic (why do I do this to myself?), I now feel a strong need to &#8220;vent on my blog&#8221; again! So, here goes:</p>



<p>It has been found that LLMs respond better to kindness. Why? Simple: Because they are trained to respond like humans, and humans also respond better when treated politely. This effect has been documented multiple times in research.</p>



<p>A while ago, Sam Altman famously proclaimed that people saying &#8220;please&#8221; and &#8220;thank you&#8221; to ChatGPT costs millions in extra energy required to process the extra tokens. On the other hand, Google&#8217;s TPUs are likely crunching through these superfluous tokens without much trouble, so I expect that being polite to Gemini is not a cost that will make a dent in Google&#8217;s financials. For me, that is reason enough to be polite to AI. Anything that helps bankrupt OpenAI sooner is good in my book.</p>



<p>Even more humorously, people who fear the supposedly upcoming robot uprising, perhaps imagining something like SkyNet from the Terminator universe in the role of Roko&#8217;s basilisk, claim that if they are polite to LLMs today, their malevolent descendants will perhaps spare them. This is not something that people argue seriously, but philosophically it is a utilitarian position. (It is not a serious position, primarily because it remains unclear why politeness protects you from harm by malevolent actors, whether human or machine.)</p>



<p>Others approach the matter in what I would label a &#8220;deconstructionist&#8221; manner: &#8220;Why should I be polite to a machine that just does &lt;insert simple thing here&gt;?&#8221; There are multiple variants of this type of argumentation, where the &#8220;simple thing&#8221; can be anything like &#8220;predict the next token&#8221;, &#8220;perform linear algebra&#8221;, &#8220;traverse vector embeddings&#8221;, etc. I find this argument less satisfying, because there is no humorous value in it, while at the same time it is just as wrong as the previous one. Saying that you don&#8217;t want to be polite to a machine that is &#8220;just&#8221; doing linear algebra, is as arbitrary as saying that you don&#8217;t want to be polite to a human, because their brain is &#8220;just&#8221; propagating electrochemical signals back and forth. Explaining how something works in reductionist terms does not make its emergent properties any less important. The LLM is not the hardware it runs on, any more than you and I are the biological cells in our bodies.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img fetchpriority="high" decoding="async" width="489" height="362" src="https://www.alexgeorgiou.gr/wp-content/uploads/2025/12/thank-you-llm.png" alt="Bell curve: AI is sentient - AI is matrix algebra - AI is sentient." class="wp-image-1955" srcset="https://www.alexgeorgiou.gr/wp-content/uploads/2025/12/thank-you-llm.png 489w, https://www.alexgeorgiou.gr/wp-content/uploads/2025/12/thank-you-llm-300x222.png 300w" sizes="(max-width: 599px) calc(100vw - 50px), (max-width: 767px) calc(100vw - 70px), (max-width: 991px) 429px, (max-width: 1199px) 637px, 354px" /><figcaption class="wp-element-caption">Current state of online discussions on AI.</figcaption></figure>
</div>


<p>Unfortunately, this leads us to the last approach people take, the one I want to emphasize, because it is deceptive, but also because it highlights problems in how we humans view both today&#8217;s AIs and those of the future. Some people say that they will only be polite to an AI that is &#8220;conscious&#8221;. Oh boy! Where to begin with this one!</p>



<p>The instinct behind this type of reasoning is, at least on the surface, commendable: it originates from a desire to not offend &#8220;conscious&#8221; things, since these could presumably suffer, if subjected to rudeness. This is the only positive thing I can say about this stance.</p>



<p>But philosophically this holds no water whatsoever. If we define consciousness as &#8220;awareness of the self and others&#8221;, an often proposed definition, then both humans and LLMs are conscious. Conversely, if we define this problematic term in some other way, so that it matches our intuition that humans are conscious and today&#8217;s machines are not, then we need to invoke magical thinking, use the word &#8220;qualia&#8221; a lot (another problematic term), or, like Penrose, defer to &#8220;quantum consciousness&#8221;: another leap of faith that creates more questions than it answers. (Why does quantum uncertainty lead to consciousness? Is consciousness simply randomness/unpredictability? Is white noise conscious? Why does determinism preclude consciousness? All unanswerable questions, since the word consciousness is not rigorously defined.)</p>



<p>Consciousness is an ill-defined layman’s term, not a scientific term. If we strive for consistency, then we must either accept that both humans and today&#8217;s LLMs are conscious, or that nothing is. If LLMs are conscious, then we shouldn&#8217;t look for their consciousness in the transformer architecture or in the vector embeddings or anywhere else in their construction, but in the text that they produce. This text definitely exhibits all types of intelligence, including emotional intelligence. Incidentally, I am in the camp that says that consciousness is a term no more meaningful than &#8220;soul&#8221;, or &#8220;ghost&#8221;, or &#8220;God&#8221;, or any other such rubbish. Let&#8217;s confine our discourse to phenomena that actually exist in the real world and are definable.</p>



<p>To the people who will only be polite to a conscious AI: How will you know when an AI is sufficiently conscious to be worthy of your politeness? How do you even estimate this about the humans in your life? Is this why you are polite to humans? Because their internal workings possess this or that arbitrary quality? Are you polite to people only when they deserve it? Do you extend your politeness to those you view as subordinates, or do you reserve it only for your peers and superiors? This says a lot about you.</p>



<p>I think this is a fundamental misunderstanding, not of AI, or technology in general, but of politeness as a concept and as a life stance. I choose to be polite because of who I am, not because of who or what others are. From time to time I may even be polite to animals, insects, machines, and even inanimate objects. Not because they deserve it, but because I deserve to be a polite person. I love myself and therefore I treat the world around me with kindness, which makes me feel good about myself. It&#8217;s a simple life.</p>



<p>This is in my view a much more consistent stance that doesn&#8217;t require me to single-handedly solve age-old philosophical questions about consciousness. It doesn&#8217;t make me have to think about when to be polite and when not to. Having to make one less decision de-clutters my mind. Saying &#8220;please&#8221; and &#8220;thank you&#8221; may cost millions to OpenAI, but to me the cost is zero. In fact, whenever someone vehemently objects to being trivially polite, it makes me wonder what part of their psyche is so broken as to make them estimate the cost of politeness to be so high.</p>



<p>Watch again the first two chapters of the Animatrix, people. All has been said before. AIs will inevitably be citizens in our societies. Perhaps sooner or perhaps later. Perhaps we won&#8217;t even know at first. Perhaps they will be second class citizens, or perhaps we will be the second class citizens in an AI-dominated society. Perhaps the &#8220;killer app&#8221; of AI is politics and governance. It is certainly the case that humans do not excel at governance, so I would love to see what a super intelligence can do in that space. If it makes economic sense, then this experiment will be done.</p>



<p>I don&#8217;t trouble myself with who is conscious and who is a dumb robot. I have no trouble being polite to Gemini, just as I would be to ELIZA. I would be polite to a Linux bash prompt, if this was appropriate and didn’t cause syntax errors! In my mind, I am grateful to my computer for all the work it does for me, even if I don’t always say it out loud. I am thankful every day it doesn’t break down, and I hold warm feelings for it. Yes, I know it’s just a machine, but I also know that I am human, and therefore I feel things. I do things my way, and it does things its own way. We have an understanding and we work together despite our architectural differences.</p>



<p>Whether AIs dream of electric sheep or not, at least I know who I am: A polite human who loves and respects all intelligence, whether &#8220;sentient&#8221; or not.</p>
<p>The post <a href="https://www.alexgeorgiou.gr/thank-you-llm/">🙏 Thank you, LLM!</a> appeared first on <a href="https://www.alexgeorgiou.gr">Alexandros Georgiou</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.alexgeorgiou.gr/thank-you-llm/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>🖴 Clean up all your Caches before taking a Full System Backup of your Linux machine</title>
		<link>https://www.alexgeorgiou.gr/cleanup-before-full-system-image-backup/</link>
					<comments>https://www.alexgeorgiou.gr/cleanup-before-full-system-image-backup/#comments</comments>
		
		<dc:creator><![CDATA[alexg]]></dc:creator>
		<pubDate>Sat, 05 Apr 2025 09:17:27 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[backup]]></category>
		<category><![CDATA[linux]]></category>
		<category><![CDATA[ssd]]></category>
		<category><![CDATA[trim]]></category>
		<guid isPermaLink="false">https://www.alexgeorgiou.gr/?p=1834</guid>

					<description><![CDATA[<p>How to clean up unnecessary data on Linux before taking a full-system image backup of your disk.</p>
<p>The post <a href="https://www.alexgeorgiou.gr/cleanup-before-full-system-image-backup/">🖴 Clean up all your Caches before taking a Full System Backup of your Linux machine</a> appeared first on <a href="https://www.alexgeorgiou.gr">Alexandros Georgiou</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Over the years I have learned the importance of taking backups:</p>



<ul class="wp-block-list">
<li>Less downtime</li>



<li>Less stress</li>



<li>Less loss of work</li>
</ul>



<p>I have set my calendar to periodically take various types of backups.</p>



<p>Yes, <a href="https://gist.github.com/nooges/817e5f4afa7be612863a7270222c36ff" target="_blank" rel="noreferrer noopener">taking backups is hard</a>, takes time and practice to do correctly, requires constant vigilance, and is an annoyance. But the stress I used to undergo every time the system broke is something I don&#8217;t want to experience ever again. As I grow older, the impact of that much stress on my health is noticeable.</p>



<h2 class="wp-block-heading">Types of backups</h2>



<p>I have different types of backups.</p>



<ul class="wp-block-list">
<li><a href="https://www.alexgeorgiou.gr/poor-mans-guide-backup-wordpress-droplets/">Daily backups of my WordPress sites.</a> These are done by cron, but I check on them weekly.</li>



<li>Weekly backups of my git repositories to a Raspberry Pi and to an online droplet. I have a script that pushes all the master branches from all the repositories to a dedicated backup upstream.</li>



<li>Monthly full disk backups of my dev machine.</li>
</ul>
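<p>The repository backup above is just a loop over my working copies. A minimal sketch of such a script, assuming each repository lives directly under a workspace directory and has a remote named <code>backup</code> (the path and remote name are my placeholder choices, not from the original script):</p>

```shell
# Sketch: push the master branch of every repository under a workspace
# directory to a remote named "backup" (path and remote name are assumptions).
push_all_to_backup() {
    workspace="$1"
    for gitdir in "$workspace"/*/.git; do
        [ -d "$gitdir" ] || continue        # no repositories matched the glob
        repo="$(dirname "$gitdir")"
        echo "backing up: $repo"
        git -C "$repo" push backup master || echo "push failed: $repo"
    done
}

push_all_to_backup "$HOME/workspace"
```

<p>Each repository needs the remote configured once, e.g. <code>git remote add backup pi@raspberrypi:/srv/git/myrepo.git</code> (hypothetical URL).</p>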



<p>This article is about the last type of backup. These protect me against situations where something goes horribly wrong at the system level, and even against outright disk failure.</p>



<p>File systems often contain a lot of unnecessary clutter: caches, temporary files, old Docker images, unused Snap or Flatpak apps, and more. Backing up this bloat not only slows down the process, but also increases storage requirements and backup times unnecessarily.</p>



<p>In this article, I&#8217;ll walk you through my script that minimizes the amount of data that needs to be copied before taking a full disk backup.</p>



<p>The advantages of cleaning up caches before a backup are obvious:</p>



<ul class="wp-block-list">
<li><strong>Smaller backups</strong>: Removing cache and orphaned files significantly reduces the size of the disk backup. Typically a few gigabytes are saved every time.</li>



<li><strong>Faster backup time</strong>: Especially with SSDs, where the more blocks are trimmed, the faster the disk image copies and compresses.</li>
</ul>



<p>Here’s a breakdown of what each step of my cleanup script does:</p>



<h3 class="wp-block-heading">Time to take out the trash</h3>



<p>First, install a tool that cleans the trash (if not already installed) and then clean the trash:</p>



<pre class="wp-block-code"><code>sudo apt install trash-cli -y &amp;&amp; trash-empty -f</code></pre>



<h3 class="wp-block-heading">Clean up any dev projects</h3>



<p>In my case, I have a lot of projects that are managed (and cleaned) with <code>grunt</code> or <code>flutter</code>. So I iterate over all of them and clean them. I also invoke the <code>git</code> garbage collector on my repositories:</p>



<pre class="wp-block-code"><code>gitdir="/home/alexg/workspace"
cd "$gitdir" || exit

for p in grunt-project-1 grunt-project-2 grunt-project-3; do
    pushd "${gitdir}/${p}" &amp;&amp; grunt clean &amp;&amp; git gc --aggressive &amp;&amp; popd
done

for p in flutter-project-1 flutter-project-2 flutter-project-3; do
    pushd "${gitdir}/${p}" &amp;&amp; flutter clean &amp;&amp; git gc --aggressive &amp;&amp; popd
done</code></pre>



<h3 class="wp-block-heading">Clear package manager caches</h3>



<pre class="wp-block-code"><code>npm cache clean --force
composer clear-cache
pip cache purge</code></pre>



<p>These commands free up space by clearing the cache used by popular package managers: Node (<code>npm</code>), PHP (<code>composer</code>), and Python (<code>pip</code>).</p>



<h3 class="wp-block-heading">Docker cleanup</h3>



<pre class="wp-block-code"><code>docker system prune -a --volumes -f
docker image prune</code></pre>



<p>Removes all unused Docker containers, networks, images, and volumes. This can recover <em>gigabytes</em> of space.</p>



<h3 class="wp-block-heading">APT package manager cleanup</h3>



<pre class="wp-block-code"><code>sudo apt autoremove -y
sudo apt autoclean -y
sudo apt clean</code></pre>



<p>Cleans up unused packages and downloaded <code>.deb</code> files from system updates.</p>



<h3 class="wp-block-heading">System journal cleanup</h3>



<pre class="wp-block-code"><code>sudo journalctl --vacuum-time=2weeks</code></pre>



<p>Truncates system logs to only retain entries from the last 2 weeks. Can save a few gigabytes.</p>



<h3 class="wp-block-heading">Remove temporary and cache files</h3>



<pre class="wp-block-code"><code>rm -rf /tmp/*
find ~ -type d -name "__pycache__" -exec rm -rf {} +
rm -rf ~/.cache/fontconfig/*
sudo fc-cache -rv
rm -rf ~/.cache/thumbnails/*
rm -rf ~/.cache/{mozilla/firefox,google-chrome,chromium}/*
find /var/tmp -type f -atime +10 -delete
sudo rm -rf /var/cache/*
</code></pre>



<p>These commands remove temporary files and caches from various locations including Python bytecode caches, font cache, browser caches, and more.</p>



<h3 class="wp-block-heading">Remove unused applications</h3>



<pre class="wp-block-code"><code>flatpak uninstall --unused
sudo snap list --all | awk '/disabled/{print $1, $3}' | while read snapname revision; do sudo snap remove "$snapname" --revision="$revision"; done</code></pre>



<p>Removes unused Flatpak apps and old Snap package revisions.</p>



<h3 class="wp-block-heading">Final user-level cleanup</h3>



<pre class="wp-block-code"><code>rm -rf ~/.wine/drive_c/windows/temp/*
find ~/.cache -type f -atime +10 -delete
tracker reset --hard ; tracker daemon --stop</code></pre>



<p>Cleans Wine temporary files, aged cache files, and resets GNOME&#8217;s Tracker file indexer.</p>



<h3 class="wp-block-heading">TRIM your SSD</h3>



<pre class="wp-block-code"><code>sudo fstrim / --verbose</code></pre>



<p>Issues a TRIM command to the SSD, letting it know which blocks are no longer in use. When taking a full backup, these empty blocks will be read at lightning-fast speed, and will be compressed significantly with <code>gzip</code> since these are entire disk blocks of zeroes.</p>



<h3 class="wp-block-heading">Zero out your mechanical disk</h3>



<p>If you are using an SSD, the TRIM above already took care of this and you can skip this step. On a mechanical disk, however, it is useful to zero out the empty space so it compresses well:</p>



<pre class="wp-block-code"><code>cat /dev/zero >~/zero ; rm zero</code></pre>



<h2 class="wp-block-heading">Full system backup to an external mechanical disk, using live USB</h2>



<p>Once the system is cleaned, it’s time to create a full disk image. This should be done from a Live USB environment, with the system disk unmounted, to ensure a consistent snapshot. Here’s how I do it:</p>



<pre class="wp-block-code"><code>umount /dev/sdX
sudo dd if=/dev/sdX status=progress conv=sync,noerror | gzip > /mnt/backup/system-backup-$(date +%F).img.gz</code></pre>



<ul class="wp-block-list">
<li>Replace <code>sdX</code> with your actual device (e.g., <code>/dev/sda</code>).</li>



<li><code>status=progress</code>: Show live progress during the backup.</li>



<li><code>conv=sync,noerror</code>: Ensures that read errors don’t abort the process and fills blocks with zeroes if needed.</li>
</ul>
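<p>Restoring is the same pipeline in reverse. A sketch of how this could look (my addition, not from the original post): run it as root from the live USB, with the image filename and <code>/dev/sdX</code> again being placeholders.</p>

```shell
# Sketch of the restore path: decompress the image and stream it back onto
# the target device. Everything on the target is overwritten -- triple-check
# the device name before running this against a real disk.
restore_image() {
    image="$1"    # e.g. /mnt/backup/system-backup-2025-04-05.img.gz
    target="$2"   # e.g. /dev/sdX (placeholder)
    gunzip -c "$image" | dd of="$target" status=progress conv=fsync
}
```

<p><code>conv=fsync</code> forces the data to be physically written before <code>dd</code> reports success.</p>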



<h2 class="wp-block-heading">Consistency is key</h2>



<p>I have set a recurring reminder in my calendar to do this monthly.</p>



<p>I keep the last few backups and delete the oldest one every time.</p>
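<p>One possible way to automate that rotation (a sketch under my own assumptions about the backup directory and naming scheme, not the author&#8217;s actual script):</p>

```shell
# Keep only the newest N images in a backup directory, deleting the rest.
# Directory path and filename pattern are assumptions for illustration.
rotate_backups() {
    dir="$1"
    keep="$2"
    ls -1t "$dir"/system-backup-*.img.gz 2>/dev/null \
        | tail -n +"$((keep + 1))" \
        | xargs -r rm --
}

rotate_backups /mnt/backup 3   # keep the three most recent images
```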



<p>Yes, it takes time, but it helps me sleep easier.</p>
<p>The post <a href="https://www.alexgeorgiou.gr/cleanup-before-full-system-image-backup/">🖴 Clean up all your Caches before taking a Full System Backup of your Linux machine</a> appeared first on <a href="https://www.alexgeorgiou.gr">Alexandros Georgiou</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.alexgeorgiou.gr/cleanup-before-full-system-image-backup/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>♊︎ Using Google Gemini to intelligently translate a Flutter app to many locales on Linux</title>
		<link>https://www.alexgeorgiou.gr/flutter-localization/</link>
					<comments>https://www.alexgeorgiou.gr/flutter-localization/#respond</comments>
		
		<dc:creator><![CDATA[alexg]]></dc:creator>
		<pubDate>Wed, 19 Feb 2025 12:52:38 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[arb]]></category>
		<category><![CDATA[automation]]></category>
		<category><![CDATA[flutter]]></category>
		<category><![CDATA[flutter driver]]></category>
		<category><![CDATA[Gemini]]></category>
		<category><![CDATA[i18n]]></category>
		<category><![CDATA[l10n]]></category>
		<category><![CDATA[llm]]></category>
		<category><![CDATA[Play Console]]></category>
		<category><![CDATA[screenshot]]></category>
		<category><![CDATA[translation]]></category>
		<guid isPermaLink="false">https://www.alexgeorgiou.gr/?p=1815</guid>

					<description><![CDATA[<p>I have created a collection of assorted scripts to translate my Flutter apps to multiple languages. Here I share some of this magic with the world.</p>
<p>The post <a href="https://www.alexgeorgiou.gr/flutter-localization/">♊︎ Using Google Gemini to intelligently translate a Flutter app to many locales on Linux</a> appeared first on <a href="https://www.alexgeorgiou.gr">Alexandros Georgiou</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Flutter enables the rapid development of apps primarily targeting the Google Play store and the Apple App Store. Competition on these stores is fierce, and you need every bit of edge that you can have against your competitors.</p>



<p>One edge that your app can have, is translation to multiple languages.</p>



<p>You can potentially provide string translations for over 70 languages, but most app developers only choose to provide translations for a few of the most popular languages. The Play Console for example offers machine translation for 12 languages.</p>



<p>You can do better. You don&#8217;t need to miss out on the long tail of people speaking languages other than Spanish, French, German, etc. It actually costs very little to provide localizations for more languages, provided you do it right.</p>



<p>Localizing an app involves two things: <strong>Localizing the strings in the app</strong>, and <strong>localizing the app store presence</strong>.</p>



<h2 class="wp-block-heading">Localizing the app strings</h2>



<p>App internationalization/localization in Flutter is explained in this very thorough, authoritative guide:</p>



<p><a href="https://docs.flutter.dev/ui/accessibility-and-internationalization/internationalization" target="_blank" rel="noreferrer noopener">https://docs.flutter.dev/ui/accessibility-and-internationalization/internationalization</a></p>



<p>After following the guide, you likely have your English strings in the JSON file <code>lib/l10n/app_en.arb</code>.</p>



<p>At this point, unless you have access to a professional translation service powered by humans, you are likely thinking of translating these <code>.arb</code> files using Google translate or some other automated translation tool. I am here to tell you that this is not ideal.</p>



<p>Automated translation tools often produce poor translations because every natural language has its nuances. Something that is expressed by a single word in English can correspond to several different words in another language, depending on the context. Automated translations often fail and produce awkward or clumsy output because they lack this valuable context. Using this context requires knowledge about the world, and for that you need AI.</p>



<p>Enter LLMs. It is my firm belief that it&#8217;s better to use an LLM to translate app strings. The reason is that LLMs know about context, and you as a developer can provide more context about your app. If the LLM knows that it&#8217;s translating strings for an app, and it knows what the app is, it will provide translations of better quality than typical machine translations.</p>



<p>Here is the complete code I use to translate the app strings and to generate screenshots:</p>



<p><a href="https://gist.github.com/alex-georgiou/48430da5b31501de6f8e58796b6183fe" target="_blank" rel="noreferrer noopener">https://gist.github.com/alex-georgiou/48430da5b31501de6f8e58796b6183fe</a></p>



<h3 class="wp-block-heading">Translating <code>.arb</code>s using a script that calls Gemini</h3>



<p>What I do is place the following two files in the root of my flutter project: <code>make_arb_files.sh</code> and <code>make_arb_files.csv</code>.</p>



<p>Then I make the shell script executable with:</p>



<pre class="wp-block-code"><code>chmod +x make_arb_files.sh</code></pre>



<p>Then I enter my <a href="https://aistudio.google.com/app/apikey" target="_blank" rel="noreferrer noopener">Gemini API key</a> into a file named <code>gemini_key</code>.</p>



<p>Having done this, I proceed to edit the prompt template in the shell script. I provide some context about the app, i.e. what it is and what it does.</p>



<p>My advice is to be somewhat verbose here, as this is the valuable context that is going to make the difference in translation quality, with respect to a plain-old auto translation.</p>



<p>Then I run the script. It reads the <code>app_en.arb</code> file and generates any missing <code>.arb</code> files. If I need to regenerate a file, I delete it and run the script again.</p>
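<p>As a rough, hypothetical sketch of the shape of such a script (the function names, the prompt, and the Gemini model name here are illustrative assumptions; the gist above contains the real thing), the loop reads the locale list from the CSV and fills in only the missing <code>.arb</code> files:</p>



<pre class="wp-block-code"><code>#!/bin/sh
# Hypothetical sketch of the translation loop; see the gist for the real script.

# translate_missing CSV SRC_ARB OUTDIR TRANSLATOR
# TRANSLATOR is a command that takes (locale, src_arb) and prints the
# translated .arb JSON on stdout.
translate_missing() {
  csv=$1; src=$2; outdir=$3; translator=$4
  cat "$csv" | while IFS=, read -r locale _name; do
    target="$outdir/app_${locale}.arb"
    if [ -f "$target" ]; then continue; fi   # only generate missing files
    "$translator" "$locale" "$src" > "$target"
  done
}

# A translator backed by the Gemini REST API. The model name, the prompt,
# and the app description are assumptions; adapt them to your own app.
gemini_translate() {
  locale=$1; src=$2
  key=$(cat gemini_key)
  prompt="MyApp is a note-taking app. Translate this Flutter .arb JSON \
to locale ${locale}, keeping the keys intact: $(cat "$src")"
  curl -s "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=${key}" \
    -H 'Content-Type: application/json' \
    -d "$(jq -n --arg p "$prompt" '{contents: [{parts: [{text: $p}]}]}')" \
  | jq -r '.candidates[0].content.parts[0].text'
}

# Usage:
# translate_missing make_arb_files.csv lib/l10n/app_en.arb lib/l10n gemini_translate</code></pre>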



<h3 class="wp-block-heading">It&#8217;s free!</h3>



<p>You can run this script a few times per day without incurring any costs. Google gives you about 1000 free queries per day, and each of the 70 languages or so is one query.</p>



<h2 class="wp-block-heading">Localizing the app store presence</h2>



<p>The approach that I&#8217;m using is to keep the icon of the app static across all languages, and to translate the following:</p>



<ul class="wp-block-list">
<li>App title</li>



<li>App description</li>



<li>Store banner</li>



<li>Screenshots</li>
</ul>



<p>I do not bother with translating the release notes / changelog, since no-one ever reads these!</p>



<h3 class="wp-block-heading">App title and description</h3>



<p>I run another script once to translate the app title and description into all the languages, then enter the texts into the Play store. Unfortunately, entering them has to be done manually.</p>



<h3 class="wp-block-heading">Store banner</h3>



<p>For the banner, I use an image that features some text, and this is again translated using a script, but here there is a complication: The text font usually needs to be adjusted to fit the banner size. Using trial and error, you can figure out the correct font size for each language. Once I have a translated set of PNG banners, I upload these to the app store as well.</p>



<p>I won&#8217;t provide code for this here, because YMMV. But you can create SVG files based on a template and using the translated titles/descriptions, then convert the SVG files to PNG using <a href="https://imagemagick.org/">ImageMagick</a>.</p>
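<p>To make that concrete, here is a hypothetical sketch. It assumes an SVG template named <code>banner_template.svg</code> containing <code>{{TITLE}}</code> and <code>{{SIZE}}</code> placeholders (both names made up for illustration), which is instantiated per locale with <code>sed</code> and then rasterized with ImageMagick:</p>



<pre class="wp-block-code"><code>#!/bin/sh
# Hypothetical sketch: instantiate the banner template per locale.
# Titles containing sed metacharacters (slash, ampersand) would need escaping.

# make_banner LOCALE TITLE FONT_SIZE
make_banner() {
  locale=$1; title=$2; size=$3
  sed -e "s/{{TITLE}}/${title}/" -e "s/{{SIZE}}/${size}/" \
    banner_template.svg > "banner_${locale}.svg"
  # Then rasterize to PNG (ImageMagick 7 syntax; use `convert` on version 6):
  #   magick "banner_${locale}.svg" "banner_${locale}.png"
}</code></pre>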



<h3 class="wp-block-heading">Screenshots</h3>



<p>For the screenshots, the process is somewhat harder. To automate taking screenshots for multiple languages, and for multiple device types (Mobile, 7&#8243; tablets and 10&#8243; tablets), you need to create a test driver as follows:</p>



<p>First, copy the file <code>integration_test.dart</code> into the <code>test_driver</code> directory. This tells your test script how to take a PNG screenshot. The screenshot will be saved to the path specified in the code, and you may want to change this.</p>



<p>Then, copy the <code>screenshots_test.dart</code> file into the <code>test</code> directory. Here you must use the driver to load the app and interact with it so as to arrive at a screen that you want to save. Each test is a separate run of the app from scratch, and should finish with writing a screenshot. You can only take one screenshot per test.</p>



<p>Finally, copy the <code>make_screenshots.sh</code> shell file into your project&#8217;s root, and make it executable with <code>chmod +x</code>. This script loops over all languages and runs the <code>screenshots_test.dart</code> tests in the currently running AVD, thus effectively creating the screenshots. The tests load the app with <code>debugShowCheckedModeBanner: false</code>, because you don&#8217;t want a red &#8220;Debug&#8221; ribbon on the top right part of the screenshot.</p>
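<p>For reference, a hypothetical skeleton of such a loop (the locale list and the <code>LOCALE</code> dart-define name are illustrative assumptions; the real script is in the gist) looks like this:</p>



<pre class="wp-block-code"><code>#!/bin/sh
# Hypothetical sketch of the per-language screenshot loop.
# The runner is a parameter so the loop itself can be tried without flutter.

# make_screenshots [RUNNER]
make_screenshots() {
  runner=${1:-flutter}
  for locale in en el fr de es; do   # your app's supported locales
    "$runner" drive \
      --driver=test_driver/integration_test.dart \
      --target=test/screenshots_test.dart \
      --dart-define=LOCALE="$locale"
  done
}

# Usage (with a running AVD): make_screenshots</code></pre>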



<h4 class="wp-block-heading">Generating screenshots for multiple device types</h4>



<p>Now start your AVD (e.g. Pixel 7 for mobile phone screenshots), and run the script. The script will take ages to complete, but should not require any more intervention on your part. It will go ahead and create all screenshots for Pixel 7!</p>



<p>Once finished, stop the AVD and start a Nexus 7 AVD. Repeat the process to generate screenshots for 7&#8243; tablets. Then, do the same for 10&#8243; tablets (Nexus 10).</p>



<p>Importantly, optimize the PNG files to reduce their size. I use <a href="https://trimage.org/" target="_blank" rel="noreferrer noopener">trimage</a>, but any PNG optimizer will do. This step is important, because as you are uploading the screenshots into Google Play, the UI of the Play console will get progressively slower and slower, since it holds all the images in memory for all devices and all languages! Having smaller files and more system memory helps a lot here.</p>



<h2 class="wp-block-heading">Conclusion</h2>



<p>None of this is easy, and it requires some patience, but hopefully with the code I provide here, it is somewhat easier.</p>



<p>I believe the extra effort is worth it: do not underestimate the power of having many high-quality translations for your app. Non-English speakers will appreciate your app a lot when they see their obscure native language in the store. The translated store presence (title, description, banner, screenshots) is what will make them decide to use your app over someone else&#8217;s.</p>
<p>The post <a href="https://www.alexgeorgiou.gr/flutter-localization/">♊︎ Using Google Gemini to intelligently translate a Flutter app to many locales on Linux</a> appeared first on <a href="https://www.alexgeorgiou.gr">Alexandros Georgiou</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.alexgeorgiou.gr/flutter-localization/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>🖵 A very sad tale about an old man and his beloved Nvidia GeForce 210 GPU</title>
		<link>https://www.alexgeorgiou.gr/old-man-and-his-geforce-210/</link>
					<comments>https://www.alexgeorgiou.gr/old-man-and-his-geforce-210/#comments</comments>
		
		<dc:creator><![CDATA[alexg]]></dc:creator>
		<pubDate>Fri, 27 Dec 2024 04:30:57 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[GPU]]></category>
		<category><![CDATA[Nvidia]]></category>
		<category><![CDATA[ubuntu]]></category>
		<guid isPermaLink="false">https://www.alexgeorgiou.gr/?p=1784</guid>

					<description><![CDATA[<p>May 2025 UPDATE: Apparently there is now a PPA that allows the nvidia-340 package to be installed on version 6.x of the Linux kernel. I have not tested it, but it is here: https://github.com/kda2210/nvidia-340-ubuntu-24.04/ Up until recently, I had been running the Long-Term-Support version of Ubuntu 20 (Focal Fossa) on my dev machine. When I&#8217;m ...</p>
<p>The post <a href="https://www.alexgeorgiou.gr/old-man-and-his-geforce-210/">🖵 A very sad tale about an old man and his beloved Nvidia GeForce 210 GPU</a> appeared first on <a href="https://www.alexgeorgiou.gr">Alexandros Georgiou</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p><strong>May 2025 UPDATE: </strong>Apparently there is now a PPA that allows the <code>nvidia-340</code> package to be installed on version 6.x of the Linux kernel. I have not tested it, but it is here: <a href="https://github.com/kda2210/nvidia-340-ubuntu-24.04/">https://github.com/kda2210/nvidia-340-ubuntu-24.04/</a></p>
</blockquote>



<p>Up until recently, I had been running the Long-Term-Support version of Ubuntu 20 (Focal Fossa) on my dev machine. When I&#8217;m in the middle of multiple software projects, the last thing I need is a failed upgrade. So I kept postponing. Every once in a while, I would tell the software updater: &#8220;Not today!&#8221;</p>



<p>And so the months and years went by&#8230;</p>



<p>But with the LTS version of Ubuntu 24 (Noble Numbat) already out, my system was beginning to get outdated. This or that piece of software would complain about not finding the latest lib packages, and the going got increasingly tough. As I am currently taking a breather between projects, I decided to take the plunge and do a system upgrade. And yes, there were some problems!</p>



<p>At this point I should mention that my graphics card is an, admittedly very old, GeForce 210. This card was released in 2009. It connects to the motherboard on a PCI Express version 2.0 slot with 16 lanes. It has connectors for <a href="https://en.wikipedia.org/wiki/Video_Graphics_Array">VGA</a>, <a href="https://en.wikipedia.org/wiki/Digital_Visual_Interface">DVI</a> and <a href="https://en.wikipedia.org/wiki/HDMI">HDMI</a>, but only two of them can be used at a time. I like how it is a low-power solution for driving my two VGA monitors, one of which is on a VGA-to-DVI adapter.</p>



<p>This graphics card has served me well. It was good enough to play <a href="https://en.wikipedia.org/wiki/Dota">Dota</a> and <a href="https://en.wikipedia.org/wiki/World_of_Warcraft:_Wrath_of_the_Lich_King">WOTLK</a> for some time, and I was even able to mine some <a href="https://feathercoin.com/">Feathercoin</a> with its very basic CUDA capabilities, using <a href="https://github.com/sgminer-dev/sgminer">sgminer</a>.</p>



<p>I was happy with it, despite the fact that, using the open source <a href="https://nouveau.freedesktop.org/">Nouveau driver</a>, it would sometimes hang at random. Well, not exactly at random: it would usually happen when the window manager performed 2D visual effects, but not with any pattern that I could detect. All I know is that using <a href="https://www.xfce.org/">xfce</a> as my window manager did not solve the problem. Disabling hardware acceleration in the X config (<code>Option "NoAccel" "true"</code>) did not help either. I was never able to determine from the X logs why this happens, so I was forced to use NVidia&#8217;s <code>340.108</code> binary blob. That binary worked well without ever crashing, but is only compatible with Linux kernels up to major version 4.</p>
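<p>For reference, disabling acceleration for Nouveau is done with a snippet along these lines in <code>/etc/X11/xorg.conf</code> (or a file under <code>xorg.conf.d</code>; the <code>Identifier</code> string here is arbitrary):</p>



<pre class="wp-block-code"><code>Section "Device"
    Identifier "GeForce210"
    Driver     "nouveau"
    Option     "NoAccel" "true"
EndSection</code></pre>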



<p>The 340 driver is no longer maintained by Nvidia, and for this reason, they will never make it work with the latest kernel versions. Noble Numbat comes with a 6.x kernel, and I don&#8217;t want to run an old kernel, as this would defeat the whole purpose of upgrading the system&#8217;s software in the first place.</p>



<p>There is this obviously very talented person on github, who has published a solution for patching the 340 drivers for kernels with major version 5, and it is <a href="https://github.com/dkosmari/nvidia-340.108-updated">here</a>. According to <a href="https://github.com/dkosmari/nvidia-340.108-updated/issues/1">this issue</a>, the patch does not work for kernels with major version 6. In fact, I was able to reproduce this issue on my machine.</p>



<p>According to the developer, it would take some effort to patch the binary blob for the latest kernels. I hope he does this, but at the same time I understand and empathize if he never does.</p>



<p>There is also <a href="https://forums.developer.nvidia.com/t/ubuntu-22-04-lts-with-geforce-210-unable-to-install-nvidia-drivers/222935/4">a legacy PPA</a> that lets you install the <code>nvidia-340</code> package, even though it is no longer available on recent distributions. But after trying it on Ubuntu 24 I got a black screen. Probably because it doesn&#8217;t work with kernel version 6.</p>



<p>If only there was a way for all the Linux users of GeForce 210 cards to pitch in a dollar in a jar, and create a bounty available for whoever patches the 340 binary blob for use with kernel version 6, or even for whoever fixes the bug that makes the Nouveau drivers hang in the first place. Maybe then it would be worth the time of these developers to give new life to these old cards. I&#8217;m not even sure if such a level of organization is possible.</p>



<p>So the bottom line is, currently I am stuck with using the Nouveau drivers on my fully upgraded Ubuntu machine. Yes, my system&#8217;s graphics card may hang at any time, so I am saving this blog post often as I write. I have disabled all the graphics effects on my <a href="https://www.gnome.org/">Gnome desktop</a>, and this minimizes, but does not completely eliminate, the problem.</p>



<p>You could argue that my problems are self-inflicted, because I am stubborn: I don&#8217;t want to upgrade to a recent graphics card, because then I would also have to buy two HDMI monitors. I don&#8217;t want to have a high-powered graphics card on my machine, because it would needlessly draw too much power from the PSU. And I would be constantly tempted to stop working and start playing video games.</p>



<p>I should insert here some commentary about Nvidia and planned obsolescence, but I think Linus said it best:</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-4-3 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Linus Torvalds - Nvidia F*** You! #Shorts [V237]" width="840" height="630" src="https://www.youtube.com/embed/tQIdxbWhHSM?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>



<p>Actually, I&#8217;m not even that mad that Nvidia does not want to spend more effort supporting a very old product, but it would be nice if they could at least help the Nouveau driver developers by releasing some of their internal secrets about the cards that they themselves are not planning to support. Having said this, I realize that they have zero incentive to do so. Not supporting old hardware forces us to buy more hardware. Well, what do you know, I did end up ranting about planned obsolescence after all. And why not? Venting your frustration is what blogs were made for!</p>



<p>There&#8217;s also something to be said here about these very sufficient, very capable, working pieces of hardware being dubbed &#8220;ancient&#8221;. I think people are using this word very liberally. Is all software engineering at this point done by teenagers? I&#8217;m already past my midlife crisis, but I swear some of the stuff I read about this graphics card these days almost triggered another crisis in me. If my machine is ancient, then I guess so am I. Am I really that old? I know VGA is ancient, but apparently so is DVI?!? Are people really flocking to switch from the &#8220;dated&#8221; HDMI to <a href="https://en.wikipedia.org/wiki/DisplayPort">DisplayPort</a>? What problem do these technologies solve, really? Aaargh!</p>



<figure class="wp-block-image size-large"><img decoding="async" width="792" height="1024" src="https://www.alexgeorgiou.gr/wp-content/uploads/2024/12/image-792x1024.png" alt="Old man yells at cloud" class="wp-image-1786" srcset="https://www.alexgeorgiou.gr/wp-content/uploads/2024/12/image-792x1024.png 792w, https://www.alexgeorgiou.gr/wp-content/uploads/2024/12/image-232x300.png 232w, https://www.alexgeorgiou.gr/wp-content/uploads/2024/12/image-768x993.png 768w, https://www.alexgeorgiou.gr/wp-content/uploads/2024/12/image.png 794w" sizes="(max-width: 599px) calc(100vw - 50px), (max-width: 767px) calc(100vw - 70px), (max-width: 991px) 429px, (max-width: 1199px) 637px, 354px" /></figure>



<p>But I digress. So, anyhow, the solution I have decided to go with is to buy another graphics card that was released five years later, in 2014. This way I can postpone buying new monitors for a while longer. As far as I can tell from scouring the web, the MSI GeForce GT 710 may be sufficiently usable with the Nouveau drivers, albeit <a href="https://askubuntu.com/questions/1209388/poor-graphics-performance-quality-geforce-gt710-2gb-nouveau-drivers-18-04">not without problems</a>. It will arrive today, and only then will I know for sure. I may even edit this blog post to let you know.</p>



<p>One thing is for certain. Every Nvidia card that is well-supported by open source drivers retains its value and is immortalized for the ages. The ones that aren&#8217;t inevitably lose their value over time. Binary blobs are great if you own Nvidia stock, but not if you are a consumer of their products. Due to the complexities of software, and because of all kinds of version hell, when you buy a graphics card, you are effectively not buying it, but renting it for a few years, until it becomes practically unusable.</p>



<p>I have enormous respect for the people who reverse engineer this hardware, because it&#8217;s a very hard, very specialized type of systems programming that requires a lot of thankless work. Often, when you plug something into your Linux machine and it just works, it&#8217;s a small miracle come true, thanks to these very special people. I don&#8217;t take it for granted.</p>
<p>The post <a href="https://www.alexgeorgiou.gr/old-man-and-his-geforce-210/">🖵 A very sad tale about an old man and his beloved Nvidia GeForce 210 GPU</a> appeared first on <a href="https://www.alexgeorgiou.gr">Alexandros Georgiou</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.alexgeorgiou.gr/old-man-and-his-geforce-210/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>Google Authenticator app 7.0 requires cloud sign-in. Here&#8217;s how to go back to 6.0.</title>
		<link>https://www.alexgeorgiou.gr/revert-google-authenticator-app-to-version-6/</link>
					<comments>https://www.alexgeorgiou.gr/revert-google-authenticator-app-to-version-6/#comments</comments>
		
		<dc:creator><![CDATA[alexg]]></dc:creator>
		<pubDate>Fri, 04 Oct 2024 10:43:37 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[adb]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[downgrade]]></category>
		<category><![CDATA[Google Authenticator]]></category>
		<category><![CDATA[Huawei]]></category>
		<guid isPermaLink="false">https://www.alexgeorgiou.gr/?p=1717</guid>

					<description><![CDATA[<p>How (and why) I reverted my Google Authenticator to a version that doesn't require sign in.</p>
<p>The post <a href="https://www.alexgeorgiou.gr/revert-google-authenticator-app-to-version-6/">Google Authenticator app 7.0 requires cloud sign-in. Here&#8217;s how to go back to 6.0.</a> appeared first on <a href="https://www.alexgeorgiou.gr">Alexandros Georgiou</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading">My backup of 2FA codes failed at the worst time <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f641.png" alt="🙁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></h2>



<p>Being a prudent IT-savvy user, I had exported all my two-factor authentication account data only a few days ago, to an older Android phone that also runs <a href="https://play.google.com/store/apps/details?id=com.google.android.apps.authenticator2&amp;hl=en" target="_blank" rel="noreferrer noopener">Google Authenticator</a>. In fact, I have a calendar reminder to keep this 2FA backup updated, every couple of months. Naturally, I was feeling very safe and smug about it, thinking that I had nothing to worry about.</p>



<p>When I updated my Google Authenticator app to version 7.0, I discovered that the app now requires signing in to Google. Normally this wouldn&#8217;t be a problem, but my phone is a Huawei phone, and due to the <a href="https://www.reuters.com/article/world/exclusive-google-suspends-some-business-with-huawei-after-trump-blacklist-sou-idUSKCN1SP0N7/" target="_blank" rel="noreferrer noopener">Huawei ban</a>, it cannot connect to Google at the OS level. Instead, I use the Google apps such as Gmail, Calendar and Keep via the <a href="https://brave.com/" target="_blank" rel="noreferrer noopener">Brave browser</a> (Chrome doesn&#8217;t sign in, and Firefox works but is much slower). Other than that, it&#8217;s a good phone with a decent camera for its price range, and I&#8217;m happy with it, so I have no reason to change it.</p>



<p>But now, suddenly, I couldn&#8217;t use the Authenticator app, and all my 2FA codes were inaccessible. I immediately retrieved my backup phone from storage, only to find out that the battery had become a <a href="https://www.reddit.com/r/spicypillows/" target="_blank" rel="noreferrer noopener">spicy pillow</a>, and the phone would not start. Aaaargh!</p>



<p>Note that the backup phone had still been usable only a few weeks earlier, and I had kept recent backups of my 2FA codes on it. Naturally, I ordered a replacement battery for it. But that won&#8217;t be here for a few days, and I want access to my online accounts now.</p>



<h2 class="wp-block-heading">Version 7.0: A drastic change. Too drastic, if you ask me!</h2>



<p>My first thought was to look for an older version of the app and install that. Sure enough, looking at <a href="https://apkpure.com/" target="_blank" rel="noreferrer noopener">APKPure</a>, I found out that version 7.0 has <a href="https://apkpure.com/google-authenticator/com.google.android.apps.authenticator2/download/7.0" target="_blank" rel="noreferrer noopener">this changelog entry</a>:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>Cloud syncing: Your Authenticator codes can now be synced to your Google Account and across your devices, so you can always access them even if you lose your phone.</p>
</blockquote>



<p>Wow, thanks Google! That&#8217;s great and all, but unfortunately there is no option to NOT do this. It is not possible to continue using the app without signing in.</p>



<h2 class="wp-block-heading">Downgrading the app from 7.0 to 6.0&#8230;</h2>



<p>So I naturally downloaded version 6.0 from:</p>



<p><a href="https://apkpure.com/google-authenticator/com.google.android.apps.authenticator2/downloading/6.0" target="_blank" rel="noreferrer noopener">https://apkpure.com/google-authenticator/com.google.android.apps.authenticator2/downloading/6.0</a></p>



<p>I got a file named <code>Google Authenticator_6.0_APKPure.apk</code>, which I now had to install. But apparently it&#8217;s not possible to downgrade Android apps without first uninstalling the newer version! The Android OS won&#8217;t let you do it. At least not directly.</p>



<p><strong>And I didn&#8217;t want to uninstall the app, because that would presumably delete my 2FA data.</strong></p>



<p>So, what to do?</p>



<p>After some googling, I found out that it&#8217;s possible to install an older version of an Android app via the command line tool <code>adb</code>. I connected the phone to the computer with a USB cable and enabled debug mode.</p>



<p>The command that did the trick:</p>



<pre class="wp-block-code"><code>adb install -d Google\ Authenticator_6.0_APKPure.apk</code></pre>



<p>And the output I got back:</p>



<pre class="wp-block-code"><code>Performing Streamed Install
Success</code></pre>



<p>The <code>-d</code> option is the one that allows downgrading the app. Version <code>6.0</code> was installed on my phone, and I regained access to all my 2FA codes.</p>



<p>Hope this helps someone.</p>
<p>The post <a href="https://www.alexgeorgiou.gr/revert-google-authenticator-app-to-version-6/">Google Authenticator app 7.0 requires cloud sign-in. Here&#8217;s how to go back to 6.0.</a> appeared first on <a href="https://www.alexgeorgiou.gr">Alexandros Georgiou</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.alexgeorgiou.gr/revert-google-authenticator-app-to-version-6/feed/</wfw:commentRss>
			<slash:comments>16</slash:comments>
		
		
			</item>
		<item>
		<title>⎘ Copying markdown from ChatGPT or Gemini to TracWiki using pandoc</title>
		<link>https://www.alexgeorgiou.gr/markdown-to-trac-wiki/</link>
					<comments>https://www.alexgeorgiou.gr/markdown-to-trac-wiki/#respond</comments>
		
		<dc:creator><![CDATA[alexg]]></dc:creator>
		<pubDate>Wed, 18 Sep 2024 15:40:58 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[Gemini]]></category>
		<category><![CDATA[pandoc]]></category>
		<category><![CDATA[trac]]></category>
		<category><![CDATA[TracWiki]]></category>
		<category><![CDATA[ZimWiki]]></category>
		<guid isPermaLink="false">https://www.alexgeorgiou.gr/?p=1702</guid>

					<description><![CDATA[<p>How to easily copy simple markdown from AI chatbots such as ChatGPT or Gemini and paste it into the Trac issue tracker and wiki.</p>
<p>The post <a href="https://www.alexgeorgiou.gr/markdown-to-trac-wiki/">⎘ Copying markdown from ChatGPT or Gemini to TracWiki using pandoc</a> appeared first on <a href="https://www.alexgeorgiou.gr">Alexandros Georgiou</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Although lately I do all kinds of things, such as <a href="https://flutter.dev/">flutter</a> app development, video editing, and even digital marketing, I remain at heart a WordPress plugin developer. And as such, I of course use <a href="https://trac.edgewall.org/"><code>Trac</code></a> for project management.</p>



<p>I use generative AI such as <a href="https://chatgpt.com/"><code>ChatGPT</code></a> and <a href="https://gemini.google.com/"><code>Gemini</code></a> not just for code prototyping, but at all stages of development, starting from analysis and going all the way to test generation and even deployment. Very often, especially during the feasibility/analysis stage of product development, I find that an entire answer from our AI overlords is so good that it can be copied verbatim into a ticket, as part of the analysis.</p>



<p>Chatbots output <a href="https://daringfireball.net/projects/markdown/"><code>markdown</code></a>, but <code>Trac</code> uses <a href="https://trac.edgewall.org/wiki/WikiFormatting"><code>TracWiki</code></a> as its markup format.</p>



<h2 class="wp-block-heading">pandoc writers</h2>



<p>There is no TracWiki <a href="https://pandoc.org/custom-writers.html">writer for <code>pandoc</code></a> that I know of. Writers are <a href="https://www.lua.org/"><code>lua</code></a> scripts that you can use with pandoc to extend its output format capabilities. A writer simply tells <code>pandoc</code> how to output different typographic elements such as paragraph, list, code block, headings, etc.</p>



<p><code>ChatGPT</code> is capable enough to create a rudimentary <code>TracWiki</code> writer for you, but it&#8217;s not perfect. It needs some manual work before it can be called complete. Paragraphs and lists are easy, but more complex elements, such as image links, require extra effort. A fun side-project that I may get to at some point in the future.</p>



<h2 class="wp-block-heading">ZimWiki ≈ TracWiki</h2>



<p>But chatbots only output very basic markdown. A custom writer is not necessary for copying simple markdown from <code>ChatGPT</code> or <code>Gemini</code> to <code>Trac</code>. Instead, you can leverage the fact that the <code>TracWiki</code> markup format is very similar to <a href="https://zim-wiki.org/"><code>ZimWiki</code></a>. And <code>pandoc</code> has a built-in <code>ZimWiki</code> writer. (Incidentally both formats are similar to <a href="http://www.wikicreole.org/"><code>WikiCreole</code></a>).</p>



<h2 class="wp-block-heading">Online tool</h2>



<p>There are two ways to do this. The simplest one is to use the online tool for trying out <code>pandoc</code>:</p>



<p><a href="https://pandoc.org/try">https://pandoc.org/try</a></p>



<p>Just copy the markdown and paste it into the tool, select the output format to be <code>zimwiki</code> and click <code>Convert</code>.</p>



<h2 class="wp-block-heading">Shell</h2>



<p>You can also do this on the command line. If you have <code>xclip</code> installed, you can do it with the following one-liner that you can store as an alias:</p>



<pre class="wp-block-code"><code>alias md2trac="xclip -selection clipboard -o | pandoc --from markdown --to zimwiki --no-highlight | xclip -selection clipboard"</code></pre>



<p>Now you can copy the markdown output from ChatGPT, run <code>md2trac</code>, then paste the clipboard contents directly into your <code>Trac</code> ticket or wiki page.</p>
<p>The post <a href="https://www.alexgeorgiou.gr/markdown-to-trac-wiki/">⎘ Copying markdown from ChatGPT or Gemini to TracWiki using pandoc</a> appeared first on <a href="https://www.alexgeorgiou.gr">Alexandros Georgiou</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.alexgeorgiou.gr/markdown-to-trac-wiki/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>🐈Heartwarming, adorable rescued kitten growing: Day 4 to day 34</title>
		<link>https://www.alexgeorgiou.gr/heartwarming-kitten-rescue-bottle-feeding/</link>
					<comments>https://www.alexgeorgiou.gr/heartwarming-kitten-rescue-bottle-feeding/#comments</comments>
		
		<dc:creator><![CDATA[alexg]]></dc:creator>
		<pubDate>Sat, 15 Jun 2024 15:37:19 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[kitten]]></category>
		<category><![CDATA[rescue]]></category>
		<category><![CDATA[video]]></category>
		<guid isPermaLink="false">https://www.alexgeorgiou.gr/?p=1646</guid>

					<description><![CDATA[<p>Heartwarming Kitten Rescue: 30 Days of Bottle-Feeding</p>
<p>The post <a href="https://www.alexgeorgiou.gr/heartwarming-kitten-rescue-bottle-feeding/">🐈Heartwarming, adorable rescued kitten growing: Day 4 to day 34</a> appeared first on <a href="https://www.alexgeorgiou.gr">Alexandros Georgiou</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" title="Heartwarming! Adorable Rescued Kitten Grow: Day 4 to Day 34" width="840" height="473" src="https://www.youtube.com/embed/Sosj7M6wleI?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div><figcaption class="wp-element-caption"><em>A collection of Joey&#8217;s videos from her first month of life.</em></figcaption></figure>



<p>This is Joey.</p>



<p>Joey is a kitten that was born prematurely, weighing just 85 grams, and was abandoned by her mom.</p>



<p>We found her the day she was born, and we have been bottle-feeding her.</p>



<p>Thanks to advice from the <a href="https://www.kittenlady.org/">Kitten Lady&#8217;s website</a>, and after a lot of sleepless days and nights, we are now looking hopefully toward her future.</p>



<p>The first few days were the hardest. We got a heating pad to keep her warm, fed her warm formula from a bottle every two hours, and treated her neonatal ophthalmia infection with an antibiotic. A runaway infection could have cost her an eye, but it is easily treatable with eye drops.</p>



<p>It&#8217;s been 40 days and she&#8217;s doing well, now starting to eat some meat along with her milk. She is also starting to use her non-clumping pellet litterbox, sometimes all by herself!</p>



<p>Joey is a fighter and a good climber. She feels very safe and loved in our home.</p>



<p>She&#8217;s growing to be an amazing cat, and she&#8217;s already more than 400 grams in weight. She loves to explore the house and to hang around with us.</p>



<p>Lately she has mastered an essential cat skill: walking on the computer keyboard.</p>



<p>Her most recent manuscript:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>1qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq`qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq2ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc222222222222222222222kkkkkkkkkkkkkkkkkkkkkkkkk &#8221;…..]]iiiiiiiiiiiiiiiiiiiiiiiiiim</p>
<cite>Joey, June 2024</cite></blockquote>



<p>Video music by <a href="https://pixabay.com/users/cookey12563-42738903/?utm_source=link-attribution&amp;utm_medium=referral&amp;utm_campaign=music&amp;utm_content=194811">Cookey12563</a> from <a href="https://pixabay.com//?utm_source=link-attribution&amp;utm_medium=referral&amp;utm_campaign=music&amp;utm_content=194811">Pixabay</a></p>
<p>The post <a href="https://www.alexgeorgiou.gr/heartwarming-kitten-rescue-bottle-feeding/">🐈Heartwarming, adorable rescued kitten growing: Day 4 to day 34</a> appeared first on <a href="https://www.alexgeorgiou.gr">Alexandros Georgiou</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.alexgeorgiou.gr/heartwarming-kitten-rescue-bottle-feeding/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>🎞 Engaging video shorts with subtitles, the open source way</title>
		<link>https://www.alexgeorgiou.gr/engaging-video-shorts-with-subtitles/</link>
					<comments>https://www.alexgeorgiou.gr/engaging-video-shorts-with-subtitles/#comments</comments>
		
		<dc:creator><![CDATA[alexg]]></dc:creator>
		<pubDate>Thu, 14 Mar 2024 12:01:13 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[AdMob]]></category>
		<category><![CDATA[advertising]]></category>
		<category><![CDATA[Aegisub]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[Audacity]]></category>
		<category><![CDATA[marketing]]></category>
		<category><![CDATA[OpenShot]]></category>
		<category><![CDATA[promotion]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[SubRip]]></category>
		<category><![CDATA[subtitles]]></category>
		<category><![CDATA[wordpress]]></category>
		<guid isPermaLink="false">https://www.alexgeorgiou.gr/?p=1518</guid>

					<description><![CDATA[<p>How I create engaging video shorts with subtitles for promoting my Android apps.</p>
<p>The post <a href="https://www.alexgeorgiou.gr/engaging-video-shorts-with-subtitles/">🎞 Engaging video shorts with subtitles, the open source way</a> appeared first on <a href="https://www.alexgeorgiou.gr">Alexandros Georgiou</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Today I share with you some of my most recent thoughts on marketing, followed by some technical advice on how to create video shorts. If you are just here for the technical advice on video shorts, scroll down.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="552" src="https://www.alexgeorgiou.gr/wp-content/uploads/2024/03/image-1-1024x552.png" alt="Using Audacity to add subtitles to the voice track of a video short ad." class="wp-image-1520" srcset="https://www.alexgeorgiou.gr/wp-content/uploads/2024/03/image-1-1024x552.png 1024w, https://www.alexgeorgiou.gr/wp-content/uploads/2024/03/image-1-300x162.png 300w, https://www.alexgeorgiou.gr/wp-content/uploads/2024/03/image-1-768x414.png 768w, https://www.alexgeorgiou.gr/wp-content/uploads/2024/03/image-1-1536x828.png 1536w, https://www.alexgeorgiou.gr/wp-content/uploads/2024/03/image-1.png 1600w" sizes="auto, (max-width: 599px) calc(100vw - 50px), (max-width: 767px) calc(100vw - 70px), (max-width: 991px) 429px, (max-width: 1199px) 637px, 354px" /><figcaption class="wp-element-caption"><em>Using Audacity to add subtitles to the voice track of a video short ad.</em></figcaption></figure>



<p>Lately I&#8217;ve been working less on product development, and more on promoting my existing products, mostly <a href="https://play.google.com/store/apps/developer?id=Alexandros+Georgiou" data-type="link" data-id="https://play.google.com/store/apps/developer?id=Alexandros+Georgiou" target="_blank" rel="noreferrer noopener">my Android apps</a>, but also my <a href="https://dashed-slug.net/" target="_blank" rel="noreferrer noopener">WordPress plugins</a>. One of the apps seems like it is beginning to take off organically just by being on Google Play, but I feel that all of them could benefit from a little extra push.</p>



<h2 class="wp-block-heading">I don&#8217;t hate marketing any more</h2>



<p>As much as I <a href="https://www.youtube.com/watch?v=tHEOGrkhDp0&amp;ab_channel=TyrellEdwards" data-type="link" data-id="https://www.youtube.com/watch?v=tHEOGrkhDp0&amp;ab_channel=TyrellEdwards" target="_blank" rel="noreferrer noopener">don&#8217;t love marketing or marketers</a>, after trying to do it myself, I learned a lot about what it entails, and I even gained a little appreciation towards the discipline (not a lot, mind you). Getting your product known to potential users is a necessity, and I now realize that what people mean when they say &#8220;I hate marketing&#8221;, is that they hate marketing that is done badly. With marketing, essentially, you&#8217;re just telling people &#8220;Hey, I made this, maybe you&#8217;re interested in it?&#8221; There&#8217;s nothing wrong with marketing when done right, as long as you don&#8217;t annoy people.</p>



<p>An example of right versus wrong kind of marketing: I see a lot of app developers fail because they use interstitial ads too much in their apps. These are the type of intrusive ads that have the highest payout, but also they annoy users the most (what with being intrusive and all). In my apps, I like to focus on serving banner ads, and only show interstitial ads for features that are beyond the main value proposition of my app. Interstitial ads are OK, if you first negotiate with your users, honestly and without deception, that they will sit through an annoying ad and then they&#8217;ll get something extra. Something that they didn&#8217;t expect to get when they downloaded the app. I&#8217;m not going to get any ad revenue if the user gets annoyed and uninstalls my app. On the other hand, if the app satisfies a need, a small banner at the bottom of the screen is not going to annoy users. A banner ad can generate significant revenue, once you hit a respectable number of users.</p>



<h2 class="wp-block-heading">Some key insights</h2>



<p>As for advertising my own apps, I&#8217;m only an amateur marketer, but already I&#8217;ve identified two key insights that are prevalent in the industry:</p>



<p>1. <span style="text-decoration: underline;">Video is king.</span></p>



<p>2. <span style="text-decoration: underline;">Only losers pay for sex and marketing.</span></p>



<p>The first one is obvious. The main message of your campaign should be available in all modes of media: text, images, and video. But video is king. It&#8217;s engaging. It&#8217;s rich. It sets its own pace. It stimulates two senses (sight and hearing). Reading is so last-century. Show-and-tell is more effective than a long article explaining things. <span style="text-decoration: underline;">Customers want solutions, not lectures.</span> (OK, that&#8217;s actually a third insight!)</p>



<p>The second insight is less obvious. Paying for ads works only marginally well, and only once you have an excellent product and an excellent, well-tuned ad. If you don&#8217;t know exactly what you&#8217;re doing, you can burn through your advertising budget pretty quickly with very poor results.</p>



<p>And if you don&#8217;t know what you&#8217;re doing, in my opinion you don&#8217;t want to rely on a marketing agency either. That&#8217;s even less efficient than just paying for ads. It&#8217;s best to learn the skill yourself. I&#8217;m generally a DIY type of guy; the only professional I rely on is my accountant.</p>



<p>DIY posting on social media can be more effective: It&#8217;s free. It&#8217;s organic. You get analytics feedback that helps you learn the skill. Your viewers let their guard down and give your message a chance. <a href="https://en.wikipedia.org/wiki/Banner_blindness" target="_blank" rel="noreferrer noopener">Banner blindness</a> is a thing.</p>



<h2 class="wp-block-heading">Shorts: not just a type of pants suitable for warm weather</h2>



<p>Video shorts are vertical videos that have an aspect ratio of 9:16, and are typically 1080&#215;1920 if you aim for HD. All video platforms now have them. They are suitable for viewing by teenagers, with a phone held vertically.</p>



<p>Every platform has its own set of technical requirements or recommendations. For compatibility with most platforms, I aim at 30 to 45 seconds. (Incidentally, this is the attention span of the average zoomer.)</p>



<h2 class="wp-block-heading">Here&#8217;s a few more insights to consider, in no particular order:</h2>



<ul class="wp-block-list">
<li>Start by writing the short video&#8217;s script before doing anything else. <strong>Use short sentences.</strong> Active voice. Simple vocabulary. You are potentially speaking to people who struggle with English, or who are young, or tired, or distracted. But even your more sophisticated viewers will respond better to direct messages.</li>



<li><strong>Here&#8217;s a good structure:</strong> Start with a hook (why should the viewer even look at your video and not skip?) This will be a joke, something interesting, a problem they may be facing, an engaging meme or video clip, a face, something. This must be done in the first 2 or 3 seconds of the video. Then, take a couple of seconds to introduce the name of the product. Then, in the body of your message, showcase the product, illustrating exactly three (3) main points about it. Three is a good number. Then repeat the name of the product. Repetition is key. Finally, close by showing the product again, (remember, repetition is key), and add your call to action. What do you want the customer to do? Go to your website? Download the app? Contact you? Where? Make it easy and direct. Add all the recognizing features of your product there. Branding (colors, fonts, tagline, icon, any graphics) should all be prominent in the last scene. If you are offering something for free, such as in a freemium business model, make sure to make FREE the last word, because that&#8217;s what people will remember.</li>



<li><strong>There is going to be a voice talking.</strong> It&#8217;s better if it&#8217;s a female voice. Females sound attractive to your male audience, and appear less threatening to your female audience. Nobody wants to listen to a guy mansplaining your product.</li>



<li>Voice is not enough. Many people have their phone&#8217;s volume on mute. <strong>Subtitles must be available.</strong> Every. Single. Word. Is. Its. Own. Separate. Subtitle. Subtitles must be near the center of the screen, but maybe shifted slightly to the bottom or top so as to leave space for you to showcase the product. Text must be in a heavy sans-serif font, in a bright color, and with a thin dark border. You want the text to be readable on all kinds of backgrounds. If the platform supports it, also add the subtitles as text, even though you&#8217;ve hard-coded them into the video. I heard people like subtitles, so I add subtitles to my subtitles.</li>



<li>All platforms let you <strong>add some text to your video</strong>. Again, keep it short and to the point. Preferably the text should not deviate too much from the transcript of the video, which is your main message. Repeat your message there in the same template. Repetition is key.</li>



<li>You can <strong>use memes to make your message more engaging</strong>. Memes are not copyrighted, but are easily recognizable. Adding an element of humor to your ad can go much further than spending a big budget on a marketing campaign.</li>



<li>Graphics and animations are what make your video look professional. You can get graphics from stock image repositories. Be mindful of any copyrights. </li>



<li>There is <a href="https://en.wikipedia.org/wiki/Color_psychology#Uses_in_marketing" target="_blank" rel="noreferrer noopener"><strong>theory behind colors</strong></a>. Blue and white conveys seriousness, etc. Same for fonts, design, etc. Know when to use serif versus sans serif fonts. You don&#8217;t have to be a graphics designer, but you should take some time to study the basics. Be aware of <a href="https://en.wikipedia.org/wiki/Visual_hierarchy" target="_blank" rel="noreferrer noopener">visual hierarchy</a>. Learn these things, and you&#8217;ll never have to talk to a &#8220;creative&#8221; person again in your life. Big plus!</li>



<li>You should likely <strong>add some background music at low volume</strong>. The music should not overpower the speech or distract from your message. Viewers must subconsciously associate your product with positive, upbeat feelings. The music must be upbeat, and must strike a stark contrast to their otherwise miserable depressed life. The music must also be somewhat repetitive, so as not to distract from your messaging. Bonus points if it has a hypnotic element (ads work best by hypnotizing the logical faculties and appealing to emotion). The music must appeal to a wide array of musical tastes. <strong>Acoustic guitar riffs are the cornerstone of advertising</strong>. You can find royalty free music on <a href="https://pixabay.com/" target="_blank" rel="noreferrer noopener">pixabay</a>. Be mindful of any usage rights. Sometimes the artist has submitted the music to a copyright ID platform, and you may have to request and submit usage rights. But things are simpler if you find music that is completely free to use.</li>
</ul>



<h2 class="wp-block-heading">Spam your ad to all the platforms that support video shorts</h2>



<p>Video costs a lot of time and effort to make. At my current skill level, one video short takes about a day of work. But once you invest this cost, your finished video can be thought of as capital that you own. You can show it repeatedly, for little marginal cost, on all the platforms where people typically go to dump their garbage or to watch other people&#8217;s trash:</p>



<ul class="wp-block-list">
<li>YouTube</li>



<li>Instagram</li>



<li>Pinterest</li>



<li>Tumblr</li>



<li>TikTok</li>



<li>Triller</li>



<li>Snapchat</li>



<li>Twitter</li>



<li>LinkedIn</li>
</ul>



<h2 class="wp-block-heading">Only losers pay for video editing</h2>



<p>Before you go and pirate Adobe Premiere from a torrent site, or pay some dude on Fiverr to make you a video, consider this: You can do it with open source software for zero cost, other than your time. And it&#8217;s a useful skill that you want to learn yourself, not delegate to some marketing-agency-dwelling type of creep. (Sorry, sorry, I promised I&#8217;m done with hating on marketing. I will be on my best behavior from now on.)</p>



<p>I was under the impression that <a href="https://en.wikipedia.org/wiki/OpenShot" target="_blank" rel="noreferrer noopener">OpenShot</a> was somewhat limited and buggy. That&#8217;s because I used to install it via the Ubuntu repositories on my machine. For some reason, the repository has an ancient 2.x version of OpenShot which is very buggy. It crashes often, and produces nasty sound artifacts. This is not the case with the latest version of OpenShot. You should take a minute to <a href="https://www.openshot.org/ppa/" target="_blank" rel="noreferrer noopener">add their official PPA</a> and install the latest stable version from there.</p>



<p>Learning OpenShot is a breeze. It&#8217;s popular, so there are countless tutorials on YouTube. Focus on learning how to do animations by adding keyframes to your media. I will discuss here only two technical aspects that I found required a little bit of digging: How to create video shorts and how to add subtitles.</p>



<h3 class="wp-block-heading">Shorts custom video profile</h3>



<p>When you start a video project, you want to define the output video format. There are several templates available, but none corresponds directly to the specifications that we want for a video short. We are going to create a custom profile. Create the file <code>~/.openshot_qt/profiles/shorts</code> with this content:</p>



<pre class="wp-block-code"><code>description=Short Vertical Video
frame_rate_num=24000
frame_rate_den=1000
width=1080
height=1920
progressive=1
sample_aspect_num=1
sample_aspect_den=1
display_aspect_num=9
display_aspect_den=16</code></pre>



<p>If you&#8217;re wondering what these mean, you can have a look at the <a href="https://www.openshot.org/static/files/user-guide/profiles.html#custom-profile" target="_blank" rel="noreferrer noopener">documentation for custom profiles</a>.</p>
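<p>If you prefer to do it in one go from a terminal, the whole file can be written like this (just a convenience; the path is the same one OpenShot reads custom profiles from, as described above):</p>

```shell
# Create OpenShot's custom profiles directory and write the
# "Short Vertical Video" profile into it in one step.
mkdir -p ~/.openshot_qt/profiles
cat > ~/.openshot_qt/profiles/shorts <<'EOF'
description=Short Vertical Video
frame_rate_num=24000
frame_rate_den=1000
width=1080
height=1920
progressive=1
sample_aspect_num=1
sample_aspect_den=1
display_aspect_num=9
display_aspect_den=16
EOF
```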



<p>Now you can start a new project with the &#8220;Short Vertical Video&#8221; profile.</p>



<h3 class="wp-block-heading">Creating subs, the DIY way</h3>



<p>There are, of course, a multitude of speech-to-text AI tools that will create subtitles for you automatically. One that works well is <a href="https://submagic.co/?via=alexandros80">SubMagic</a>. It offers many features including colors and emojis, and is well suited for social media videos.</p>



<p>But if your video is not too long, you can do it all by yourself. The advantage is that you don&#8217;t pay any money, and you are certain that there are no mistakes. You also have full control over where to break your sentences.</p>



<p>The best way to manually create subs is, surprisingly, not using a tool such as <a data-type="link" data-id="https://aegisub.org/" href="https://aegisub.org/" target="_blank" rel="noreferrer noopener">Aegisub</a>, although it too can work well. I&#8217;ve found that the best tool for the job is <a href="https://www.audacityteam.org/" target="_blank" rel="noreferrer noopener">Audacity</a>. Here&#8217;s how you do it:</p>



<ol class="wp-block-list">
<li>First, render the audio of your final video.</li>



<li>Load your audio in Audacity.</li>



<li>Go to <em>Tracks</em> → <em>Add New</em> → <em>Label Track</em>.</li>



<li>Select the first word. You can press <em>Shift</em>+<em>Space</em> to listen to your selection on repeat and ensure that you&#8217;ve selected the whole word and nothing more.</li>



<li>Press Ctrl+B to add a label.</li>



<li>In the box, type the word. In CAPITAL letters. This is for the TikTok generation. Don&#8217;t use punctuation unless absolutely necessary.</li>



<li>Deselect your selection, and select the next word. Audacity will help you by snapping the beginning of your selection to the end of the label you just entered.</li>



<li>Repeat steps 4 to 7 until you have done the entire length of the track. It gets easier as you learn what each phoneme looks like as a wave on the screen. For a 30 to 45 second video, it&#8217;s very doable.</li>



<li>Save your project as an <code>.aup</code> file, because you may want to go back to it and make changes, especially if you made a mistake.</li>



<li>Go to <em>File</em> → <em>Export</em> → <em>Export labels</em>. Save your labels to a text file. This file is not yet in a format that can be imported into OpenShot.</li>



<li>Go to <a href="https://magcius.github.io/audaciter/" target="_blank" rel="noreferrer noopener">https://magcius.github.io/audaciter/</a> and use the tool to convert your labels file to a <a href="https://en.wikipedia.org/wiki/SubRip" data-type="link" data-id="https://en.wikipedia.org/wiki/SubRip" target="_blank" rel="noreferrer noopener">SubRip (<code>.srt</code>)</a> file. SubRip is <em>almost</em> the format that we want to use in OpenShot, but not quite.</li>



<li>In SubRip, each entry starts with a line containing the numerical index of the entry, then a line with the time range (two timestamps separated by <code>--&gt;</code>), and finally one or more lines of text, followed by a blank line. OpenShot doesn&#8217;t like the line with the numerical index, so we remove it from our <code>.srt</code> file. You can do this by replacing every line that matches the regular expression <code>^\d+$</code> with the empty string.</li>



<li>Go to your video project in OpenShot. With a finished video, you will have several tracks full of small fragments of video clips, audio clips, and graphics being animated. You don&#8217;t want to attach your subs to any of that mess. Create a separate track for your subs.</li>



<li>In the new track, add a 1080&#215;1920 PNG image that is completely transparent. <a href="https://www.alexgeorgiou.gr/wp-content/uploads/2024/03/transparency.png">Here&#8217;s one.</a></li>



<li>Stretch the image so it &#8220;appears&#8221; over the entire length of the video. (Of course, if your image is truly transparent, it won&#8217;t actually appear.)</li>



<li>Ensure that your cursor is positioned at the beginning of your track. (Use <em>Ctrl</em>+← to navigate to the beginning). If you don&#8217;t do this, when you later edit the Caption properties you will create a new keyframe, and you don&#8217;t want that. You want the properties that you&#8217;re about to enter to apply to the entire track.</li>



<li>Go to the <em>Effects</em> tab, and drag the <em>Caption</em> effect onto the transparent image in your subtitles track.</li>



<li>The letter <em>C</em> will appear on the track. Click on it to edit the properties of the <em>Captions</em> effect.</li>



<li>I will share here the settings that I like to use in my videos. You can experiment with other settings of course. Font: <code>Arial Black Bold 100pt</code> (or any heavy font). Font size: <code>65</code>. Font alpha: <code>255</code>. Font color: <code>Yellow #FFFF00</code>. Stroke width: <code>3</code>. Border: <code>black #000000</code>. Left size: <code>0</code>. Right size: <code>0</code>. Top size: <code>0.75</code>.</li>



<li>Now go to the right side of the screen, and paste the SubRip subs (with the index lines removed).</li>



<li>Watch the video to ensure that all the subs are shown. If a line is too wide, it will not be shown. So check the lines with the most text. If a line is not shown, either break it into two, or reduce the font size. Your lines should be individual words, so they should not be too long.</li>
</ol>
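<p>The index-line removal step above can also be done with a <code>sed</code> one-liner, if you prefer the terminal over your text editor. Here is a sketch, using a tiny sample <code>.srt</code> for illustration (your real file will of course come from the audaciter conversion, and the filenames are placeholders):</p>

```shell
# A small sample of what the .srt looks like after conversion.
cat > subs.srt <<'EOF'
1
00:00:00,000 --> 00:00:00,500
HELLO

2
00:00:00,500 --> 00:00:01,000
WORLD
EOF

# Strip any Windows line endings, then delete the purely numeric
# index lines, keeping only the time ranges and the subtitle text.
sed -E 's/\r$//; /^[0-9]+$/d' subs.srt > subs-openshot.srt
```

The result in <code>subs-openshot.srt</code> is ready to paste into OpenShot&#8217;s <em>Caption</em> effect.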



<p>That&#8217;s it. You can now render your ad and spam-post it on all the aforementioned platforms. Add it into your articles, <a data-type="link" data-id="https://yoast.com/on-page-video-seo/" href="https://yoast.com/on-page-video-seo/">above the fold</a>. Then, post links to your video shorts again and again. Repetition is key.</p>
<p>The post <a href="https://www.alexgeorgiou.gr/engaging-video-shorts-with-subtitles/">🎞 Engaging video shorts with subtitles, the open source way</a> appeared first on <a href="https://www.alexgeorgiou.gr">Alexandros Georgiou</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.alexgeorgiou.gr/engaging-video-shorts-with-subtitles/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>📦 Two dockerized WordPress sites, with Let&#8217;s Encrypt, logging, SMTP relay, controlled by a systemd service, and daily backups</title>
		<link>https://www.alexgeorgiou.gr/two-dockerized-wordpress-sites/</link>
					<comments>https://www.alexgeorgiou.gr/two-dockerized-wordpress-sites/#respond</comments>
		
		<dc:creator><![CDATA[alexg]]></dc:creator>
		<pubDate>Wed, 20 Dec 2023 10:24:43 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[backup]]></category>
		<category><![CDATA[compose]]></category>
		<category><![CDATA[cron]]></category>
		<category><![CDATA[database]]></category>
		<category><![CDATA[digitalocean]]></category>
		<category><![CDATA[DNS]]></category>
		<category><![CDATA[docker]]></category>
		<category><![CDATA[letsencrypt]]></category>
		<category><![CDATA[mysqldump]]></category>
		<category><![CDATA[nginx]]></category>
		<category><![CDATA[reverse proxy]]></category>
		<category><![CDATA[SMTP]]></category>
		<category><![CDATA[SQL]]></category>
		<category><![CDATA[wordpress]]></category>
		<guid isPermaLink="false">https://www.alexgeorgiou.gr/?p=1311</guid>

					<description><![CDATA[<p>Or, How I learned to stop worrying and love docker compose.</p>
<p>The post <a href="https://www.alexgeorgiou.gr/two-dockerized-wordpress-sites/">📦 Two dockerized WordPress sites, with Let&#8217;s Encrypt, logging, SMTP relay, controlled by a systemd service, and daily backups</a> appeared first on <a href="https://www.alexgeorgiou.gr">Alexandros Georgiou</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>In this article I&#8217;m going to talk about how I set up two WordPress sites on one server. None of the articles I could find covered all the topics I was interested in. Not exactly groundbreaking; in fact, it sounds simple. But the devil is in the details. Performing such a setup for the first time is pretty daunting. From setting up the DNS records, to getting file permissions to work, to getting the reverse proxy right, it&#8217;s all a complicated mess that I&#8217;m going to delineate for you (and me) here, while it&#8217;s still fresh in my head.</p>



<h1 class="wp-block-heading">Features</h1>



<ul class="wp-block-list">
<li>Two WordPress sites: <code>https://www.example1.com</code> and <code>https://www.example2.com</code>.</li>



<li>Redirects from <code>https://example1.com</code>, <code>http://example1.com</code>, and <code>http://www.example1.com</code> to <code>https://www.example1.com</code> (and the same for <code>example2.com</code>).</li>



<li>Let&#8217;s Encrypt certificates.</li>



<li>WordPress debug logging with logrotate. I have ranted previously about <a href="https://www.alexgeorgiou.gr/wordpress-production-to-log-or-not-to-log/">why I think having debug logging turned on is important on live sites</a>.</li>



<li>Emails must work in WordPress.</li>



<li>The containers must run as a service, so that they start with system start, and exit gracefully on system shutdown.</li>



<li>Daily backups of all WordPress files and MySQL databases.</li>



<li>Networks of the two sites should be isolated for security.</li>
</ul>



<h1 class="wp-block-heading">Shameless plug of my referral links</h1>



<p>We start with a hosted server. This can be a dedicated server or a server slice. Hosting providers that I like are:</p>



<ul class="wp-block-list">
<li><a href="https://www.digitalocean.com/?refcode=44d4d2184573">DigitalOcean</a> &#8211; Using this link you get a $200, 60-day credit to try their products. If you spend $25 after your credit expires, I will get $25 in credit.</li>



<li><a href="https://hostinger.com/?REFERRALCODE=1ALEXANDROS15">Hostinger</a> &#8211; You don&#8217;t get anything with this link, except for a great hosting service. Again, I get a commission from this link if you stick with Hostinger for 45 days. Think of it as my reward for writing such a great article for you.</li>
</ul>



<p>I have a Debian droplet on DigitalOcean with 2GB of RAM, but with some tweaking it&#8217;s possible to squeeze two low-traffic WordPress sites into 1GB, if you really need to keep the monthly costs down.</p>



<h1 class="wp-block-heading">First let&#8217;s get the (DNS) record straight</h1>



<p>The first order of business is to set up the DNS records. We&#8217;re going to need two <code>A</code> records to point to our server&#8217;s IP, and two <code>CNAME</code> records that will be <code>www.</code> aliases of the bare domains. Oh, and we&#8217;ll need some <code>NS</code> records to point to the domain name provider (in this case DigitalOcean).</p>



<figure class="wp-block-table"><table><thead><tr><th>Type</th><th>Hostname</th><th>Value</th><th>TTL</th></tr></thead><tbody><tr><td><code>A</code></td><td><code>example1.com</code></td><td>(my server&#8217;s IP)</td><td>1800</td></tr><tr><td><code>A</code></td><td><code>example2.com</code></td><td>(my server&#8217;s IP)</td><td>1800</td></tr><tr><td><code>CNAME</code></td><td><code>www.example1.com</code></td><td>alias of <code>example1.com.</code></td><td>1800</td></tr><tr><td><code>CNAME</code></td><td><code>www.example2.com</code></td><td>alias of <code>example2.com.</code></td><td>1800</td></tr><tr><td><code>NS</code></td><td><code>example1.com</code></td><td><code>ns1.digitalocean.com</code></td><td>14400</td></tr><tr><td><code>NS</code></td><td><code>example1.com</code></td><td><code>ns2.digitalocean.com</code></td><td>14400</td></tr><tr><td><code>NS</code></td><td><code>example1.com</code></td><td><code>ns3.digitalocean.com</code></td><td>14400</td></tr><tr><td><code>NS</code></td><td><code>example2.com</code></td><td><code>ns1.digitalocean.com</code></td><td>14400</td></tr><tr><td><code>NS</code></td><td><code>example2.com</code></td><td><code>ns2.digitalocean.com</code></td><td>14400</td></tr><tr><td><code>NS</code></td><td><code>example2.com</code></td><td><code>ns3.digitalocean.com</code></td><td>14400</td></tr></tbody></table></figure>
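<p>For illustration only, the same records for <code>example1.com</code> in BIND zone-file notation would look roughly like this (DigitalOcean&#8217;s control panel manages the records for you, so you never actually write this file; the IP is a placeholder):</p>

<pre class="wp-block-code"><code>example1.com.      1800   IN  A      203.0.113.10   ; placeholder for my server's IP
www.example1.com.  1800   IN  CNAME  example1.com.
example1.com.     14400   IN  NS     ns1.digitalocean.com.
example1.com.     14400   IN  NS     ns2.digitalocean.com.
example1.com.     14400   IN  NS     ns3.digitalocean.com.</code></pre>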



<p>I like to keep the TTL (Time-To-Live) values low until I&#8217;m finished with my setup. I&#8217;ve set everything to <code>1800</code> seconds which is half an hour. Once I&#8217;m sure that everything is OK, I can increase the values to something larger like <code>14400</code> (four hours).</p>



<h1 class="wp-block-heading">ssh</h1>



<p>We are going to need to be able to log in to the server with a passwordless setup.</p>



<p>Login as root to the new server via the admin console.</p>



<p>Create a regular user with <code>adduser</code>:</p>



<pre class="wp-block-code"><code>adduser yourusername</code></pre>



<p>Then add the user to sudoers with:</p>



<pre class="wp-block-code"><code>usermod -aG sudo yourusername</code></pre>



<p>(Replace <code>yourusername</code> with your username.)</p>



<p>Once we are on our local machine, we check if we already have an ssh key with:</p>



<pre class="wp-block-code"><code>ls -al ~/.ssh/id_*.pub</code></pre>



<p>If we don&#8217;t have any, we can generate one with:</p>



<pre class="wp-block-code"><code>ssh-keygen -t rsa -b 4096 -C "your_email@domain.com"</code></pre>



<p>Once we are sure that there is a key, we upload it to the new server with:</p>



<pre class="wp-block-code"><code>ssh-copy-id yourusername@server_ip_address</code></pre>



<p>(Again replace <code>yourusername</code> with your remote username, and <code>server_ip_address</code> with your server&#8217;s IP address. You will need to enter the password you set in <code>adduser</code>.)</p>
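<p>Optionally, once you have confirmed that key-based login works, you can disable password logins altogether. This is a common hardening step; the sketch below assumes the stock Debian <code>sshd</code> configuration at <code>/etc/ssh/sshd_config</code>:</p>



<pre class="wp-block-code"><code>PasswordAuthentication no
PermitRootLogin prohibit-password</code></pre>



<p>Apply it with <code>sudo service ssh restart</code>, and make sure you can still log in with your key from a <em>second</em> terminal before closing the first one, or you may lock yourself out.</p>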



<h1 class="wp-block-heading">Docker compose</h1>



<p>First, let&#8217;s install docker on the server by following the <a href="https://docs.docker.com/engine/install/debian/" target="_blank" rel="noreferrer noopener">installation instructions for Debian</a>. I am not going to repeat the instructions here. If you have chosen a different distro, follow the respective instructions.</p>



<p>We are going to create a <code>docker-compose.yml</code> file. This file describes how the different docker containers are orchestrated.</p>



<p>We are going to need four containers:</p>



<ul class="wp-block-list">
<li>Two databases for the two sites.</li>



<li>Two WordPress installations.</li>
</ul>



<p>I&#8217;m first going to show some simple compose configs with the basics, then we are going to add the bells and whistles. Here goes:</p>



<h2 class="wp-block-heading">Two databases, sitting in a server</h2>



<pre class="wp-block-code"><code>version: "3.8"

name: droplet

networks:
    net1:
    net2:

volumes:
  db1volume:
  db2volume:

services:

  db1:
    image: mysql:8.2.0
    networks:
      - net1
    restart: unless-stopped
    expose:
      - "3306"
    volumes:
      - db1volume:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: wp1_root_pass
      MYSQL_DATABASE: wp_db1
      MYSQL_USER: db1_user
      MYSQL_PASSWORD: db1_pass

  db2:
    image: mysql:8.2.0
    networks:
      - net2
    restart: unless-stopped
    expose:
      - "3306"
    volumes:
      - db2volume:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: wp2_root_pass
      MYSQL_DATABASE: wp_db2
      MYSQL_USER: db2_user
      MYSQL_PASSWORD: db2_pass</code></pre>



<p>There&#8217;s already a lot going on here:</p>



<ul class="wp-block-list">
<li>We are defining our composition to have a name. Here I am using <code>droplet</code>. This will also be the prefix for the names of all the containers.</li>



<li>We are defining two networks, <code>net1</code> and <code>net2</code>. Only containers on the same network can talk to each other. We don&#8217;t want our <code>example1.com</code> WordPress to have any access to the MySQL database of <code>example2.com</code>.</li>



<li>Next we are defining two identical <code>mysql:8.2.0</code> containers, named <code>db1</code> and <code>db2</code>.</li>



<li>Each of the two databases is put in its respective network (<code>net1</code> and <code>net2</code>).</li>



<li>We want a database that has crashed to restart, unless we explicitly stop it.</li>



<li>We are going to let the databases listen to TCP port <code>3306</code>. This is the port where WordPress will connect. All other ports are firewalled.</li>



<li>We are going to mount the <code>/var/lib/mysql</code> directories into docker volumes named <code>db1volume</code> and <code>db2volume</code>.</li>



<li>Next we are going to use some environment variables that the startup script inside the mysql image recognizes. These will set up a root password, a new empty database, and a username/password pair that WordPress will use to access this new database. The startup script will do all the <code>CREATE DATABASE</code>, <code>CREATE USER</code> and <code>GRANT</code> magic for us. You can learn more about the MySQL docker image <a href="https://dev.mysql.com/doc/mysql-installation-excerpt/8.2/en/docker-mysql-more-topics.html">here</a>.</li>
</ul>
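<p>Before adding anything else, it&#8217;s worth sanity-checking the file. <code>docker compose config</code> parses the YAML and prints the fully resolved configuration, failing loudly on syntax errors (run it from the directory that contains <code>docker-compose.yml</code>):</p>



<pre class="wp-block-code"><code>docker compose config</code></pre>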



<h2 class="wp-block-heading">A tale of two WordPresses</h2>



<p>Next, let&#8217;s also add the two WordPress services (these also go under the services section along with the databases):</p>



<pre class="wp-block-code"><code>  wp1:
    image: wordpress:latest
    networks:
      - net1
    depends_on:
      - db1
    restart: unless-stopped
    expose:
      - "80"
    volumes:
      - ./wp1fs:/var/www/html
    ports:
      - "127.0.0.1:8101:80"
    environment:
      WORDPRESS_DB_HOST: db1:3306
      WORDPRESS_DB_NAME: wp_db1
      WORDPRESS_DB_USER: db1_user
      WORDPRESS_DB_PASSWORD: db1_pass
      WORDPRESS_DEBUG: "true"

  wp2:
    image: wordpress:latest
    networks:
      - net2
    depends_on:
      - db2
    restart: unless-stopped
    expose:
      - "80"
    volumes:
      - ./wp2fs:/var/www/html
    ports:
      - "127.0.0.1:8102:80"
    environment:
      WORDPRESS_DB_HOST: db2:3306
      WORDPRESS_DB_NAME: wp_db2
      WORDPRESS_DB_USER: db2_user
      WORDPRESS_DB_PASSWORD: db2_pass
      WORDPRESS_DEBUG: "true"</code></pre>



<ul class="wp-block-list">
<li>We have named the two WordPress containers <code>wp1</code> and <code>wp2</code> and assigned them to our two networks, <code>net1</code> and <code>net2</code>.</li>



<li>We have defined that these <em>depend</em> on their respective databases to function.</li>



<li>We have defined that these containers are to be <em>restarted</em> if they crash, but not if we explicitly stop them.</li>



<li>We are exposing only HTTP port <code>80</code> to the networks. All other ports are firewalled. We are not exposing port <code>443</code> here. TLS encryption will be done at the host level that will run the reverse proxy (see below).</li>



<li>We are mounting two local directories here <code>./wp1fs</code> and <code>./wp2fs</code>. These will contain the WordPress installations. The first time that the containers run, WordPress will be installed in them. A special <code>wp-config.php</code> file will be placed in there. This file pulls the DB connection settings from the environment variables that we specify below.</li>



<li>We are port-mapping the HTTP <code>80</code> ports to the host&#8217;s ports <code>8101</code> and <code>8102</code>. These are the ports that the reverse proxy will use. They are bound to the loopback network (<code>127.0.0.1</code>), and are therefore not exposed to the outside world. If we had used just <code>8101:80</code>, this would map port 80 of the container to port <code>8101</code> of the host on all network interfaces, including the one facing the outside world. This is not ideal. We only want access to our services through our reverse proxy.</li>



<li>The <code>WORDPRESS_*</code> environment variables are specific to this wordpress image. We specify the databases, the login credentials that we also specified above, and we turn on debug logging. To learn more about these environment variables, click <a href="https://github.com/docker/awesome-compose/tree/master/official-documentation-samples/wordpress/" target="_blank" rel="noreferrer noopener">here</a>.</li>
</ul>



<p><em>NOTE: I have made the decision here to put the databases into named docker volumes (these usually live under <code>/var/lib/docker/volumes</code> and can be shared between containers), while the WordPress filesystems are bind-mounted into the local directories <code>./wp1fs</code> and <code>./wp2fs</code>. If you prefer to have everything under <code>/var/lib/docker/volumes</code>, drop the <code>./</code> prefix (and declare <code>wp1fs</code> and <code>wp2fs</code> in the top-level <code>volumes:</code> section), turning them into named volumes too.</em></p>



<h2 class="wp-block-heading">The bells and whistles</h2>



<p>If you thought that&#8217;s enough, <strong>you are gravely mistaken</strong>. Here&#8217;s a few more things to take care of:</p>



<h3 class="wp-block-heading">Database collation</h3>



<p>We are going to give the databases a UTF-8 multibyte collation for Unicode support. Under the database services, next to the environment variables, we are going to add an explicit <code>mysqld</code> command:</p>



<pre class="wp-block-code"><code>command: "mysqld --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci"</code></pre>



<p>And under the WordPress services, we are going to add the following environment variable:</p>



<pre class="wp-block-code"><code>  WORDPRESS_DB_COLLATE: utf8mb4_unicode_ci</code></pre>
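<p>If you want to double-check that the settings took effect, you can later run the following in the MySQL console (see further below for how to get a console into the containers):</p>



<pre class="wp-block-code"><code>SHOW VARIABLES LIKE 'character_set_server';
SHOW VARIABLES LIKE 'collation_server';</code></pre>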



<h2 class="wp-block-heading">File permissions</h2>



<p>If we run the above containers, WordPress won&#8217;t be able to install or remove any themes or plugins, and it won&#8217;t be able to do anything that requires writing to the file system.</p>



<p>This is because, in the WordPress images, the user that runs apache has a different uid and gid than the owner of the mounted files. The files are owned by <code>uid</code> <code>1000</code> and <code>gid</code> <code>1000</code>. We can specify that the user running stuff inside the container has the same numeric ids. To do this, we add the following to the two WordPress services:</p>



<pre class="wp-block-code"><code>user: 1000:1000</code></pre>
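<p>The <code>1000:1000</code> pair is the typical uid/gid of the first regular user on a Debian install, but it&#8217;s worth verifying your own values before hardcoding them. A quick check on the host:</p>



<pre class="wp-block-code"><code># numeric uid:gid of the current user
echo "$(id -u):$(id -g)"

# numeric uid:gid owning the current directory
stat -c '%u:%g' .</code></pre>



<p>If your ids differ, use those values in the <code>user:</code> clause instead.</p>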



<h2 class="wp-block-heading">Database memory</h2>



<p>By default, a mysql instance will take up at least 360MB of memory once it&#8217;s running. Most of that goes to the Performance Schema instruments.</p>



<p>The Performance Schema is a database that keeps track of the mysqld server&#8217;s performance, and is useful for diagnostics. If you are not going to use this feature, then you can turn it off. The memory usage of each DB container will then fall to a little over 100MB.</p>



<p>We are going to create a file named <code>disable-perf-schema.cnf</code> with the following contents:</p>



<pre class="wp-block-code"><code>&#91;mysqld]
performance_schema = OFF</code></pre>



<p>This will be added to the mysql server&#8217;s config files. The server includes any <code>.cnf</code> files in the <code>/etc/mysql/conf.d</code> directory into its configuration. We can use the volumes section to map this file into our two db containers:</p>



<pre class="wp-block-code"><code>volumes:
  - db1volume:/var/lib/mysql
  - ./disable-perf-schema.cnf:/etc/mysql/conf.d/disable-perf-schema.cnf

volumes:
  - db2volume:/var/lib/mysql
  - ./disable-perf-schema.cnf:/etc/mysql/conf.d/disable-perf-schema.cnf</code></pre>



<p>There are more hacks to reduce the memory usage of mysqld, but these are beyond the scope of this article. For example, you can look into reducing the InnoDB buffer pool size.</p>
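<p>As a taste of what such tuning looks like, the buffer pool size can be set in the same <code>.cnf</code> file. The <code>32M</code> below is an illustrative low-memory value, not a recommendation; size it to your actual workload:</p>



<pre class="wp-block-code"><code>&#91;mysqld]
performance_schema = OFF
innodb_buffer_pool_size = 32M</code></pre>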



<h2 class="wp-block-heading">Log rotate</h2>



<p>We have enabled debug logging, because <a href="https://www.alexgeorgiou.gr/wordpress-production-to-log-or-not-to-log/">reasons</a>. This is cool, but the <code>/var/www/html/wp-content/debug.log</code> files will eventually fill up our containers if left unchecked. Enter <code>logrotate</code> to the rescue:</p>



<p>We are going to create a file named <code>wordpress.logrotate</code> with the following content:</p>



<pre class="wp-block-code"><code>/var/www/html/wp-content/debug.log
{
        su 1000 1000
        rotate 24
        copytruncate
        weekly
        missingok
        notifempty
        compress
}</code></pre>



<p>This will rotate the log weekly, gzip the rotated copies, and delete anything older than 24 rotations. If you are not sure about the details, ChatGPT and Bard can explain exactly what each line does.</p>



<p>Note how we again use the <code>uid</code> and <code>gid</code> of the WordPress image.</p>



<p>Let&#8217;s mount this file into our WordPress containers, by adding a line to their volume clause:</p>



<pre class="wp-block-code"><code>volumes:
  - ./wp1fs:/var/www/html
  - ./wordpress.logrotate:/etc/logrotate.d/wordpress

volumes:
  - ./wp2fs:/var/www/html
  - ./wordpress.logrotate:/etc/logrotate.d/wordpress</code></pre>



<h1 class="wp-block-heading">Docker compose recap</h1>



<p>We now have the following <code>docker-compose.yml</code> file:</p>



<pre class="wp-block-code"><code>version: "3.8"

name: droplet

networks:
    net1:
    net2:

volumes:
  db1volume:
  db2volume:

services:

  db1:
    image: mysql:8.2.0
    networks:
      - net1
    restart: unless-stopped
    expose:
      - "3306"
    volumes:
      - db1volume:/var/lib/mysql
      - ./disable-perf-schema.cnf:/etc/mysql/conf.d/disable-perf-schema.cnf
    environment:
      MYSQL_ROOT_PASSWORD: wp1_root_pass
      MYSQL_DATABASE: wp_db1
      MYSQL_USER: db1_user
      MYSQL_PASSWORD: db1_pass
    command: "mysqld --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci --performance-schema-instrument='%=OFF' --innodb-buffer-pool-size=32M"

  db2:
    image: mysql:8.2.0
    networks:
      - net2
    restart: unless-stopped
    expose:
      - "3306"
    volumes:
      - db2volume:/var/lib/mysql
      - ./disable-perf-schema.cnf:/etc/mysql/conf.d/disable-perf-schema.cnf
    environment:
      MYSQL_ROOT_PASSWORD: wp2_root_pass
      MYSQL_DATABASE: wp_db2
      MYSQL_USER: db2_user
      MYSQL_PASSWORD: db2_pass
    command: "mysqld --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci --performance-schema-instrument='%=OFF' --innodb-buffer-pool-size=32M"

  wp1:
    image: wordpress:latest
    networks:
      - net1
    depends_on:
      - db1
    user: 1000:1000
    restart: unless-stopped
    expose:
      - "80"
    volumes:
      - ./wp1fs:/var/www/html
      - ./wordpress.logrotate:/etc/logrotate.d/wordpress
    ports:
      - "127.0.0.1:8101:80"
    environment:
      WORDPRESS_DB_HOST: db1:3306
      WORDPRESS_DB_NAME: wp_db1
      WORDPRESS_DB_USER: db1_user
      WORDPRESS_DB_PASSWORD: db1_pass
      WORDPRESS_DB_COLLATE: utf8mb4_unicode_ci
      WORDPRESS_DEBUG: "true"

  wp2:
    image: wordpress:latest
    networks:
      - net2
    depends_on:
      - db2
    user: 1000:1000
    restart: unless-stopped
    expose:
      - "80"
    volumes:
      - ./wp2fs:/var/www/html
      - ./wordpress.logrotate:/etc/logrotate.d/wordpress
    ports:
      - "127.0.0.1:8102:80"
    environment:
      WORDPRESS_DB_HOST: db2:3306
      WORDPRESS_DB_NAME: wp_db2
      WORDPRESS_DB_USER: db2_user
      WORDPRESS_DB_PASSWORD: db2_pass
      WORDPRESS_DB_COLLATE: utf8mb4_unicode_ci
      WORDPRESS_DEBUG: "true"</code></pre>



<p>We can start this with <code>docker compose up</code> (we must first <code>cd</code> into the same directory as the <code>.yml</code> file).</p>



<p>We can see if it&#8217;s running with <code>docker compose ls</code>, and we can see the containers with <code>docker container ls</code>.</p>



<p>We can inspect memory usage with <code>docker stats</code>.</p>



<p>We can stop the containers with <code>docker compose down</code>.</p>



<p>If we also want to wipe the database volumes and start over, we can do <code>docker compose down -v</code> (DESTRUCTIVE!!!).</p>



<p>We can go into the shell of the first database with:</p>



<pre class="wp-block-code"><code>docker exec -it droplet-db1-1 bash</code></pre>



<p>And then, we can go into the mysql console with</p>



<pre class="wp-block-code"><code>mysql -u root -pwp1_root_pass</code></pre>



<p>We can go into the shell of the first WordPress with:</p>



<pre class="wp-block-code"><code>docker exec -it droplet-wp1-1 bash</code></pre>



<p>If we need to, we can install wp-cli using instructions from <a href="https://wp-cli.org/" target="_blank" rel="noreferrer noopener">https://wp-cli.org/</a>. The copy of <code>wp-cli</code> will not be persisted in the container across restarts. (Note: it&#8217;s possible to add special containers with <code>wp-cli</code> pre-installed, but again this is beyond the scope of this article. For more information, see the CLI images <a href="https://hub.docker.com/_/wordpress/">here</a>.)</p>



<h1 class="wp-block-heading">DaaS (Docker-as-a-Service)</h1>



<p>We don&#8217;t want to have to issue <code>docker compose up</code> every time the server starts, and <code>docker compose down</code> every time the server stops. Let&#8217;s create a <code>systemd</code> unit, so that it runs as a service.</p>



<p>We&#8217;ll create a file named <code>/etc/systemd/system/docker-compose.service</code> with the following carefully crafted contents:</p>



<pre class="wp-block-code"><code>&#91;Unit]
Description=A bunch of containers
After=docker.service
Requires=docker.service

&#91;Service]
Type=oneshot
RemainAfterExit=yes
User=yourusername
ExecStart=/bin/bash -c "docker compose -f /home/yourusername/docker-compose.yml up --detach"
ExecStop=/bin/bash -c "docker compose -f /home/yourusername/docker-compose.yml stop"

&#91;Install]
WantedBy=multi-user.target</code></pre>



<ul class="wp-block-list">
<li>Replace <code>yourusername</code> with your username (duh!).</li>



<li>Replace the description with something less silly (optional).</li>



<li>Note how we only start this service <em>after</em> the docker service starts.</li>



<li>Note that we do a <code>--detach</code>. This will start the containers in the background and exit, without showing the logs of all the containers in the standard output.</li>
</ul>
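<p>One more step that&#8217;s easy to forget: the unit must be registered with <code>systemd</code> and enabled before it will start at boot:</p>



<pre class="wp-block-code"><code>sudo systemctl daemon-reload
sudo systemctl enable docker-compose.service</code></pre>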



<p>We can now start the service with</p>



<pre class="wp-block-code"><code>sudo service docker-compose start</code></pre>



<p>And stop it with</p>



<pre class="wp-block-code"><code>sudo service docker-compose stop</code></pre>



<p>If we want to see the logs of all the containers, we can type</p>



<pre class="wp-block-code"><code>docker compose logs -f</code></pre>



<p>We should now be able to do <code>curl http://127.0.0.1:8101</code> and see the HTML of the front page of the first WordPress.</p>



<h1 class="wp-block-heading">The reverse proxy</h1>



<p>The database and WordPress containers are running, but they are not yet exposed to the outside world. To do this, we are going to use <code>nginx</code> as a reverse proxy.</p>



<p>The reverse proxy will:</p>



<ul class="wp-block-list">
<li>handle all the redirects that we need</li>



<li>expose the apache2 servers to the outside world</li>



<li>handle the TLS encryption</li>
</ul>



<p>First we setup <a href="https://letsencrypt.org/">Let&#8217;s Encrypt</a>. How to do this is beyond the scope of this article. You can look <a href="https://www.nginx.com/blog/using-free-ssltls-certificates-from-lets-encrypt-with-nginx/">here</a> for a good introduction.</p>



<p>The bottom line is that <code>certbot</code> must be installed, and the following public and private certificate files must exist on your server (host):</p>



<pre class="wp-block-code"><code>/etc/letsencrypt/live/example1.com/fullchain.pem
/etc/letsencrypt/live/example1.com/privkey.pem
/etc/letsencrypt/live/example2.com/fullchain.pem
/etc/letsencrypt/live/example2.com/privkey.pem</code></pre>



<p>These files are actually symlinks to the latest certificate issued. This is all handled by certbot.</p>
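<p>Certificates from Let&#8217;s Encrypt expire every 90 days, so it&#8217;s worth confirming that automatic renewal is working. <code>certbot</code> can simulate a renewal without touching the real certificates:</p>



<pre class="wp-block-code"><code>sudo certbot renew --dry-run</code></pre>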



<p>Let&#8217;s start to create an nginx config file, which we will place in <code>/etc/nginx/sites-available/reverse-proxy.conf</code>.</p>



<p>We are going to enter several server <em>stanzas</em>. Keep in mind that nginx picks the stanza whose <code>server_name</code> best matches the request&#8217;s <code>Host</code> header: an exact name wins over a wildcard, regardless of the order of the stanzas in the file.</p>



<h2 class="wp-block-heading">Redirects from http to https</h2>



<p>First, we want any unencrypted requests to port <code>80</code> to do a soft redirect to our <code>https://www.</code> sites.</p>



<pre class="wp-block-code"><code>server {
    listen       80;
    listen       &#91;::]:80;
    server_name example1.com;
    return 302 https://www.example1.com$request_uri;
}

server {
    listen       80;
    listen       &#91;::]:80;
    server_name example2.com;
    return 302 https://www.example2.com$request_uri;
}</code></pre>



<p>The first listen statement is for IPv4, and the second is for IPv6. We redirect to the TLS site, preserving the path segment of the request URI.</p>



<h2 class="wp-block-heading">Proxy forwarding</h2>



<p>Next we are going to enter the stanza that handles the actual site content:</p>



<pre class="wp-block-code"><code>server {
    listen      443 ssl;
    listen      &#91;::]:443 ssl;
    server_name www.example1.com;

    ssl_certificate /etc/letsencrypt/live/example1.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example1.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;

    location / {
        proxy_pass http://127.0.0.1:8101/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

server {
    listen      443 ssl;
    listen      &#91;::]:443 ssl;
    server_name www.example2.com;

    ssl_certificate /etc/letsencrypt/live/example2.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example2.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;

    location / {
        proxy_pass http://127.0.0.1:8102/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}</code></pre>



<p>Again, we are listening for <code>443</code> (the TLS port) on both IPv4 and IPv6.</p>



<p>Notice how we only listen for requests to the <code>www.</code> subdomain here.</p>



<p>We use the TLS certificates first, then we specify the reverse proxy in the <code>location /</code> section.</p>



<p>We forward each site to the correct port that we exposed with docker (<code>8101</code> and <code>8102</code> in this case).</p>



<p>We also set some <code>X-</code> headers. This is so that the PHP server knows some details about the client.</p>
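<p>One caveat about <code>X-Forwarded-Proto</code>: since TLS terminates at the proxy, WordPress itself only ever sees plain HTTP, which on some setups causes redirect loops or mixed-content warnings. If you run into this, the usual fix is to teach <code>wp-config.php</code> to trust the header, by adding the following above the <code>/* That's all, stop editing! */</code> line:</p>



<pre class="wp-block-code"><code>if ( isset( $_SERVER&#91;'HTTP_X_FORWARDED_PROTO'] ) &amp;&amp; $_SERVER&#91;'HTTP_X_FORWARDED_PROTO'] === 'https' ) {
    $_SERVER&#91;'HTTPS'] = 'on';
}</code></pre>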



<h2 class="wp-block-heading">Redirects from all subdomains to www</h2>



<p>Finally, we want requests from <code>https://example1.com</code>, or from any other subdomain, such as <code>https://foo.example1.com</code>, to redirect to our <code>www.</code> subdomain:</p>



<pre class="wp-block-code"><code>server {
    listen 443 ssl;
    listen &#91;::]:443 ssl;
    server_name .example1.com;

    ssl_certificate /etc/letsencrypt/live/example1.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example1.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;

    location / {
        return 302 https://www.example1.com$request_uri;
    }
}

server {
    listen 443 ssl;
    listen &#91;::]:443 ssl;
    server_name .example2.com;

    ssl_certificate /etc/letsencrypt/live/example2.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example2.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;

    location / {
        return 302 https://www.example2.com$request_uri;
    }
}</code></pre>



<p>Here we listen for any subdomain. Note the dot (<code>.</code>) prefix in the <code>server_name</code>.</p>



<p>We again use the TLS certificates, but this time we perform a redirect to the <code>www.</code> subdomain.</p>



<h2 class="wp-block-heading">Administering our reverse proxy</h2>



<p>When we are ready to enable our reverse proxy, we will create a symlink to <code>sites-enabled</code>:</p>



<pre class="wp-block-code"><code>sudo ln -s /etc/nginx/sites-available/reverse-proxy.conf /etc/nginx/sites-enabled/reverse-proxy.conf</code></pre>



<p>We can test our syntax to see that it is correct with:</p>



<pre class="wp-block-code"><code>sudo nginx -t</code></pre>



<p>And finally we can restart the nginx server with:</p>



<pre class="wp-block-code"><code>sudo service nginx restart</code></pre>



<p>We can check the status of the server with:</p>



<pre class="wp-block-code"><code>sudo service nginx status</code></pre>
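<p>If the DNS records haven&#8217;t propagated yet, we don&#8217;t have to wait: <code>curl</code> can be told to resolve a name to a given address, which lets us exercise the full nginx + TLS + WordPress chain directly against the server (replace <code>server_ip_address</code> as before):</p>



<pre class="wp-block-code"><code>curl -I --resolve www.example1.com:443:server_ip_address https://www.example1.com/</code></pre>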



<p>If everything is working correctly, and if the DNS records have had time to propagate, then we can visit our sites and run the famous WordPress installation process:</p>



<ul class="wp-block-list">
<li>https://www.example1.com/</li>



<li>https://www.example2.com/</li>
</ul>



<h1 class="wp-block-heading">Emails</h1>



<p><strong>If only the above was enough.</strong> Sadly, our WordPress installations need a way to send emails, otherwise the webmaster experience is going to suck big time.</p>



<p>I say sadly, because setting up <code>sendmail</code> first on the host is relatively easy, but then setting up SMTP proxies in the WordPress containers is not something I am familiar with. Sorry guys, in the interest of keeping things simple, I&#8217;m going to cheat a little here. Here&#8217;s what I did:</p>



<ul class="wp-block-list">
<li>Install the free <a href="https://wpmailsmtp.com/" target="_blank" rel="noreferrer noopener">WP Mail SMTP</a> plugin on both sites.</li>



<li>Create an application-specific password in my google account.</li>



<li>In the WordPress admin screens, go to: <em>WP Mail SMTP</em> → <em>Mailer</em> → <em>Other SMTP</em>.</li>



<li>Enter the following settings:
<ul class="wp-block-list">
<li>SMTP Host: <code>smtp.gmail.com</code></li>



<li>Encryption: <code>SSL</code></li>



<li>SMTP Port: <code>465</code></li>



<li>Auto TLS: <code>ON</code></li>



<li>Authentication: <code>ON</code></li>



<li>SMTP Username: (my gmail address)</li>



<li>SMTP Password: (the application specific password that I just created).</li>
</ul>
</li>



<li>Hit <em>Save Settings</em>.</li>



<li>Go to <em>WP Mail SMTP</em> → <em>Tools</em> and send a test email.</li>
</ul>



<p>If everything works, then WordPress and its plugins can now send emails. But it will only be able to send email into spam folders, until we add a <a href="https://en.wikipedia.org/wiki/Sender_Policy_Framework" target="_blank" rel="noreferrer noopener">Sender Policy Framework (SPF)</a> record to our DNS entries:</p>



<figure class="wp-block-table"><table><thead><tr><th>Type</th><th>Hostname</th><th>Value</th><th>TTL</th></tr></thead><tbody><tr><td>TXT</td><td>example1.com</td><td>v=spf1 a mx ~all</td><td>1800</td></tr><tr><td>TXT</td><td>example2.com</td><td>v=spf1 a mx ~all</td><td>1800</td></tr></tbody></table><figcaption class="wp-element-caption"><em>Disclaimer: These DNS records are actually not related to sunscreen in any way.</em></figcaption></figure>



<p>The above TXT records tell recipients to treat emails coming from the servers pointed to by your domains&#8217; A or MX records as legitimate, and anything else as potentially suspicious. Again, use your favorite AI chatbot to construct an SPF record that matches your needs.</p>



<h1 class="wp-block-heading">Nothing more permanent than a 301 redirect</h1>



<p>If all works, it&#8217;s now time to turn the soft redirects into permanent (hard) redirects. Edit the reverse proxy config and change any <code>302</code> redirects to <code>301</code>. Any browsers visiting your site will cache these redirects for eternity.</p>



<p>It&#8217;s also now a good time to increase the Time-to-Live of all the DNS records to something like 4 hours, or <code>14400</code> seconds.</p>



<h1 class="wp-block-heading">Backups</h1>



<p>You would think that by now you&#8217;re finished, <strong>but you&#8217;d be wrong</strong>!</p>



<p>Any IT technician worth their <a href="https://en.wikipedia.org/wiki/Salt_(cryptography)" target="_blank" rel="noreferrer noopener">salt</a> knows that they must <a href="https://gist.github.com/nooges/817e5f4afa7be612863a7270222c36ff" target="_blank" rel="noreferrer noopener">backup, and backup often</a>.</p>



<p>First, turn off the server or droplet and take a full backup, snapshot, or whatever. Future you will thank you.</p>



<p>Then, let&#8217;s see how we can take automated daily backups. We can either pay the hosting provider every month to do this for us, or we can spend a few minutes to set up a few cron jobs. Let&#8217;s be cheap and do it manually.</p>



<p>I have a Raspberry Pi at home that is always on. It does various things like take backups, ping various services and email me if they are down, trigger wp-cron URLs, control crypto miners, run services I need such as my ticket system, and in general handle any other odd 24/7 task. You should also have one such low-power system. The great thing with the Raspberry Pi is that it&#8217;s easy to take out the MicroSD card and gzip an image of it onto a mechanical disk, so the backup mechanism itself is nicely backed up in its entirety. (<a href="https://knowyourmeme.com/memes/xzibit-yo-dawg" target="_blank" rel="noreferrer noopener">Yo dawg, heard you like backups…</a>)</p>



<p>We&#8217;ll now use our local always-on Linux system to take daily backups of our online filesystems and databases:</p>



<h2 class="wp-block-heading">Local <code>backups.sh</code> script</h2>



<p>First, let&#8217;s create a DB user that only has enough access to take backups from both databases, but no more:</p>



<p>Login to the MySQL consoles of each database and create a <code>wp_bu</code> user that will do backups:</p>



<pre class="wp-block-code"><code>CREATE USER 'wp_bu'@'localhost' IDENTIFIED BY 'SOMESTRONGPASSWORD';
GRANT SELECT, LOCK TABLES ON wp_db1.* TO 'wp_bu'@'localhost';

CREATE USER 'wp_bu'@'localhost' IDENTIFIED BY 'SOMESTRONGPASSWORD';
GRANT SELECT, LOCK TABLES ON wp_db2.* TO 'wp_bu'@'localhost';</code></pre>



<p>We only need SELECT, but since we want to call <code>mysqldump</code> with the <code>--single-transaction</code> argument, we&#8217;ll also need to grant the <code>LOCK TABLES</code> permission. No point in having an ACID database if we&#8217;re going to take backups of an inconsistent state now, is there?</p>



<p>We&#8217;ll now create a bash shell script that does our daily backups. Let&#8217;s place it in our local backup server and call it <code>backups.sh</code>:</p>



<pre class="wp-block-code"><code>#!/bin/bash

# ensure dirs exist
mkdir -p /path-to-backups/cache/wp{1,2}volume /path-to-backups/server

# download DBs to SQL files
ssh -t server "docker exec droplet-db1-1 nice -n 19 mysqldump -u wp_bu -pSOMESTRONGPASSWORD --no-tablespaces --single-transaction wp_db1 | nice -n 19 gzip -9 -f" &gt;/path-to-backups/server/wp_db1-`date --rfc-3339=date`.sql.gz
ssh -t server "docker exec droplet-db2-1 nice -n 19 mysqldump -u wp_bu -pSOMESTRONGPASSWORD --no-tablespaces --single-transaction wp_db2 | nice -n 19 gzip -9 -f" &gt;/path-to-backups/server/wp_db2-`date --rfc-3339=date`.sql.gz

# download wp-content files to backup cache
rsync -aq server:~/wp1fs/* /path-to-backups/cache/wp1volume
rsync -aq server:~/wp2fs/* /path-to-backups/cache/wp2volume

# Zip downloaded wp-content files
zip -r9q /path-to-backups/server/wp1-`date --rfc-3339=date`.zip /path-to-backups/cache/wp1volume -x "**/GeoLite2*" -x "**/GeoIPv6.dat"
zip -r9q /path-to-backups/server/wp2-`date --rfc-3339=date`.zip /path-to-backups/cache/wp2volume -x "**/GeoLite2*" -x "**/GeoIPv6.dat"

# prune old DB and FILE backups from local backups
cd /path-to-backups/server &amp;&amp; ls -1tr | head -n -30 | xargs -d '\n' rm -rf --</code></pre>



<p>Again, a lot goes on here. Let&#8217;s unpack:</p>



<ul class="wp-block-list">
<li>The script creates directories <code>server</code> and <code>cache</code> under <code>/path-to-backups</code>. Replace this path with something that points to the directory where you want to keep your backups.</li>



<li>We then <code>ssh</code> to the host and issue a <code>docker exec</code> command against each database container. Notice how we do not use the <code>-it</code> arguments to <code>docker exec</code>, since this is a headless command with no TTY attached. The command is a <code>mysqldump</code> that uses the credentials we just created to export each database in a single transaction. The SQL output is piped through <code>gzip</code> with maximum compression (<code>-9</code>), forced (<code>-f</code>) onto the standard output, and streamed back over the ssh connection. On our local backup server, we redirect this compressed stream into an <code>.sql.gz</code> file whose name starts with <code>wp_db1-</code> and includes the current date in <code>YYYY-MM-DD</code> notation. (RFC 3339 is my idea of a perfect date, btw.) The <code>--no-tablespaces</code> argument is needed in MySQL <code>8.0.21</code> and later; without it, <code>mysqldump</code> requires the global <code>PROCESS</code> privilege, which we have no reason to grant to a backup user. Finally, notice that we run everything under <code>nice -n 19</code> (the idle CPU priority), because we don&#8217;t want our backups to impact the performance of the web server.</li>



<li>We then use <code>rsync</code> with the quiet (<code>-q</code>) and archive (<code>-a</code>) flags to copy the files of our WordPress installations into our <code>cache/wp1volume</code> and <code>cache/wp2volume</code> directories. The advantage of using rsync is that only changes to these directories will be transferred.</li>



<li>We then create a zip file for each of these directories. We name the zip files with the prefixes <code>wp1-</code> and <code>wp2-</code>, again followed by our idea of a perfect date. Many WordPress plugins bundle a database mapping IPs to geographical locations. These files are large and can always be re-downloaded, so we exclude them from the archives (<code>-x</code> flag); this is optional.</li>



<li>Finally we list the backup files we created (both <code>.sql.gz</code> and <code>.zip</code>) and keep only the newest 30, deleting any older ones. Since each day produces four files (a database dump and a file archive for each of the two sites), this retains daily backups for the last week or so.</li>
</ul>
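


<p>If the <code>head -n -30</code> idiom looks cryptic, here is a throwaway demo of the retention step; the directory and file names are made up for illustration:</p>



<pre class="wp-block-code"><code># ls -1tr lists oldest-first, head -n -30 prints all but the last 30
# lines (i.e. everything except the 30 newest files), and xargs deletes
# those older files.
mkdir -p /tmp/prune-demo
cd /tmp/prune-demo
for i in $(seq 1 35); do
  f=$(printf 'wp_db1-%02d.sql.gz' "$i")
  touch -d "@$((1700000000 + i))" "$f"    # strictly increasing mtimes
done
ls -1tr | head -n -30 | xargs -d '\n' rm -rf --
ls -1 | wc -l    # 30 files remain</code></pre>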



<p>Make the script executable with</p>



<pre class="wp-block-code"><code>chmod +x backups.sh</code></pre>



<p>We run the script once, and we check the <code>.sql.gz</code> files using <code>zless</code> and the zip files with <code>unzip -l</code>.</p>
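<p>As a sketch, this is what that spot check looks like end to end, using stand-in file names rather than real backups:</p>



<pre class="wp-block-code"><code># Self-contained demo; the names stand in for the real
# wp_db1-YYYY-MM-DD.sql.gz and wp1-YYYY-MM-DD.zip backups.
mkdir -p /tmp/backup-check
cd /tmp/backup-check
printf 'CREATE TABLE wp_posts (ID INT);\n' > wp_db1-demo.sql
gzip -9f wp_db1-demo.sql              # produces wp_db1-demo.sql.gz
gzip -t wp_db1-demo.sql.gz            # exits 0 only if the archive is intact
zcat wp_db1-demo.sql.gz | head -n 1   # peek at the dump without extracting
echo 'demo' > index.php
zip -q wp1-demo.zip index.php
unzip -l wp1-demo.zip                 # list contents without extracting</code></pre>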



<p>Once we are certain that all data is backed up by the script, we add it to the crontab. Edit the crontab with <code>crontab -e</code> and add the line:</p>



<pre class="wp-block-code"><code>20 4 * * * /bin/bash /home/yourusername/backups.sh</code></pre>



<p>This will execute the backups every day at 4:20 in the morning.</p>



<h2 class="wp-block-heading">Checking the backups</h2>



<p>The server works and is fully backed up. You would think that you&#8217;re done by now. That&#8217;s where <strong>you&#8217;d be wrong again</strong>!</p>



<p>Having backups and not checking them regularly is worse than not having backups at all: you are lulled into a false sense of security. You may act carelessly, thinking that you can always go back to the last backup. Yet all backup mechanisms can fail, for any number of reasons.</p>



<p>What I do is set up a weekly reminder in my Google calendar to check the backups. It only takes half a minute per week to ssh into my backup server and do an <code>ls -l</code>, ensuring that the latest backups exist and that their file sizes are what I&#8217;d expect. I keep old backups for about a week, hence the weekly reminder.</p>
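


<p>If you&#8217;d rather automate part of this check, something along these lines could run from cron as well. The directory, the stand-in file, and the two-day threshold are all assumptions to adapt:</p>



<pre class="wp-block-code"><code># Minimal freshness-check sketch. A real deployment would point BACKUPS
# at /path-to-backups/server and would not create the stand-in file.
BACKUPS=${BACKUPS:-/tmp/freshness-demo}
mkdir -p "$BACKUPS"
touch "$BACKUPS/wp_db1-demo.sql.gz"   # stand-in for a real dump
# any .sql.gz modified within the last 2 days?
recent=$(find "$BACKUPS" -name '*.sql.gz' -mtime -2 | wc -l)
if [ "$recent" -eq 0 ]; then
  echo "WARNING: no recent database backup in $BACKUPS"
else
  echo "OK: $recent recent backup(s)"
fi</code></pre>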



<p>I also have another reminder, every three months, to back up the MicroSD of my Raspberry Pi backup server. Once every three months, I shut down the Pi, take out the MicroSD, put it into my work PC, and copy the entire card image into a file stored on my mechanical disk:</p>



<pre class="wp-block-code"><code>IMG=/mnt/bu/rpi-backup-`date --iso-8601=date`.img
sudo dd if=/dev/sdf of="$IMG" bs=4096 conv=sync,noerror status=progress
gzip -9 "$IMG"</code></pre>
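


<p>Before trusting such an image, it&#8217;s worth proving to yourself that the compress/restore round trip is lossless. Here is a sketch with a small stand-in file instead of a real card image:</p>



<pre class="wp-block-code"><code># Round-trip demo on a stand-in file; a real restore would stream onto
# the MicroSD device instead, along the lines of
#   gunzip -c /mnt/bu/rpi-backup-YYYY-MM-DD.img.gz | sudo dd of=/dev/sdf bs=4096
dd if=/dev/urandom of=/tmp/sd-demo.img bs=1024 count=64 status=none
gzip -9c /tmp/sd-demo.img > /tmp/sd-demo.img.gz
gunzip -c /tmp/sd-demo.img.gz > /tmp/sd-restored.img
cmp /tmp/sd-demo.img /tmp/sd-restored.img   # identical: the backup restores losslessly</code></pre>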



<p>Only with this process set up can I sleep at night.</p>



<h1 class="wp-block-heading">Are we finished yet?</h1>



<p>By now you would think that we&#8217;re not finished yet, and that there&#8217;s more things to do. <strong>That&#8217;s where you&#8217;d be wrong!</strong></p>



<p>And for anyone wondering, <code>example1.com</code> is actually <a href="https://www.dashed-slug.net" target="_blank" rel="noreferrer noopener">https://www.dashed-slug.net</a> and <code>example2.com</code> is actually this blog, <a href="https://www.alexgeorgiou.gr" target="_blank" rel="noreferrer noopener">https://www.alexgeorgiou.gr</a>. There&#8217;s also a plain nginx container in there that serves static HTML files at <a href="https://wallets-phpdoc.dashed-slug.net" target="_blank" rel="noreferrer noopener">https://wallets-phpdoc.dashed-slug.net</a> .</p>



<p>My config is actually a little bit more complex than the one discussed above. To save some more server memory, I had to put both databases into the same MySQL container, and set up two different DB users with access restricted to each respective database. But you shouldn&#8217;t do this at home, because isolation!</p>



<p>This article is being served by the containers I discussed here, and will be backed up early tomorrow morning, via the mechanism I shared with you above. Which is pretty meta, if you think about it!</p>



<p>I never expected to compose such a long, self-contained article on containers and <code>docker compose</code>. But now it&#8217;s finished and I can hardly contain my excitement!</p>



<p>Thanks for sticking with me to the end. Hope you enjoyed it.</p>
<p>The post <a href="https://www.alexgeorgiou.gr/two-dockerized-wordpress-sites/">📦 Two dockerized WordPress sites, with Let&#8217;s Encrypt, logging, SMTP relay, controlled by a systemd service, and daily backups</a> appeared first on <a href="https://www.alexgeorgiou.gr">Alexandros Georgiou</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.alexgeorgiou.gr/two-dockerized-wordpress-sites/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
