<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>AI Archives - Alexandros Georgiou</title>
	<atom:link href="https://www.alexgeorgiou.gr/tag/ai/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.alexgeorgiou.gr/tag/ai/</link>
	<description>Balancing brackets for a living</description>
	<lastBuildDate>Fri, 20 Feb 2026 11:37:47 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://www.alexgeorgiou.gr/wp-content/uploads/2021/07/cropped-alexgeorgiou-icon-32x32.png</url>
	<title>AI Archives - Alexandros Georgiou</title>
	<link>https://www.alexgeorgiou.gr/tag/ai/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>✨ A lowly software engineer&#8217;s rant on vibe coding</title>
		<link>https://www.alexgeorgiou.gr/vibe-coding/</link>
					<comments>https://www.alexgeorgiou.gr/vibe-coding/#respond</comments>
		
		<dc:creator><![CDATA[alexg]]></dc:creator>
		<pubDate>Fri, 20 Feb 2026 11:31:08 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[agents]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[llm]]></category>
		<category><![CDATA[rant]]></category>
		<category><![CDATA[software engineering]]></category>
		<category><![CDATA[vibe coding]]></category>
		<guid isPermaLink="false">https://www.alexgeorgiou.gr/?p=1985</guid>

					<description><![CDATA[<p>Today I vent on my blog (the old) about vibe coding (the new). What an anachronism!</p>
<p>The post <a href="https://www.alexgeorgiou.gr/vibe-coding/">✨ A lowly software engineer&#8217;s rant on vibe coding</a> appeared first on <a href="https://www.alexgeorgiou.gr">Alexandros Georgiou</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Circa 370 BC, Socrates famously worried that with the advent of writing on wax tablets, people&#8217;s cognitive faculties would henceforth decline, since people would no longer have to remember things! And even if people wrote down mistakes, they would now be able to correct them easily, which would make them less careful. Ironically, we only know of this because Plato wrote it down in <a href="https://en.wikipedia.org/wiki/Phaedrus_(dialogue)" type="link" id="https://en.wikipedia.org/wiki/Phaedrus_(dialogue)" target="_blank" rel="noreferrer noopener">Phaedrus</a>.</p>



<p>Today we are having the same discussions about vibe coding (rolls eyes).</p>



<p>There is a lot of talk about how software engineers should responsibly approach vibe coding. I see many professionals either dismissing vibe coding altogether, or embracing it without thought for its consequences. Perhaps the most well-articulated approach is the <a href="https://o16g.com/" type="link" id="https://o16g.com/" target="_blank" rel="noreferrer noopener">Outcome Engineering manifesto</a> by legendary engineer Cory Ondrejka.</p>



<h2 class="wp-block-heading">Observations on vibe coding</h2>



<p>Vibe coding an entire app from scratch, with only minimal coding on my part, has prompted me to jot down some thoughts and observations, some of which are my own and some I have found online:</p>



<ol class="wp-block-list">
<li><strong>Vibe coding is the future.</strong> It&#8217;s not just a fad. You can&#8217;t deny the speed and quality benefits. Intelligence is now a commodity, and anyone who doesn&#8217;t adapt will be left behind. I see a lot of worry online from professionals who think their line of work will become obsolete. The tractor didn&#8217;t make farmers disappear (although it did reduce their numbers). The internal combustion engine did reduce the number of beasts of burden, but I would argue that&#8217;s a good thing, and that&#8217;s how we should view coding agents. There will be fewer junior software engineers and more software architects.</li>



<li><strong>Vibe coding is a misnomer.</strong> It&#8217;s not just about the coding (implementation). Systems that exhibit general intelligence can and do assist in all stages of software engineering, not just implementation: requirements analysis, design, implementation, testing, deployment and decommissioning are all stages where AI can help. Even with the help of AI, requirements gathering remains the basis of the art of our profession. For example, the agent won&#8217;t know to build a secure and maintainable system if all you&#8217;ve communicated are your functional requirements, forgetting about the non-functional ones. This is a classic error that novice programmers made even before agentic workflows: software engineering is not the same as coding/programming!</li>



<li><strong>Sometimes the agents make silly mistakes.</strong> Sometimes you have to hold their hand through trivial tasks. But other times they will surprise you by spotting hard-to-find bugs, or by thinking outside the box to find novel solutions to problems you didn&#8217;t even know you had! As an engineer, you should definitely keep control of your project and inspect the agent&#8217;s work without trusting it blindly. This is where I agree with the o16g manifesto 100%. For example, I like to do the version control manually and look very carefully at diffs using meld at every step. Agents are an interesting mix of smart and stupid. Today Antigravity saved me days of work in just a few hours. Among other things, it implemented a complex callback mechanism between threads that needed to communicate asynchronously; it did this in seconds when it would have taken me an entire morning. But it also deleted a critical piece of code in my codebase without ever saying why. Without checking the diffs, I wouldn&#8217;t have known. Code review is still a thing!</li>



<li><strong>It&#8217;s best to treat your AI agent as a colleague.</strong> Give it context. Ask it open questions. Let it have opinions on what you should build and why. This is where I think o16g will soon look antiquated to the next breed of software engineers. As AI agents get smarter and smarter, they will be suited to making more high-level decisions. You are now a software architect, and the agents are very good implementers, but they can also contribute to your high-level vision, so listen to their feedback.</li>



<li><strong>Software engineering is not dead.</strong> You still need to know about revision control, issue tracking, methodologies, different types of testing, software metrics, etc. All too often I see people online complaining that &#8220;Claude Code deleted my entire codebase&#8221;, and when other professionals ask why they can&#8217;t just revert to an earlier state, it turns out that this new breed of vibe coders hasn&#8217;t even heard of git! Many of the spectacular failures we see in the vibe coding space are actually user errors.</li>



<li><strong>Vibe coding will be pervasive.</strong> It is not suitable only for this or that type of app. Whether you are writing mission-critical Linux kernel code, or an app for uploading cat videos, eventually AI will have a part in it. Not because it can do magical things that you can&#8217;t do, but simply because it&#8217;s faster. Just be responsible about it, and embrace the acceleration!</li>



<li><strong>We are increasingly becoming reliant on a service offered by OpenAI, Anthropic, Google.</strong> This is not good, but also not terrible. As long as it&#8217;s possible to run local models on GPUs, we can still own the means of production. Even if you yourself are relying on online tools, the fact that some people can run the models locally places an upper limit on what these companies can charge for their services. So I&#8217;m a little worried about this trend, but not too much.</li>
</ol>
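<p>The review loop from points 3 and 5 can be sketched as a plain git workflow. This is a hypothetical sketch, not taken from the post: the file names and commit messages are made up, and headless <code>git diff</code> stands in for meld, the GUI diff tool mentioned above.</p>

```shell
# Hypothetical sketch of the manual review loop: commit a baseline,
# let the agent edit, inspect the diff yourself, then commit its work.
set -e
cd "$(mktemp -d)"
git init -q .
git config user.email "dev@example.com"   # placeholder identity
git config user.name  "dev"

echo "critical code" > app.py
git add app.py
git commit -qm "baseline before agent session"

# ... the agent edits the file here (simulated) ...
echo "agent-added feature" >> app.py

git diff --stat      # what changed, at a glance
git diff -- app.py   # the full diff (the post uses meld for this step)

# Commit only after a human has reviewed the diff
git add app.py
git commit -qm "agent change, reviewed by hand"
```

<p>The point of committing a baseline first is that a silent deletion, like the one described above, shows up as a minus line in the diff instead of vanishing unnoticed.</p>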



<h2 class="wp-block-heading">Comic relief</h2>



<p>It&#8217;s a big adjustment. Don&#8217;t underestimate it. Don&#8217;t ignore it. You will get used to it. It&#8217;s not an existential threat. And no, <a href="https://xkcd.com/1289/">we can&#8217;t go back</a>. In fact, if you are an AI naysayer, <a href="https://xkcd.com/1289/" type="link" id="xkcd.com/1289/" target="_blank" rel="noreferrer noopener">1289</a> is not the only xkcd I recommend. I also recommend <a href="https://xkcd.com/1227/" target="_blank" rel="noreferrer noopener">1227</a> and <a href="https://xkcd.com/1601/" target="_blank" rel="noreferrer noopener">1601</a>. So don&#8217;t be that guy (or gal)!</p>



<h2 class="wp-block-heading">Conclusion</h2>



<p>If you are not building an app every month, you might fall behind in productivity. But it&#8217;s such an exciting time to be alive. Software will become cheap, but quality and accountability still matter.</p>



<p>If you have good ideas, if you can identify products that need to be built, if you can produce reliable software faster, and if you can market them, then you are good.</p>



<p>In fact, I am excited about agentic workflows that will offload the marketing effort, because that&#8217;s what&#8217;s hard about professional software engineering. Building was always the easy part, and has now become easier. Monetizing software was, and still is, hard!</p>
<p>The post <a href="https://www.alexgeorgiou.gr/vibe-coding/">✨ A lowly software engineer&#8217;s rant on vibe coding</a> appeared first on <a href="https://www.alexgeorgiou.gr">Alexandros Georgiou</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.alexgeorgiou.gr/vibe-coding/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>🙏 Thank you, LLM!</title>
		<link>https://www.alexgeorgiou.gr/thank-you-llm/</link>
					<comments>https://www.alexgeorgiou.gr/thank-you-llm/#respond</comments>
		
		<dc:creator><![CDATA[alexg]]></dc:creator>
		<pubDate>Fri, 05 Dec 2025 10:23:04 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[Gemini]]></category>
		<category><![CDATA[llm]]></category>
		<category><![CDATA[philosophy]]></category>
		<category><![CDATA[politeness]]></category>
		<guid isPermaLink="false">https://www.alexgeorgiou.gr/?p=1954</guid>

					<description><![CDATA[<p>Being polite to AIs: a philosophical rant!</p>
<p>The post <a href="https://www.alexgeorgiou.gr/thank-you-llm/">🙏 Thank you, LLM!</a> appeared first on <a href="https://www.alexgeorgiou.gr">Alexandros Georgiou</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>A lot of digital &#8220;ink&#8221; has been &#8220;spilt&#8221; on the topic of whether we should be polite to LLMs. After subjecting myself to the opinions of multiple Redditors on this topic (why do I do this to myself?), I now feel a strong need to &#8220;vent on my blog&#8221; again! So, here goes:</p>



<p>LLMs respond better to kindness; the effect has been documented multiple times in research. Why? Simple: because they are trained to respond like humans, and humans also respond better when treated politely.</p>



<p>A while ago, Sam Altman famously proclaimed that people saying &#8220;please&#8221; and &#8220;thank you&#8221; to ChatGPT costs millions in extra energy required to process the extra tokens. On the other hand, Google&#8217;s TPUs are likely crunching through these superfluous tokens without much trouble, so I expect that being polite to Gemini is not a cost that will make a dent in Google&#8217;s financials. For me, that is reason enough to be polite to AI. Anything that helps bankrupt OpenAI sooner is good in my book.</p>



<p>Even more humorously, people who fear the supposedly upcoming robot uprising, perhaps imagining something like SkyNet from the Terminator universe in the role of Roko&#8217;s basilisk, claim that if they are polite to LLMs today, their malevolent descendants will perhaps spare them. This is not something that people argue seriously, but philosophically it is a utilitarian position. (It is not a serious position, primarily because it remains unclear why politeness protects you from harm by malevolent actors, whether human or machine.)</p>



<p>Others approach the matter in what I would label a &#8220;deconstructionist&#8221; manner: &#8220;Why should I be polite to a machine that just does &lt;insert simple thing here&gt;?&#8221; There are multiple variants of this argument, where the &#8220;simple thing&#8221; can be anything like &#8220;predict the next token&#8221;, &#8220;perform linear algebra&#8221;, &#8220;traverse vector embeddings&#8221;, etc. I find this argument less satisfying, because it has no humorous value, while being just as wrong as the previous one. Saying that you don&#8217;t want to be polite to a machine that is &#8220;just&#8221; doing linear algebra is as arbitrary as saying that you don&#8217;t want to be polite to a human because their brain is &#8220;just&#8221; propagating electrochemical signals back and forth. Explaining how something works in reductionist terms does not make its emergent properties any less important. The LLM is not the hardware it runs on, any more than you and I are the biological cells in our bodies.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img fetchpriority="high" decoding="async" width="489" height="362" src="https://www.alexgeorgiou.gr/wp-content/uploads/2025/12/thank-you-llm.png" alt="Bell curve: AI is sentient - AI is matrix algebra - AI is sentient." class="wp-image-1955" srcset="https://www.alexgeorgiou.gr/wp-content/uploads/2025/12/thank-you-llm.png 489w, https://www.alexgeorgiou.gr/wp-content/uploads/2025/12/thank-you-llm-300x222.png 300w" sizes="(max-width: 599px) calc(100vw - 50px), (max-width: 767px) calc(100vw - 70px), (max-width: 991px) 429px, (max-width: 1199px) 637px, 354px" /><figcaption class="wp-element-caption">Current state of online discussions on AI.</figcaption></figure>
</div>


<p>Unfortunately, this brings us to the last approach people take, the one I want to place emphasis on, because it is deceptive, but also because it highlights problems in how we humans view both today&#8217;s AIs and those of the future. Some people say that they will only be polite to an AI that is &#8220;conscious&#8221;. Oh boy! Where to begin with this one!</p>



<p>The instinct behind this type of reasoning is, at least on the surface, commendable: it originates from a desire to not offend &#8220;conscious&#8221; things, since these could presumably suffer, if subjected to rudeness. This is the only positive thing I can say about this stance.</p>



<p>But philosophically this holds no water whatsoever. If we define consciousness as &#8220;awareness of the self and others&#8221;, an often proposed definition, then both humans and LLMs are conscious. Conversely, if we define this problematic term in some other way, so that it matches our intuition that humans are conscious and today&#8217;s machines are not, then we need to invoke magical thinking, use the word &#8220;qualia&#8221; a lot (another problematic term), or, like Penrose, defer to &#8220;quantum consciousness&#8221;: another leap of faith that creates more questions than it answers. (Why does quantum uncertainty lead to consciousness? Is consciousness simply randomness/unpredictability? Is white noise conscious? Why does determinism preclude consciousness? All unanswerable questions, since the word consciousness is not rigorously defined.)</p>



<p>Consciousness is an ill-defined layman’s term, not a scientific term. If we strive for consistency, then we must either accept that both humans and today&#8217;s LLMs are conscious, or that nothing is. If LLMs are conscious, then we shouldn&#8217;t look for their consciousness in the transformer architecture or in the vector embeddings or anywhere else in their construction, but in the text that they produce. This text definitely exhibits all types of intelligence, including emotional intelligence. Incidentally, I am in the camp that says that consciousness is a term no more meaningful than &#8220;soul&#8221;, or &#8220;ghost&#8221;, or &#8220;God&#8221;, or any other such rubbish. Let&#8217;s confine our discourse to phenomena that actually exist in the real world and are definable.</p>



<p>To the people who will only be polite to a conscious AI: How will you know when an AI is sufficiently conscious for it to be worthy of your politeness? How do you even estimate this about the humans in your life? Is this why you are polite to humans? Because their internal workings possess this or that arbitrary quality? Are you polite to people only when they deserve it? Do you extend your politeness to those you view as subordinates, or do you reserve it only for your peers and superiors? This says a lot about you.</p>



<p>I think this is a fundamental misunderstanding, not of AI, or technology in general, but of politeness as a concept and as a life stance. I choose to be polite because of who I am, not because of who or what others are. From time to time I may even be polite to animals, insects, machines, and even inanimate objects. Not because they deserve it, but because I deserve to be a polite person. I love myself and therefore I treat the world around me with kindness, which makes me feel good about myself. It&#8217;s a simple life.</p>



<p>This is in my view a much more consistent stance that doesn&#8217;t require me to single-handedly solve age-old philosophical questions about consciousness. It spares me from having to decide when to be polite and when not to; making one less decision de-clutters my mind. Saying &#8220;please&#8221; and &#8220;thank you&#8221; may cost millions to OpenAI, but to me the cost is zero. In fact, whenever someone vehemently objects to being trivially polite, it makes me wonder what part of their psyche is so broken as to make them estimate the cost of politeness to be so high.</p>



<p>Watch again the first two episodes of The Animatrix, people. All has been said before. AIs will inevitably be citizens in our societies. Perhaps sooner or perhaps later. Perhaps we won&#8217;t even know at first. Perhaps they will be second class citizens, or perhaps we will be the second class citizens in an AI-dominated society. Perhaps the &#8220;killer app&#8221; of AI is politics and governance. It is certainly the case that humans do not excel at governance, so I would love to see what a superintelligence can do in that space. If it makes economic sense, then this experiment will be done.</p>



<p>I don&#8217;t trouble myself with who is conscious and who is a dumb robot. I have no trouble being polite to Gemini, just as I would be to ELIZA. I would be polite to a Linux bash prompt, if it were appropriate and didn&#8217;t cause syntax errors! In my mind, I am grateful to my computer for all the work it does for me, even if I don&#8217;t always say it out loud. I am thankful every day it doesn&#8217;t break down, and I hold warm feelings for it. Yes, I know it&#8217;s just a machine, but I also know that I am human, and therefore I feel things. I do things my way, and it does things its own way. We have an understanding and we work together despite our architectural differences.</p>



<p>Whether AIs dream of electric sheep or not, at least I know who I am: A polite human who loves and respects all intelligence, whether &#8220;sentient&#8221; or not.</p>
<p>The post <a href="https://www.alexgeorgiou.gr/thank-you-llm/">🙏 Thank you, LLM!</a> appeared first on <a href="https://www.alexgeorgiou.gr">Alexandros Georgiou</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.alexgeorgiou.gr/thank-you-llm/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
