<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: uludag</title><link>https://news.ycombinator.com/user?id=uludag</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 08 Apr 2026 06:34:20 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=uludag" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by uludag in "AI has fixed my productivity"]]></title><description><![CDATA[
<p>I had a thought about this coming from the book "Seeing Like a State."<p>Productivity in large organizations has never been, and can never be, purely a matter of the legible work that is written up in Jira tickets, documented, and expressed clearly; it is sustained by an illegible network of relationships between workers and by unwritten knowledge and practices. AI can only consume work that is legible, but as more work gets pushed into that realm, the illegible relationships and expertise become fragmented and atrophy, which puts backpressure on the system's productivity as a whole. And having read said book, my guess is that attempting to impose perfect legibility for the sake of AI tooling will ultimately prove disastrous.</p>
]]></description><pubDate>Wed, 18 Feb 2026 15:12:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47061828</link><dc:creator>uludag</dc:creator><comments>https://news.ycombinator.com/item?id=47061828</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47061828</guid></item><item><title><![CDATA[New comment by uludag in "I asked Claude Code to remove jQuery. It failed miserably"]]></title><description><![CDATA[
<p>There could be a whole spectrum of repository types on which these tools excel or fail. I can imagine that a large, poorly documented repository, with confusing and inconsistent usages/patterns, in a dynamic language, with poor tests, will almost always lead to failure.<p>I honestly think that size and age alone are sufficient to push these tools into failure cases.</p>
]]></description><pubDate>Fri, 13 Feb 2026 14:07:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47002872</link><dc:creator>uludag</dc:creator><comments>https://news.ycombinator.com/item?id=47002872</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47002872</guid></item><item><title><![CDATA[New comment by uludag in "Eight more months of agents"]]></title><description><![CDATA[
<p>I try to avoid LLMs as much as I can in my role as a SWE. I'm not ideologically opposed to switching; I just don't have any pressing need.<p>There are people I work with who are deep in the AI ecosystem, and it's obvious what tools they're using. It would not be uncharitable in any way to characterize their work as pure slop: code that doesn't work, is buggy, isn't adequately tested, etc.<p>The moment I start to feel behind I'll gladly start adopting <i>agentic AI tools</i>, but as things stand now, I'm not seeing any pressing need.<p>Comments like these make me feel like I'm being gaslit.</p>
]]></description><pubDate>Tue, 10 Feb 2026 07:52:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=46956607</link><dc:creator>uludag</dc:creator><comments>https://news.ycombinator.com/item?id=46956607</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46956607</guid></item><item><title><![CDATA[New comment by uludag in "Eight more months of agents"]]></title><description><![CDATA[
<p>> I am having more fun programming than I ever have, because so many more of the programs I wish I could find the time to write actually exist. I wish I could share this joy with the people who are fearful about the changes agents are bringing.<p>It might be just me, but this reads as very tone-deaf. From my perspective, CEOs are foaming at the mouth to make as many developers redundant as possible, and they are not shy about this desire. (I don't see this as at all inevitable, but tech leaders have made their position clear.)<p>Imagine the smugness of some 18th-century "CEO" telling an artisan, despite the fact that he'll be consigned to working in horrific conditions at a factory, not to worry and to think of all the mass-produced consumer goods he may enjoy one day.<p>It's not at all a stretch of the imagination that current tech workers may be in a very precarious situation. All the slopware in the world wouldn't console them.</p>
]]></description><pubDate>Mon, 09 Feb 2026 20:08:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=46950432</link><dc:creator>uludag</dc:creator><comments>https://news.ycombinator.com/item?id=46950432</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46950432</guid></item><item><title><![CDATA[New comment by uludag in "Who's Coding on Their Phone?"]]></title><description><![CDATA[
<p>This logic seems reversed though. If someone is primarily vibe coding, why wouldn't a phone be just fine?<p>Either way, there are still completely legitimate reasons why one would want to code on their phone, with or without AI.</p>
]]></description><pubDate>Tue, 03 Feb 2026 17:38:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=46874181</link><dc:creator>uludag</dc:creator><comments>https://news.ycombinator.com/item?id=46874181</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46874181</guid></item><item><title><![CDATA[New comment by uludag in "Needy Programs"]]></title><description><![CDATA[
<p>I'm imagining it even worse: you have to pay a subscription to get your oven to go above a certain temperature and for it to "fast pre-heat" and to not have it show you ads.</p>
]]></description><pubDate>Fri, 14 Nov 2025 17:17:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=45929027</link><dc:creator>uludag</dc:creator><comments>https://news.ycombinator.com/item?id=45929027</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45929027</guid></item><item><title><![CDATA[New comment by uludag in "LLMs can get "brain rot""]]></title><description><![CDATA[
<p>Nothing wrong with using LLMs—until every paragraph sounds like it’s A/B tested for LinkedIn virality. That’s the rot setting in.<p>The problem isn’t using AI—it’s sounding like AI trying to impress a marketing department. That’s when you know the loop’s closed.</p>
]]></description><pubDate>Tue, 21 Oct 2025 17:47:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=45659011</link><dc:creator>uludag</dc:creator><comments>https://news.ycombinator.com/item?id=45659011</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45659011</guid></item><item><title><![CDATA[New comment by uludag in "AI model trapped in a Raspberry Pi"]]></title><description><![CDATA[
<p>I wonder what would happen if there were a concerted effort to "pollute" the internet with weird stories that have the AI play a <i>misaligned</i> role.<p>For example, what would happen if hundreds or thousands of books were released about AI agents working in accounting departments, where the AI makes subtle romantic moves toward the human and the story ends with the human and the agent in a romantic relationship that everyone finds completely normal? In this pseudo-genre, things totally weird in our society would be written about as completely normal. The LLM agent would do weird things like inserting subtle problems to get the human's attention and spark a romantic conversation.<p>Obviously there's no literary genre about LLM agents today, but if such a genre were created and consumed, I wonder how it would affect things. Would it pollute the semantic space that we're currently using to try to control LLM outputs?</p>
]]></description><pubDate>Sat, 27 Sep 2025 18:19:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=45398189</link><dc:creator>uludag</dc:creator><comments>https://news.ycombinator.com/item?id=45398189</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45398189</guid></item><item><title><![CDATA[New comment by uludag in "DeepMind and OpenAI win gold at ICPC"]]></title><description><![CDATA[
<p>... in opposition to the car makers who want to turn everything into highways and parking lots, who really want all forms of human walking to be replaced by automobiles.<p>"They <i>really</i> can't run like a human," they say. "A human can traverse a city in complete silence, needing minimal walking room. Left unchecked, the transition to cars would ruin our city. So let's be prudent when it comes to adopting this technology."<p>"I'll have none of that. Cars move faster than humans, so that means they're better. We should do everything in our power to transition to this obviously superior technology. I mean, a car beat a human at the 100m sprint, so bipedal mobility is obviously obsolete," the car maker replied.</p>
]]></description><pubDate>Thu, 18 Sep 2025 05:41:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=45285875</link><dc:creator>uludag</dc:creator><comments>https://news.ycombinator.com/item?id=45285875</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45285875</guid></item><item><title><![CDATA[New comment by uludag in "The demo scene is dying, but that's alright"]]></title><description><![CDATA[
<p>The most recent Stack Overflow survey has Vim at 25% and Neovim at 14% for the question "Which development environments and AI-enabled code editing tools did you use regularly over the past year, and which do you want to work with over the next year?" Even more interesting: in the 2023 survey, Vim and Neovim were at 22.3% and 11.8% respectively.<p>If the goal is to get above 50% usage then, yeah, you can say they lost. But are dev tools only valid/useful/viable if a majority of developers use them? I'd say they've had tremendous success in providing viable tools with literally zero corporate support and a much smaller user base.</p>
]]></description><pubDate>Mon, 08 Sep 2025 13:19:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=45167902</link><dc:creator>uludag</dc:creator><comments>https://news.ycombinator.com/item?id=45167902</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45167902</guid></item><item><title><![CDATA[New comment by uludag in "Google's new AI mode is good, actually"]]></title><description><![CDATA[
<p>I think this communal subconscious response is coming from a valid place though. I will call the current explosion in AI a net bad if:<p><pre><code>  - it causes mass unemployment and social unrest
  - it leads to a further concentration of wealth and an increase in wealth inequality
  - it means I have to work more and produce more, all for the same wage or less
  - its implementation leads to large societal harms such as increased isolation/loneliness
  - it ends up being overhyped and causes a large economic crisis
</code></pre>
These scenarios aren't fantasy, and a lot of them are being talked about. Technologies can just be a net bad. The critics aren't some reactionary, scared mob set against the enlightened. I think a lot of us have seen the playbook tech companies use, and our estimates of the probability that a given company will end up being just plain bad are a lot higher now.</p>
]]></description><pubDate>Sun, 07 Sep 2025 16:04:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=45159390</link><dc:creator>uludag</dc:creator><comments>https://news.ycombinator.com/item?id=45159390</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45159390</guid></item><item><title><![CDATA[New comment by uludag in "Why Everybody Is Losing Money On AI"]]></title><description><![CDATA[
<p>I think the main fear is that these products will become so enshittified <i>and</i> ingrained everywhere that, looking back, we'll wish we hadn't depended so much on the technology. For example, the Overton window around social media has shifted so much that it's now pretty normal to hear the view that social media is a net negative to society and that we'd be better off without it.<p>Obviously the goal of these companies is to generate as much profit as possible, as soon as possible. They will turn the tables eventually. The asymmetry will go in the opposite direction, maybe to the same extent that users take advantage of the current asymmetry.</p>
]]></description><pubDate>Fri, 05 Sep 2025 18:31:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=45141971</link><dc:creator>uludag</dc:creator><comments>https://news.ycombinator.com/item?id=45141971</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45141971</guid></item><item><title><![CDATA[New comment by uludag in "Survey: a third of senior developers say over half their code is AI-generated"]]></title><description><![CDATA[
<p>Naive question, but wouldn't it count as having AI write 50%+ of your code if you just use an unintelligent complete-the-line AI tool? In this case the AI is hardly doing anything intelligent, but it still gets credit for doing most of the work.</p>
]]></description><pubDate>Mon, 01 Sep 2025 03:09:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=45089082</link><dc:creator>uludag</dc:creator><comments>https://news.ycombinator.com/item?id=45089082</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45089082</guid></item><item><title><![CDATA[New comment by uludag in "Survey: a third of senior developers say over half their code is AI-generated"]]></title><description><![CDATA[
<p>I'm in the exact same boat. I started with a lot of different tools but eventually went back to hand-coding everything. When using tools like Copilot I noticed I would ship a lot more dumb mistakes. I even experimented with dropping the chat interface entirely, and it turns out that a lot of answers to problems can indeed be found with a web search.</p>
]]></description><pubDate>Mon, 01 Sep 2025 03:04:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=45089055</link><dc:creator>uludag</dc:creator><comments>https://news.ycombinator.com/item?id=45089055</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45089055</guid></item><item><title><![CDATA[New comment by uludag in "New research reveals longevity gains slowing, life expectancy of 100 unlikely"]]></title><description><![CDATA[
<p>Not only this, but I feel that if people in the UK were somehow able to travel back in time and encounter "their culture," they'd feel extremely alienated and maybe even a level of disdain. The daily prayers, Bible reading, strict Sabbatarianism, and religious festivals would seem completely alien. Without a doubt, the modern Muslim or Asian immigrant, especially after the first generation, is much closer to the average UK resident than that resident's traditional culture is.</p>
]]></description><pubDate>Sun, 31 Aug 2025 09:39:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=45081845</link><dc:creator>uludag</dc:creator><comments>https://news.ycombinator.com/item?id=45081845</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45081845</guid></item><item><title><![CDATA[New comment by uludag in "Are OpenAI and Anthropic losing money on inference?"]]></title><description><![CDATA[
<p>Another comment mentioned the cost associated with the model. Setting that aside, wouldn't we also need to include all of the systems around the inference? I can imagine significant infrastructure and engineering needs around all of these various services, along with the work needed to keep these systems up and running.<p>Or are these costs just insignificant compared to inference?</p>
]]></description><pubDate>Thu, 28 Aug 2025 13:44:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=45052108</link><dc:creator>uludag</dc:creator><comments>https://news.ycombinator.com/item?id=45052108</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45052108</guid></item><item><title><![CDATA[New comment by uludag in "I did 98,000 Anki reviews. Anki is already dead"]]></title><description><![CDATA[
<p>Alas, the moat around LLM integration is practically non-existent, so I'd think that productization around this would be next to impossible.<p>Anki is already extremely extensible, so I would think that deep LLM integration could be implemented in Anki without too much work. Like, instead of showing static content for a card, have Anki call an LLM to create a daily iteration of a given prompt.</p>
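<p>As a rough sketch of the idea (pure Python, with the LLM call stubbed out; none of these names come from Anki's actual add-on API, they're just illustrative):</p>

```python
import datetime
import hashlib

def call_llm(prompt: str) -> str:
    # Stub standing in for a real LLM API call.
    return f"[generated drill for: {prompt}]"

def render_card(prompt_template: str, today: datetime.date) -> str:
    # The card stores a prompt template instead of static text. A per-day
    # seed makes the same card produce a fresh variation each day while
    # staying deterministic within a single day.
    seed = hashlib.sha256(f"{prompt_template}:{today.isoformat()}".encode()).hexdigest()[:8]
    return call_llm(f"{prompt_template} (variation {seed})")

print(render_card("Quiz me on one irregular French verb", datetime.date(2025, 8, 21)))
```

<p>A real add-on would replace <code>call_llm</code> with an actual API call and hook the renderer into card display.</p>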
]]></description><pubDate>Thu, 21 Aug 2025 18:55:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=44976633</link><dc:creator>uludag</dc:creator><comments>https://news.ycombinator.com/item?id=44976633</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44976633</guid></item><item><title><![CDATA[New comment by uludag in "Emacs as your video-trimming tool"]]></title><description><![CDATA[
<p>As an Emacs user who tries to do as many things as possible in Emacs, I would say that the more you can do in Emacs, the more its various features compound with each other, giving you more utility.<p>For example, I use the Verb package for making HTTP requests. With Emacs as my HTTP client, I can do bulk HTTP request calls with keyboard macros. The requests can be stored in org-mode. I can write custom Elisp for special authentication scenarios. I can create new commands if I need them.<p>For this example, I can imagine (I haven't tried this myself) scenarios like creating a keyboard macro, usable from Dired, that shaves off the first X seconds of a video.<p>Some non-text-editing things in Emacs that are actually extremely useful:<p><pre><code>  - Git via Magit
  - Managing files with Dired
  - Media player with Emms
  - RSS feeds with elfeed
  and the list goes on and on...
</code></pre>
Using a well thought-out Emacs interface for anything is one of the biggest sources of joy in my technical life.</p>
]]></description><pubDate>Tue, 19 Aug 2025 19:27:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=44955335</link><dc:creator>uludag</dc:creator><comments>https://news.ycombinator.com/item?id=44955335</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44955335</guid></item><item><title><![CDATA[New comment by uludag in "Do things that don't scale, and then don't scale"]]></title><description><![CDATA[
<p>I totally get the point you're trying to make. I guess what I'm trying to say is that I think it's unfair/misleading that anything with a veneer of LLMs has all the credit driven to the LLM and not to the thing that provides the bulk of the value.<p>For example, you are clearly a developer with a vast amount of experience. To say that the extent and reach of what you are able to do with these technologies is because of LLMs seems wrong; it's your rich technical background that allows you to use LLMs effectively.</p>
]]></description><pubDate>Sun, 17 Aug 2025 08:11:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=44929843</link><dc:creator>uludag</dc:creator><comments>https://news.ycombinator.com/item?id=44929843</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44929843</guid></item><item><title><![CDATA[New comment by uludag in "Do things that don't scale, and then don't scale"]]></title><description><![CDATA[
<p>I 100% agree with everything in this article, though I'm confused about what AI has to do with any of it. People were doing this sort of thing long before LLMs arrived; weekend projects doing cool things were definitely a thing long before LLMs. I'd say that cloud services (e.g. Twilio) were the real enablers of these sorts of projects, so it seems wrong to credit LLMs with this type of work.<p>Cloud services take us from completely impossible to doable with a small amount of work. LLMs maybe save us the time of reading a tutorial or some documentation.</p>
]]></description><pubDate>Sun, 17 Aug 2025 04:11:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=44928794</link><dc:creator>uludag</dc:creator><comments>https://news.ycombinator.com/item?id=44928794</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44928794</guid></item></channel></rss>