<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: tmnvdb</title><link>https://news.ycombinator.com/user?id=tmnvdb</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 10:08:13 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=tmnvdb" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by tmnvdb in "Delete FROM users WHERE location = 'Iran';"]]></title><description><![CDATA[
<p>Citation needed. As far as I know, this is simply false. Different sanctions have different goals; regime change is very rarely one of them. Often the goal is to reduce economic growth to keep or make the country weak, or to achieve something else entirely. See for example the sanctions on India, which are definitely not meant to overthrow the Indian government.</p>
]]></description><pubDate>Tue, 23 Sep 2025 06:05:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=45343300</link><dc:creator>tmnvdb</dc:creator><comments>https://news.ycombinator.com/item?id=45343300</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45343300</guid></item><item><title><![CDATA[New comment by tmnvdb in "You don't want to hire "the best engineers""]]></title><description><![CDATA[
<p>Did you read the article?</p>
]]></description><pubDate>Tue, 02 Sep 2025 15:33:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=45104434</link><dc:creator>tmnvdb</dc:creator><comments>https://news.ycombinator.com/item?id=45104434</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45104434</guid></item><item><title><![CDATA[New comment by tmnvdb in "LLMs aren't world models"]]></title><description><![CDATA[
<p>If you train an LLM on chess games, it will learn that too. You don't need to explain the rules: just feed it games, and at some point it stops making illegal moves. It is a clear example of a world model inferred from training.<p><a href="https://arxiv.org/abs/2501.17186" rel="nofollow">https://arxiv.org/abs/2501.17186</a><p>PS: "Major commercial American LLM" is not very meaningful; you could be describing GPT-4o.</p>
]]></description><pubDate>Wed, 13 Aug 2025 06:39:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=44885286</link><dc:creator>tmnvdb</dc:creator><comments>https://news.ycombinator.com/item?id=44885286</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44885286</guid></item><item><title><![CDATA[New comment by tmnvdb in "GPT-5: "How many times does the letter b appear in blueberry?""]]></title><description><![CDATA[
<p>You have again answered with your customary condescension. Is that really necessary? Everything you write is dripping with patronizing superiority and combative sarcasm.<p>> "classical reasoning uses consciousness and awareness as elements of processing"<p>Then they are not the _same_ concept.<p>> It's only meaningless if you don't know what the philosophical or epistemological definitions of reasoning are. Which is to say, you don't know what reasoning is. So you'd think it was a meaningless statement.<p>The problem is that the only information we have is internal. We may claim those things exist in us, but we have no way to establish whether they are happening in another person, let alone in a computer.<p>> Do computers think, or do they compute?<p>Do humans think? How do you tell?</p>
]]></description><pubDate>Sat, 09 Aug 2025 23:31:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=44851341</link><dc:creator>tmnvdb</dc:creator><comments>https://news.ycombinator.com/item?id=44851341</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44851341</guid></item><item><title><![CDATA[New comment by tmnvdb in "GPT-5: "How many times does the letter b appear in blueberry?""]]></title><description><![CDATA[
<p>> This is not a demonstration of a trick question.<p>It's a question that purposefully exploits a limitation of the system. There are many such questions for humans. They are called trick questions. It is not that crazy to call this one too.<p>> This is a demonstration of a system that delusionally refuses to accept correction and correct its misunderstanding (which is a thing that is fundamental to their claim of intelligence through reasoning).<p>First, the word 'delusional' is strange here unless you believe we are talking about a sentient system. Second, you are just plain wrong: LLMs are not "unable to accept correction" at all; in fact they often accept incorrect corrections (sycophancy). In this case the model is simply unable to perceive the correction (because of the nature of the tokenizer), so it is in a sense 'correct' behaviour for it to insist on its incorrect answer.<p>> Why would anyone believe these things can reason, that they are heading towards AGI, when halfway through a dialogue where you're trying to tell it that it is wrong it doubles down with a dementia-addled explanation about the two bs giving the word that extra bounce?<p>People believe the models can reason because they produce output consistent with reasoning. (That is not to say they are flawless, or that we have AGI in our hands.) If you don't agree, provide a definition of reasoning that the model does not meet.<p>> Why would you offer up an easy out for them like this? You're not the PR guy for the firm swimming in money paying million dollar bonuses off what increasingly looks, at a fundamental level, like castles in the sand. Why do the labour?<p>This, like many of your other messages, is rather obnoxious, dripping with performative indignation while adding little in the way of substance.</p>
]]></description><pubDate>Sat, 09 Aug 2025 23:00:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=44851164</link><dc:creator>tmnvdb</dc:creator><comments>https://news.ycombinator.com/item?id=44851164</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44851164</guid></item><item><title><![CDATA[New comment by tmnvdb in "GPT-5: "How many times does the letter b appear in blueberry?""]]></title><description><![CDATA[
<p>> No, it's the entire architecture of the model.<p>Wrong, it's an artifact of tokenization. The model doesn't have access to the individual letters, only to the tokens. Reasoning models can usually do this task well, since they can spell the word out in the reasoning buffer; the fact that GPT-5 fails here is likely a result of it routing the question to a non-reasoning version of the model.<p>> There's no real reasoning.<p>This seems like a meaningless statement unless you give a clear definition of "real" reasoning as opposed to other kinds of reasoning that are only apparent.<p>> It seems that reasoning is just a feedback loop on top of existing autocompletion.<p>The word "just" is doing a lot of work here; what exactly is your criticism? The bitter lesson of the past years is that relatively simple architectures that scale with compute work surprisingly well.<p>> It's really disingenuous for the industry to call warming tokens for output, "reasoning," as if some autocomplete before more autocomplete is all we needed to solve the issue of consciousness.<p>Reasoning and consciousness are separate concepts. If I had shown the output of an LLM 'reasoning' (call it something else if you like) to somebody 10 years ago, they would have agreed without any doubt that reasoning was taking place. You are free to provide a definition of reasoning which an LLM does not meet, but it is not enough to just say it is so. Using the word autocomplete is rather meaningless name-calling.<p>> Edit: Letter frequency apparently has just become another scripted output, like doing arithmetic. LLMs don't have the ability to do this sort of work inherently, so they're trained to offload the task.<p>Not sure why this is bad. The implicit assumption seems to be that an LLM is only valuable if it literally does everything perfectly?<p>> Edit: This comment appears to be wildly upvoted and downvoted. If you have anything to add besides reactionary voting, please contribute to the discussion.<p>Probably because of the wild assertions, charged language, and rather superficial descriptions of the actual mechanics.</p>
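<p>The tokenization point can be made concrete with a toy sketch. This uses a made-up vocabulary and a simple greedy longest-match tokenizer, not any real model's tokenizer, but the mechanism is the same: once a word is merged into multi-character tokens, the model only sees token IDs, which carry no direct record of individual letters.</p>

```python
# Toy greedy longest-match tokenizer over a hypothetical vocabulary.
# Illustrates why a model that sees only token IDs cannot directly
# count letters: 'b' is invisible inside the IDs for 'blue' and 'berry'.
VOCAB = {"blue": 101, "berry": 102,
         "b": 1, "l": 2, "u": 3, "e": 4, "r": 5, "y": 6}

def tokenize(word):
    tokens, i = [], 0
    while i < len(word):
        # Take the longest vocabulary entry matching at position i.
        for j in range(len(word), i, -1):
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            # Unknown character: fall back to the raw character.
            tokens.append(word[i])
            i += 1
    return tokens

print(tokenize("blueberry"))                     # ['blue', 'berry']
print([VOCAB[t] for t in tokenize("blueberry")])  # [101, 102]
# The letter count only becomes visible after mapping tokens back to text:
print(sum(t.count("b") for t in tokenize("blueberry")))  # 2
```

A reasoning model can work around this by spelling the word out letter by letter in its reasoning trace, which is effectively the detokenization step in the last line.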
]]></description><pubDate>Sat, 09 Aug 2025 22:43:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=44851041</link><dc:creator>tmnvdb</dc:creator><comments>https://news.ycombinator.com/item?id=44851041</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44851041</guid></item><item><title><![CDATA[New comment by tmnvdb in "Yet Another LLM Rant"]]></title><description><![CDATA[
<p>You use a lot of anthropomorphisms: it doesn't "know" anything (does your hard drive know things? Is that relevant?), and "making things up" is even more strongly linked to conscious intent. Unless you believe LLMs are sentient, this is a strange choice of words.</p>
]]></description><pubDate>Sat, 09 Aug 2025 16:15:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=44847727</link><dc:creator>tmnvdb</dc:creator><comments>https://news.ycombinator.com/item?id=44847727</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44847727</guid></item><item><title><![CDATA[New comment by tmnvdb in "Yet Another LLM Rant"]]></title><description><![CDATA[
<p>So, similar to Wikipedia.</p>
]]></description><pubDate>Sat, 09 Aug 2025 15:30:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=44847289</link><dc:creator>tmnvdb</dc:creator><comments>https://news.ycombinator.com/item?id=44847289</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44847289</guid></item><item><title><![CDATA[New comment by tmnvdb in "Why Understanding Software Cycle Time Is Messy, Not Magic"]]></title><description><![CDATA[
<p>PPDF is a great book but hard to apply. I recommend looking at some of the Kanban literature; a classic in this space is Actionable Agile Metrics for Predictability.</p>
]]></description><pubDate>Mon, 09 Jun 2025 22:35:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=44230312</link><dc:creator>tmnvdb</dc:creator><comments>https://news.ycombinator.com/item?id=44230312</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44230312</guid></item><item><title><![CDATA[New comment by tmnvdb in "Why Understanding Software Cycle Time Is Messy, Not Magic"]]></title><description><![CDATA[
<p>It is precisely to reduce cycle time that we control queue size. It's also not entirely true that cycle time is purely a lagging indicator: every day an item ages in your queue, you know its cycle time has increased by one day. Hence the advice to track item age to control cycle time.</p>
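<p>The item-age idea is trivial to operationalize. A minimal sketch (ticket names and dates are made up for illustration): the age of each in-progress item is a lower bound on its eventual cycle time, so flagging old items lets you act before the cycle time is "booked".</p>

```python
from datetime import date

# Start dates of items currently in progress (illustrative data).
in_progress = {
    "TICKET-1": date(2025, 6, 2),
    "TICKET-2": date(2025, 6, 6),
}

def item_ages(items, today):
    """Age in days of each in-progress item: a leading indicator,
    since every extra day of age adds a day to the final cycle time."""
    return {ticket: (today - started).days for ticket, started in items.items()}

for ticket, age in item_ages(in_progress, date(2025, 6, 9)).items():
    print(f"{ticket}: {age} days old")
```

In practice you would compare each age against the team's historical cycle-time percentiles and swarm on anything that has already exceeded, say, the 85th percentile.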
]]></description><pubDate>Mon, 09 Jun 2025 22:23:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=44230196</link><dc:creator>tmnvdb</dc:creator><comments>https://news.ycombinator.com/item?id=44230196</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44230196</guid></item><item><title><![CDATA[New comment by tmnvdb in "Why Understanding Software Cycle Time Is Messy, Not Magic"]]></title><description><![CDATA[
<p>If it's really important to have an accurate estimate for a large work package, you are in trouble: there is no such thing.</p>
]]></description><pubDate>Mon, 09 Jun 2025 22:13:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=44230111</link><dc:creator>tmnvdb</dc:creator><comments>https://news.ycombinator.com/item?id=44230111</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44230111</guid></item><item><title><![CDATA[New comment by tmnvdb in "Why Understanding Software Cycle Time Is Messy, Not Magic"]]></title><description><![CDATA[
<p>It's not just cruel, it's stupid. Not only is cycle time a very poor measure of individual productivity, using it to measure individuals creates bad incentives that will make your team perform significantly worse!</p>
]]></description><pubDate>Mon, 09 Jun 2025 22:07:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=44230061</link><dc:creator>tmnvdb</dc:creator><comments>https://news.ycombinator.com/item?id=44230061</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44230061</guid></item><item><title><![CDATA[New comment by tmnvdb in "Why Understanding Software Cycle Time Is Messy, Not Magic"]]></title><description><![CDATA[
<p>You've misunderstood: cycle time is neither a forecast nor a measure of individual productivity.<p>Cycle time measures how long it takes for  a unit of work (usually a ticket) to move from initiation to completion within a team's workflow. It is a property of the team / process, not individuals. It can be used to generate statistical forecasts for when a number of tasks are likely to be completed by the <i>team process</i>.<p>For most teams, actual programming or development tasks usually represent only a small portion—often less than 20%—of the total cycle time. The bulk of cycle time typically results from process inefficiencies like waiting periods, bottlenecks, handoffs between team members, external dependencies (such as waiting for stakeholder approval or code review), and other friction points within the workflow. Because of this, many Kanban-based forecasting methods don't even attempt to estimate technical complexity. They focus instead on historical cycle time data.<p>For example, consider a development task estimated to take a developer only two days of actual programming. If the developer has to wait on code reviews, deal with shifting priorities, or coordinate with external teams, the total cycle time from task initiation to completion might end up taking two weeks. Here, focusing on the individual’s performance misses the bigger issue: the structural inefficiencies embedded within the workflow itself.<p>Even if tasks were perfectly and uniformly distributed across all developers—a scenario both unlikely and probably undesirable—this fact would remain. The purpose of measuring cycle time is to identify and address overall process problems, not to evaluate individual contributions.<p>If you're using cycle time as an individual performance metric, you're missing the fundamental point of what cycle time actually measures.</p>
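<p>The statistical-forecasting point can be sketched with a standard Kanban-style Monte Carlo simulation (the historical data is made up, and the model resamples items serially, ignoring parallel work in progress; real tools work from throughput and WIP):</p>

```python
import random

# Historical cycle times in days per completed item (illustrative data).
historical = [2, 3, 5, 8, 4, 13, 6, 3, 7, 9, 4, 5]

def forecast_days(n_items, trials=10_000, seed=42):
    """Monte Carlo forecast: resample past cycle times to simulate
    completing n_items many times over, then read off the 50th and
    85th percentile total duration in days. Note this never estimates
    technical complexity; it relies purely on the historical record."""
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.choice(historical) for _ in range(n_items))
        for _ in range(trials)
    )
    return totals[int(0.50 * trials)], totals[int(0.85 * trials)]

p50, p85 = forecast_days(10)
print(f"10 items: ~{p50} days (50% confidence), ~{p85} days (85% confidence)")
```

Because the samples are whole historical cycle times, all the waiting, handoffs, and review delays are baked into the forecast automatically, which is exactly why the individual developer's two days of coding barely move the number.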
]]></description><pubDate>Mon, 09 Jun 2025 21:47:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=44229881</link><dc:creator>tmnvdb</dc:creator><comments>https://news.ycombinator.com/item?id=44229881</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44229881</guid></item><item><title><![CDATA[New comment by tmnvdb in "Why Understanding Software Cycle Time Is Messy, Not Magic"]]></title><description><![CDATA[
<p>I've never encountered cycle time recommended as a metric for evaluating individual developer productivity, making the central premise of this article rather misguided.<p>The primary value of measuring cycle time is precisely that it captures end-to-end process inefficiencies, variability, and bottlenecks, rather than individual effort. This systemic perspective is fundamental in Kanban methodology, where cycle time and its variance are commonly used to forecast delivery timelines.</p>
]]></description><pubDate>Sun, 08 Jun 2025 03:42:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=44214405</link><dc:creator>tmnvdb</dc:creator><comments>https://news.ycombinator.com/item?id=44214405</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44214405</guid></item><item><title><![CDATA[New comment by tmnvdb in "College Towns: Urbanism from a Past Era"]]></title><description><![CDATA[
<p>I understand from your question that you find this hard to believe. I assure you it really is possible. People who have money take the train. People who own cars take the train. The modal split for Vienna overall is about 25% by car, and I would guess more than 50% public transport for journeys to nearby nature. The trains in Austria are excellent: safe, clean, and very punctual. Get on a train to nature and you will be surrounded by people in overpriced hiking gear.</p>
]]></description><pubDate>Mon, 21 Apr 2025 10:35:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=43750313</link><dc:creator>tmnvdb</dc:creator><comments>https://news.ycombinator.com/item?id=43750313</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43750313</guid></item><item><title><![CDATA[New comment by tmnvdb in "College Towns: Urbanism from a Past Era"]]></title><description><![CDATA[
<p>Most people would not. Paris - Berlin is dominated by flying.</p>
]]></description><pubDate>Sun, 20 Apr 2025 06:49:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=43741990</link><dc:creator>tmnvdb</dc:creator><comments>https://news.ycombinator.com/item?id=43741990</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43741990</guid></item><item><title><![CDATA[New comment by tmnvdb in "College Towns: Urbanism from a Past Era"]]></title><description><![CDATA[
<p>This really is nonsense, but somehow every time this topic comes up, people bring it up. The size of the country or its population density is not really relevant.<p>People in Europe don't take a train from Greece to Sweden. They fly. In fact, most fly Vienna to Amsterdam.<p>In the same way, somebody from New York would definitely fly to LA. (They are not driving either, by the way.)<p>That doesn't preclude the existence of public transport connecting NY to Philadelphia. It also does not preclude NY from being walkable! Or bikeable. It doesn't stop NY from having good public transport! It doesn't force you to drive to work in NY.<p>This is much more about local policy.<p>Different solutions at different scales!</p>
]]></description><pubDate>Sun, 20 Apr 2025 06:31:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=43741925</link><dc:creator>tmnvdb</dc:creator><comments>https://news.ycombinator.com/item?id=43741925</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43741925</guid></item><item><title><![CDATA[New comment by tmnvdb in "College Towns: Urbanism from a Past Era"]]></title><description><![CDATA[
<p>This is true in the US, but it is not a law of nature. It's the result of policy. There are whole cities built from scratch (outside the US) within the last 70 years that did not choose this model, and there are many new developments in older cities all over the world that reject the "car-only" model. There is no unstoppable flow of history at work here. It's politics and policy.</p>
]]></description><pubDate>Sun, 20 Apr 2025 06:22:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=43741896</link><dc:creator>tmnvdb</dc:creator><comments>https://news.ycombinator.com/item?id=43741896</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43741896</guid></item><item><title><![CDATA[New comment by tmnvdb in "College Towns: Urbanism from a Past Era"]]></title><description><![CDATA[
<p>Wild assertions, with another wild assertion as 'evidence'.</p>
]]></description><pubDate>Sun, 20 Apr 2025 06:07:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=43741847</link><dc:creator>tmnvdb</dc:creator><comments>https://news.ycombinator.com/item?id=43741847</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43741847</guid></item><item><title><![CDATA[New comment by tmnvdb in "College Towns: Urbanism from a Past Era"]]></title><description><![CDATA[
<p>I live in Vienna and people take public transport to nature <i>all the time</i>.</p>
]]></description><pubDate>Sun, 20 Apr 2025 06:05:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=43741844</link><dc:creator>tmnvdb</dc:creator><comments>https://news.ycombinator.com/item?id=43741844</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43741844</guid></item></channel></rss>