<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: tveita</title><link>https://news.ycombinator.com/user?id=tveita</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 06 Apr 2026 00:24:42 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=tveita" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by tveita in "Is it a pint?"]]></title><description><![CDATA[
<p><a href="https://en.wikipedia.org/wiki/Fill_line" rel="nofollow">https://en.wikipedia.org/wiki/Fill_line</a><p>Selling drinks in mislabeled containers should warrant a fraud report to your local consumer protection agency. A crowdsourcing app seems like the wrong tool here.</p>
]]></description><pubDate>Mon, 23 Mar 2026 16:45:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47491928</link><dc:creator>tveita</dc:creator><comments>https://news.ycombinator.com/item?id=47491928</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47491928</guid></item><item><title><![CDATA[New comment by tveita in "Apply video compression on KV cache to 10,000x less error at Q4 quant"]]></title><description><![CDATA[
<p>"Video compression" by analogy only: what this actually claims to do is delta-encode the values of each token against the previous token.<p>Interesting idea, but the results seem almost suspicious? Even accounting for the extra bits used to store the 16-bit start value for each block - ~5% for k=64.<p>The code does funky things: for instance, the encoder updates the reference value for each encoded token using the non-quantized value! [1]
But the decoder just ignores all that. [2] How can this work?<p>[1] <a href="https://github.com/cenconq25/delta-compress-llm/commit/f185ffcd4cf5ee8041a95d93008bbcc0914d04e4#diff-7974ac143ef46eaf6e413b2aa0aa7bfe1e81958925597f6b922f14886ee53883R111" rel="nofollow">https://github.com/cenconq25/delta-compress-llm/commit/f185f...</a><p>[2] <a href="https://github.com/cenconq25/delta-compress-llm/commit/f185ffcd4cf5ee8041a95d93008bbcc0914d04e4#diff-7974ac143ef46eaf6e413b2aa0aa7bfe1e81958925597f6b922f14886ee53883R160" rel="nofollow">https://github.com/cenconq25/delta-compress-llm/commit/f185f...</a></p>
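A quick toy model of why that encoder/decoder mismatch should matter (this is not the repo's code; the quantizer step, shapes, and random-walk data are all invented for illustration):

```python
import numpy as np

SCALE = 0.5  # made-up uniform quantizer step, a stand-in for the 4-bit delta codes

def quantize(delta):
    # uniform rounding quantizer for per-token deltas
    return np.round(delta / SCALE) * SCALE

def encode(tokens, track_true_values):
    """Delta-encode each token against a running reference.

    track_true_values=True mimics the linked commit: the encoder's reference
    follows the original (non-quantized) values, while the decoder can only
    ever accumulate the quantized deltas it receives."""
    ref = np.zeros_like(tokens[0])
    deltas = []
    for t in tokens:
        d = quantize(t - ref)
        deltas.append(d)
        ref = t if track_true_values else ref + d
    return deltas

def decode(deltas):
    # the decoder's reference is just the running sum of quantized deltas
    ref = np.zeros_like(deltas[0])
    out = []
    for d in deltas:
        ref = ref + d
        out.append(ref.copy())
    return out

rng = np.random.default_rng(0)
tokens = list(rng.normal(size=(64, 8)).cumsum(axis=0))  # a random walk per channel

open_loop = decode(encode(tokens, track_true_values=True))
closed_loop = decode(encode(tokens, track_true_values=False))

err_open = max(float(np.abs(t - d).max()) for t, d in zip(tokens, open_loop))
err_closed = max(float(np.abs(t - d).max()) for t, d in zip(tokens, closed_loop))
print(err_closed, err_open)  # closed loop stays within SCALE/2; open loop drifts
```

When the encoder tracks the true values, the decoder accumulates every rounding error; when the encoder tracks its own quantized reference (closed loop), the error stays bounded by half a quantizer step.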
]]></description><pubDate>Mon, 23 Mar 2026 10:05:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47487360</link><dc:creator>tveita</dc:creator><comments>https://news.ycombinator.com/item?id=47487360</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47487360</guid></item><item><title><![CDATA[New comment by tveita in "I was interviewed by an AI bot for a job"]]></title><description><![CDATA[
<p>Sure, for instance, if all of them go through a 1-hour AI interview, then you might find a better candidate, at the cost of 1000 man-hours of work. You hire that person, another company opens a position, gets 999 applicants, sends them all its own AI interview, and so forth.<p>How much better would your hire be, considering that you managed to check all 1000 of them rather than just 50?<p>Assume that candidate fitness is a number normally distributed around 0 (half of them obviously being negative), that both you and the AI can perfectly pick out the best candidate, and that you picked the 50 to interview completely at random. The average actually seems to be around 40% better? Surprisingly decent. Is that improvement worth 1000 man-hours?<p>So attempt two here: maybe instead of each company sending candidates through an interview, there should be a common gatekeeper. All working-age people take the same 1-hour AI interview, and the glorious overseer assigns them to the position they are best suited for.<p>(An actual answer here is you assess how important it is to get "the best candidate", and you interview enough people to get a reasonable approximation. The hour cost on your side is what keeps you honest. If wasting candidate time is free on your side, you're going to waste 500 man-hours of work for a 5% better result for you.)</p>
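For anyone who wants to check the ~40% figure: under exactly those assumptions (i.i.d. standard-normal fitness, perfect selection of the best), a quick Monte Carlo reproduces it:

```python
import numpy as np

rng = np.random.default_rng(42)
TRIALS = 5_000  # simulated hiring rounds

def expected_best(n):
    # average fitness of the best of n i.i.d. standard-normal candidates
    return rng.normal(size=(TRIALS, n)).max(axis=1).mean()

best_of_50 = expected_best(50)      # roughly 2.25
best_of_1000 = expected_best(1000)  # roughly 3.24
print(best_of_1000 / best_of_50)    # roughly 1.44, i.e. ~40% "better"
```

The expected maximum of n standard normals grows only like sqrt(2 ln n), which is why checking 20x as many candidates buys such a modest improvement.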
]]></description><pubDate>Thu, 12 Mar 2026 11:28:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47349179</link><dc:creator>tveita</dc:creator><comments>https://news.ycombinator.com/item?id=47349179</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47349179</guid></item><item><title><![CDATA[New comment by tveita in "I was interviewed by an AI bot for a job"]]></title><description><![CDATA[
<p>I absolutely agree in principle, but I understand that the companies are also seeing a lot more applicants trying to skate past screening and interviews with AI assistance.<p>Connecting verified humans for a mutually respectful chat is a trust problem that companies like LinkedIn should be creating solutions for, instead of offering both sides automated shovels to shovel slop faster.</p>
]]></description><pubDate>Thu, 12 Mar 2026 10:28:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47348726</link><dc:creator>tveita</dc:creator><comments>https://news.ycombinator.com/item?id=47348726</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47348726</guid></item><item><title><![CDATA[New comment by tveita in "I was interviewed by an AI bot for a job"]]></title><description><![CDATA[
<p>> They are the ones who started using AI in the hiring process<p>Aren't you ignoring the reports of companies receiving thousands of ChatGPT-written resumes, bots sending applications, and interviews with applicants being live coached by AI?<p>This is a breakdown of trust on both sides.</p>
]]></description><pubDate>Thu, 12 Mar 2026 10:23:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47348686</link><dc:creator>tveita</dc:creator><comments>https://news.ycombinator.com/item?id=47348686</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47348686</guid></item><item><title><![CDATA[New comment by tveita in "Many SWE-bench-Passing PRs would not be merged"]]></title><description><![CDATA[
<p>Probably more like the long tail of software - software that was created for a particular purpose in a particular domain by a single person in the company who also happened to know programming - maybe just as Excel macros.<p>I strongly suspect the long tail is shifting and expanding now, and will eventually consist mostly of software for one-off purposes authored by people who <i>don't</i> know how to code, and probably have a poor understanding of how it actually works.</p>
]]></description><pubDate>Thu, 12 Mar 2026 08:54:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47348098</link><dc:creator>tveita</dc:creator><comments>https://news.ycombinator.com/item?id=47348098</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47348098</guid></item><item><title><![CDATA[New comment by tveita in "Lazy JWT Key Rotation in .NET: Redis-Powered JWKS That Just Works"]]></title><description><![CDATA[
<p>There are some odd choices here.<p><pre><code>  - 90 days is a very long time to keep keys; I'd expect rotation somewhere between 10 minutes and a day? I don't see any justification for this in the article.
  - There's no need to keep any private keys except the current signing key and maybe an upcoming key. Old keys should be deleted on rotation, not just left to eventually expire.
  - https://github.com/aaroncpina/Aaron.Pina.Blog.Article.08/blob/776e3b365d177ed3b779242181f0045cd6387b3f/Aaron.Pina.Blog.Article.08.Server/Program.cs#L70-L77 - You're not allowed to get a new token if you already have one? That's unworkable - what if you want to log in on a new device? Or what if the client fails to receive the token after the server sends it, the classic snag with use-only-once tokens?
  - A fun thing about setting an expiry on the keys is that it makes them eligible for eviction with Redis' standard volatile-lru policy. You can configure this, but it would make me nervous.</code></pre></p>
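To sketch the second point, here's what rotation without lingering private keys could look like (a hypothetical KeyRing helper, not code from the article; secrets.token_bytes stands in for real RSA/EC key pairs published via JWKS):

```python
import secrets
import time

class KeyRing:
    """Hypothetical rotation policy: only the current private key and one
    pre-staged next key exist at any moment; retired keys survive only as
    public halves, and only until tokens signed with them have expired."""

    def __init__(self, token_ttl=3600):
        self.token_ttl = token_ttl          # max lifetime of issued tokens
        self.current = self._new_key()      # the only live signing key
        self.next = self._new_key()         # pre-staged so clients can prefetch it
        self.retired_public = {}            # kid -> (public part, retired_at)

    def _new_key(self):
        return {"kid": secrets.token_hex(8),
                "private": secrets.token_bytes(32),
                "public": secrets.token_bytes(32)}

    def rotate(self, now=None):
        now = time.time() if now is None else now
        # Drop the private half immediately; keep the public half only
        # as long as a token signed with it could still be in flight.
        self.retired_public[self.current["kid"]] = (self.current["public"], now)
        self.current, self.next = self.next, self._new_key()
        self.retired_public = {kid: (pub, t)
                               for kid, (pub, t) in self.retired_public.items()
                               if now - t < self.token_ttl}

ring = KeyRing(token_ttl=600)
old_kid = ring.current["kid"]
ring.rotate(now=0)
assert old_kid in ring.retired_public      # still verifiable...
ring.rotate(now=1000)
assert old_kid not in ring.retired_public  # ...until its tokens have expired
```

Nothing here ever needs a 90-day retention window, and no private key outlives its own rotation.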
]]></description><pubDate>Tue, 10 Mar 2026 10:22:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47321323</link><dc:creator>tveita</dc:creator><comments>https://news.ycombinator.com/item?id=47321323</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47321323</guid></item><item><title><![CDATA[New comment by tveita in "Agentic Engineering Patterns"]]></title><description><![CDATA[
<p>I've definitely seen Opus go to town when asked to test a fairly simple builder.
Possibly it inferred something about testing the "contract", and went on to test such properties as<p><pre><code>  - none of the "final" fields have changed after calling each method
  - these two immutable objects we just confirmed differ on a property are not the same object
</code></pre>
That was in addition to multiple tests with essentially identical code, multiple test classes with largely duplicated tests, etc.</p>
]]></description><pubDate>Wed, 04 Mar 2026 23:20:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47255404</link><dc:creator>tveita</dc:creator><comments>https://news.ycombinator.com/item?id=47255404</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47255404</guid></item><item><title><![CDATA[New comment by tveita in "Weave – A language aware merge algorithm based on entities"]]></title><description><![CDATA[
<p>>  Elijah Newren, who wrote git's merge-ort (the default merge strategy), reviewed weave and said language-aware content merging is the right approach, that he's been asked about it enough times to be certain there's demand, and that our fallback-to-line-level strategy for unsupported languages is "a very reasonable way to tackle the problem." Taylor Blau from the Git team said he's "really impressed" and connected us with Elijah. The creator of libgit2 starred the repo. Martin von Zweigbergk (creator of jj) has also been excited about the direction.<p>Are any of these statements public, or is this all private communication?<p>> We are also working with GitButler team to integrate it as a research feature.<p>Referring to this discussion, I assume: <a href="https://github.com/gitbutlerapp/gitbutler/discussions/12274" rel="nofollow">https://github.com/gitbutlerapp/gitbutler/discussions/12274</a></p>
]]></description><pubDate>Wed, 04 Mar 2026 16:13:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47249641</link><dc:creator>tveita</dc:creator><comments>https://news.ycombinator.com/item?id=47249641</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47249641</guid></item><item><title><![CDATA[New comment by tveita in "Show HN: Respectify – A comment moderator that teaches people to argue better"]]></title><description><![CDATA[
<p>I'll wager that 95% of incitive and unhelpful comments aren't written by "bad faith actors" as you define them, but by ordinary people carried away by emotions or mob sentiment.<p>Just a reminder saying "this probably isn't worth replying to" should help a lot. But alas, it would directly reduce precious engagement.</p>
]]></description><pubDate>Thu, 26 Feb 2026 11:21:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47164576</link><dc:creator>tveita</dc:creator><comments>https://news.ycombinator.com/item?id=47164576</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47164576</guid></item><item><title><![CDATA[New comment by tveita in "100M-Row Challenge with PHP"]]></title><description><![CDATA[
<p>> Also, the generator will use a seeded randomizer so that, for local development, you work on the same dataset as others<p>Except that the generator script generates dates relative to time()?</p>
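The article's generator is PHP, but the determinism fix is language-agnostic: derive dates from the seed and a pinned epoch rather than the wall clock. A minimal Python sketch (gen_rows and FIXED_EPOCH are invented for illustration):

```python
import random

FIXED_EPOCH = 1_700_000_000  # a pinned timestamp instead of time()

def gen_rows(seed, n=5):
    rng = random.Random(seed)
    # Every field, including dates, comes from the seeded RNG plus a constant,
    # so two runs with the same seed produce byte-identical datasets.
    return [(FIXED_EPOCH - rng.randrange(86_400 * 365), rng.random())
            for _ in range(n)]

assert gen_rows(1234) == gen_rows(1234)  # reproducible across runs and machines
```

With time() in the mix, two people running the generator a day apart get different datasets, defeating the point of seeding.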
]]></description><pubDate>Wed, 25 Feb 2026 13:36:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47151267</link><dc:creator>tveita</dc:creator><comments>https://news.ycombinator.com/item?id=47151267</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47151267</guid></item><item><title><![CDATA[New comment by tveita in "Looking Back on Phabricator for Code Review"]]></title><description><![CDATA[
<p>> In GitHub, you have to switch tabs (which is slow and distracting) to go between the PR summary and the code.<p>As a case study of Github UI friction, take merging a Dependabot PR from the PRs tab, with code approval required before merges. By my count this takes 6 clicks, and none of them approach a 'snappy' response time.<p>This is for mostly trivial single-line diffs. The entire thing could be 1 click - a hover preview on the PR list, and an 'approve and merge' button.<p>(To list them out: Click PR, "Files changed", "Submit review", "Approve", "Submit Review", "Merge")</p>
]]></description><pubDate>Mon, 23 Feb 2026 00:22:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47116400</link><dc:creator>tveita</dc:creator><comments>https://news.ycombinator.com/item?id=47116400</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47116400</guid></item><item><title><![CDATA[New comment by tveita in "Large Language Model Reasoning Failures"]]></title><description><![CDATA[
<p>You could start by reading research on the topic instead of disregarding expert opinion based on your own gut feeling.<p>E.g. <a href="https://www.anthropic.com/research/tracing-thoughts-language-model#mental-math" rel="nofollow">https://www.anthropic.com/research/tracing-thoughts-language...</a></p>
]]></description><pubDate>Sat, 21 Feb 2026 15:42:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47101789</link><dc:creator>tveita</dc:creator><comments>https://news.ycombinator.com/item?id=47101789</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47101789</guid></item><item><title><![CDATA[New comment by tveita in "15 years later, Microsoft merged my diagram"]]></title><description><![CDATA[
<p>And LinkedIn is Microsoft as well...<p>IMO Microsoft is right at the nexus of opportunity for solving some of the large _problems_ that AI introduces.<p>Employers and job seekers both need a way to verify that they are talking to real, identified people who are willing to put in some effort beyond spamming AI or wasting your time on AI-run filters. LinkedIn could help them.<p>Programmers need access to real human-verified code and projects they can trust, not low-effort slop that could be backdoored at any moment by people with unclear motives and provenance. GitHub could help.<p>etc. etc. for Office, Outlook ...<p>But instead they've decided to ride the slop waves, throw QA to the wind, and call every bird and stone "copilot".</p>
]]></description><pubDate>Wed, 18 Feb 2026 18:13:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47064174</link><dc:creator>tveita</dc:creator><comments>https://news.ycombinator.com/item?id=47064174</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47064174</guid></item><item><title><![CDATA[New comment by tveita in "Claude Code is suddenly everywhere inside Microsoft"]]></title><description><![CDATA[
<p>The Copilot IntelliJ integration on the other hand is atrocious: <a href="https://plugins.jetbrains.com/plugin/17718-github-copilot--your-ai-pair-programmer" rel="nofollow">https://plugins.jetbrains.com/plugin/17718-github-copilot--y...</a><p>I'm amazed that a company that's supposedly one of the big AI stocks seemingly won't spare a single QA position for a major development tool. It really validates Claude's CLI-first approach.</p>
]]></description><pubDate>Mon, 02 Feb 2026 15:45:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=46857280</link><dc:creator>tveita</dc:creator><comments>https://news.ycombinator.com/item?id=46857280</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46857280</guid></item><item><title><![CDATA[New comment by tveita in "Clawdbot Renames to Moltbot"]]></title><description><![CDATA[
<p>We might not be far from the first prompt worm</p>
]]></description><pubDate>Wed, 28 Jan 2026 17:56:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=46799024</link><dc:creator>tveita</dc:creator><comments>https://news.ycombinator.com/item?id=46799024</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46799024</guid></item><item><title><![CDATA[New comment by tveita in "AI Usage Policy"]]></title><description><![CDATA[
<p>I think people's attitudes would be better calibrated to reality if LLM providers were legally required to call their service "a random drunk guy on the subway".<p>E.g.<p>"A random drunk guy on the subway suggested that this wouldn't be a problem if we were running the latest SQL Server version." "Huh, I guess that's worth testing."</p>
]]></description><pubDate>Fri, 23 Jan 2026 14:45:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=46733089</link><dc:creator>tveita</dc:creator><comments>https://news.ycombinator.com/item?id=46733089</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46733089</guid></item><item><title><![CDATA[New comment by tveita in "Stevey's Birthday Blog"]]></title><description><![CDATA[
<p>> Indeed, the Gas-Town token is down 97% from all-time high,<p>What else could possibly have happened? Surely everyone put their money in with the express intention of participating in a pump and dump.<p>Not taking the money would have been the high road. I don't think basing the economy on gambling and scams is good for society. But who could realistically claim to be a 'victim' here?</p>
]]></description><pubDate>Thu, 22 Jan 2026 14:13:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=46719476</link><dc:creator>tveita</dc:creator><comments>https://news.ycombinator.com/item?id=46719476</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46719476</guid></item><item><title><![CDATA[New comment by tveita in "Trump's Letter to Norway Should Be the Last Straw"]]></title><description><![CDATA[
<p>It's going to be difficult to explain to anyone looking back on this time how they managed to keep up the pretense that this was a functional adult with his faculties intact.<p>Here you are attributing a "master of FUD" angle to a letter that might as well have been written with crayons.</p>
]]></description><pubDate>Mon, 19 Jan 2026 20:10:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=46683845</link><dc:creator>tveita</dc:creator><comments>https://news.ycombinator.com/item?id=46683845</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46683845</guid></item><item><title><![CDATA[New comment by tveita in "OpenAI Must Turn over 20M ChatGPT Logs, Judge Affirms"]]></title><description><![CDATA[
<p>Between state actors, AI competitors and criminals, there's a lot of people who wouldn't mind access to a dump of 20M ChatGPT logs.<p>Showing substantial copyright infringement of the New York Times seems about the only thing it wouldn't be any good for.</p>
]]></description><pubDate>Tue, 06 Jan 2026 22:21:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=46519672</link><dc:creator>tveita</dc:creator><comments>https://news.ycombinator.com/item?id=46519672</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46519672</guid></item></channel></rss>