<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: lelag</title><link>https://news.ycombinator.com/user?id=lelag</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 28 Apr 2026 22:16:09 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=lelag" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by lelag in "The Cost of a Function Call"]]></title><description><![CDATA[
<p>This is all fairly obvious, no?<p>First, write clean code with functions and don’t obsess over call overhead. Once it works, profile, then optimize where it actually matters.<p>Premature optimization, etc.</p>
]]></description><pubDate>Mon, 09 Feb 2026 09:43:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=46943452</link><dc:creator>lelag</dc:creator><comments>https://news.ycombinator.com/item?id=46943452</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46943452</guid></item><item><title><![CDATA[New comment by lelag in "Meta Segment Anything Model Audio"]]></title><description><![CDATA[
<p>Indeed. I've tried to run it locally but couldn't get it running on my measly gaming-spec workstation.<p>It seems you need lots of RAM and VRAM. Reading the issues on GitHub[1], it does not seem many others have had success in using this effectively:<p>- someone with a 96 GB VRAM RTX 6000 Pro had CUDA OOM issues<p>- someone somehow made it work on an RTX 4090, but the RTF processing time was 12...<p>- someone with an RTX 5090 managed to use it, but with clips no longer than 20s<p>It seems the utility of the model for hobbyists with consumer-grade cards will be low.<p>[1]: <a href="https://github.com/facebookresearch/sam-audio/issues/24" rel="nofollow">https://github.com/facebookresearch/sam-audio/issues/24</a></p>
]]></description><pubDate>Fri, 19 Dec 2025 13:55:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=46325873</link><dc:creator>lelag</dc:creator><comments>https://news.ycombinator.com/item?id=46325873</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46325873</guid></item><item><title><![CDATA[New comment by lelag in "First recording of a dying human brain shows waves similar to memory flashbacks (2022)"]]></title><description><![CDATA[
<p>Miracle Max gave us a clear definition, if I recall: you die when you are "all dead"; as long as you are only mostly dead, you are slightly alive...<p>I'll let myself out now.</p>
]]></description><pubDate>Mon, 03 Nov 2025 10:22:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=45797594</link><dc:creator>lelag</dc:creator><comments>https://news.ycombinator.com/item?id=45797594</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45797594</guid></item><item><title><![CDATA[New comment by lelag in "In a first, Google has released data on how much energy an AI prompt uses"]]></title><description><![CDATA[
<p>In his last blog post, Sam Altman also revealed how much power the average chatgpt query uses, and it's in the same ballpark.<p>> People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours, about what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes. It also uses about 0.000085 gallons of water; roughly one fifteenth of a teaspoon.<p><a href="https://blog.samaltman.com/the-gentle-singularity" rel="nofollow">https://blog.samaltman.com/the-gentle-singularity</a></p>
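The quoted comparisons can be sanity-checked with quick arithmetic (the oven and bulb wattages below are assumptions, not figures from the quote):

```python
# Sanity check of the quoted per-query energy figure.
QUERY_WH = 0.34   # watt-hours per ChatGPT query, per the quote
OVEN_W = 1200     # assumed power draw of an electric oven element, in watts
BULB_W = 10       # assumed power draw of a high-efficiency LED bulb, in watts

# How long each appliance takes to consume 0.34 Wh.
oven_seconds = QUERY_WH / OVEN_W * 3600
bulb_minutes = QUERY_WH / BULB_W * 60

print(f"oven: {oven_seconds:.2f} s, bulb: {bulb_minutes:.2f} min")
# Roughly 1 second of oven time and about 2 minutes of bulb time,
# consistent with the quote.
```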
]]></description><pubDate>Thu, 21 Aug 2025 15:06:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=44973719</link><dc:creator>lelag</dc:creator><comments>https://news.ycombinator.com/item?id=44973719</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44973719</guid></item><item><title><![CDATA[New comment by lelag in "Research suggests Big Bang may have taken place inside a black hole"]]></title><description><![CDATA[
<p>Damn, I would not have guessed that Men In Black was actually a documentary...</p>
]]></description><pubDate>Wed, 11 Jun 2025 20:42:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=44251602</link><dc:creator>lelag</dc:creator><comments>https://news.ycombinator.com/item?id=44251602</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44251602</guid></item><item><title><![CDATA[New comment by lelag in "How we made our OCR code more accurate"]]></title><description><![CDATA[
<p>Maybe they want to compile the Apollo Guidance Computer source code...<p><a href="https://www.softwareheritage.org/wp-content/uploads/2019/07/Margaret_Hamilton_-_restoration.jpg" rel="nofollow">https://www.softwareheritage.org/wp-content/uploads/2019/07/...</a></p>
]]></description><pubDate>Thu, 22 May 2025 11:33:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=44061001</link><dc:creator>lelag</dc:creator><comments>https://news.ycombinator.com/item?id=44061001</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44061001</guid></item><item><title><![CDATA[New comment by lelag in "Clojuring the web application stack: Meditation One"]]></title><description><![CDATA[
<p>The Metabase "backend" is written in Clojure.<p>The web frontend is written in TypeScript/React.</p>
]]></description><pubDate>Wed, 21 May 2025 14:21:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=44051738</link><dc:creator>lelag</dc:creator><comments>https://news.ycombinator.com/item?id=44051738</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44051738</guid></item><item><title><![CDATA[New comment by lelag in "Game preservationists say Switch2 GameKey Cards are disheartening but inevitable"]]></title><description><![CDATA[
<p>Interesting point about PC going digital-only; Nintendo is a fascinating counter-example.<p>While they offer digital downloads on the eShop, their pricing actively discourages it.<p>Case in point: I just bought my kid a new first-party Switch game. The physical copy on Amazon was ~25% cheaper than the identical digital version on Nintendo's own eShop. Even my 9-year-old noted how illogical it seems: the physical version requires manufacturing, shipping, and retail markup, yet costs significantly less than the digital bits that have near-zero marginal cost.<p>It strongly suggests Nintendo wants the physical retail channel to thrive, or values the perceived permanence/resale value of cartridges.<p>This context makes the Switch 2 "gamekey" cartridges (physical auth token, digital download) fit their pattern of valuing a physical artifact and retail presence, even if the data delivery shifts.</p>
]]></description><pubDate>Thu, 01 May 2025 14:40:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=43858304</link><dc:creator>lelag</dc:creator><comments>https://news.ycombinator.com/item?id=43858304</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43858304</guid></item><item><title><![CDATA[Mitre to lay off 442 employees after budget cuts]]></title><description><![CDATA[
<p>Article URL: <a href="https://virginiabusiness.com/nova-govcon-firm-mitre-to-lay-off-442-employees-after-doge-cuts-contracts/">https://virginiabusiness.com/nova-govcon-firm-mitre-to-lay-off-442-employees-after-doge-cuts-contracts/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43702469">https://news.ycombinator.com/item?id=43702469</a></p>
<p>Points: 4</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 16 Apr 2025 07:22:25 +0000</pubDate><link>https://virginiabusiness.com/nova-govcon-firm-mitre-to-lay-off-442-employees-after-doge-cuts-contracts/</link><dc:creator>lelag</dc:creator><comments>https://news.ycombinator.com/item?id=43702469</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43702469</guid></item><item><title><![CDATA[New comment by lelag in "Albert Einstein's theory of relativity in words of four letters or less"]]></title><description><![CDATA[
<p>If I understand it correctly, that's a valid concern, but the way structured generation libraries like outlines[1] work is that they can generate multiple candidate continuations of the inference in parallel (which they call beam search).<p>One beam could be "This is a way to solv-", with no obvious "good" next token.
Another beam could be "This way is solv-", with "ing" as the obvious next token.<p>It will select the best beam for the output.<p>[1]:<a href="https://github.com/dottxt-ai/outlines">https://github.com/dottxt-ai/outlines</a></p>
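The beam idea can be sketched with a toy example (the token table and probabilities below are invented for illustration; a real engine would score beams with the model's log-probabilities):

```python
import math

# Hypothetical toy next-token distribution: given a prefix, a few candidate
# continuations with probabilities. A real engine would query the LLM here.
def next_tokens(prefix):
    table = {
        "": [("This", 0.6), ("That", 0.4)],
        "This": [("is", 0.3), ("was", 0.3)],        # no strong continuation
        "That": [("works", 0.95), ("fails", 0.05)],  # one strong continuation
        "This is": [("ok", 1.0)],
        "This was": [("ok", 1.0)],
        "That works": [("well", 1.0)],
        "That fails": [("badly", 1.0)],
    }
    return table.get(prefix, [])

def beam_search(k=2, steps=3):
    beams = [("", 0.0)]  # (prefix, cumulative log-probability)
    for _ in range(steps):
        candidates = []
        for prefix, score in beams:
            exts = next_tokens(prefix)
            if not exts:                      # finished beam: carry forward
                candidates.append((prefix, score))
                continue
            for tok, p in exts:
                new_prefix = (prefix + " " + tok).strip()
                candidates.append((new_prefix, score + math.log(p)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:k]
    return beams[0][0]  # best-scoring sequence overall

# Greedy decoding would commit to "This" (p=0.6) and get stuck with weak
# continuations; keeping k=2 beams lets the globally better path win.
print(beam_search())
```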
]]></description><pubDate>Mon, 14 Apr 2025 09:41:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=43679604</link><dc:creator>lelag</dc:creator><comments>https://news.ycombinator.com/item?id=43679604</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43679604</guid></item><item><title><![CDATA[New comment by lelag in "Albert Einstein's theory of relativity in words of four letters or less (1999)"]]></title><description><![CDATA[
<p>I can’t say for certain, but I’d guess that writing without the letter “e” is slightly more difficult in French than in English. For one, “e” is a bit more common in French (around 15% of all letters, versus about 12% in English). But more importantly, French grammar adds extra challenges—like gender agreement, where feminine forms often require an “e”, and the frequent use of articles like le and les, which become unusable.<p>That said, I think the most impressive achievement is the English translation of the French novel. Writing an original constrained novel is hard enough, but translating one means you can’t just steer the story wherever you like. You have to preserve the plot, tone, and themes of the original, all while respecting a completely different set of linguistic limitations. That’s a remarkable balancing act.</p>
]]></description><pubDate>Mon, 14 Apr 2025 09:22:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=43679536</link><dc:creator>lelag</dc:creator><comments>https://news.ycombinator.com/item?id=43679536</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43679536</guid></item><item><title><![CDATA[New comment by lelag in "Albert Einstein's theory of relativity in words of four letters or less (1999)"]]></title><description><![CDATA[
<p>I was going to point that out.<p>What I will add is that constrained generation is supported by the major inference engines like llama.cpp, vLLM and the likes, so what you are describing is actually trivial on locally hosted models: you just have to provide a regex that prevents them from using the letter 'e' in the output.</p>
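As a sketch of the constraint itself, independent of any particular engine (the helper function below is hypothetical), here is a regex that rejects any output containing the letter 'e':

```python
import re

# Pattern matching only strings with no 'e' (either case). A constrained
# decoder would mask out any token that breaks this pattern mid-generation.
NO_E = re.compile(r"[^eE]*")

def allowed(text: str) -> bool:
    """True if the text satisfies the no-'e' constraint."""
    return NO_E.fullmatch(text) is not None

print(allowed("A void of that symbol"))  # True: no 'e' anywhere
print(allowed("the letter"))             # False: contains 'e'
```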
]]></description><pubDate>Mon, 14 Apr 2025 09:10:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=43679467</link><dc:creator>lelag</dc:creator><comments>https://news.ycombinator.com/item?id=43679467</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43679467</guid></item><item><title><![CDATA[New comment by lelag in "What satellite images reveal about Myanmar's quake [video]"]]></title><description><![CDATA[
<p>More like 2,000 Tsar Bombas, if your energy release calculation is correct.</p>
]]></description><pubDate>Sat, 05 Apr 2025 15:09:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=43594065</link><dc:creator>lelag</dc:creator><comments>https://news.ycombinator.com/item?id=43594065</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43594065</guid></item><item><title><![CDATA[New comment by lelag in "Gemma3 – The current strongest model that fits on a single GPU"]]></title><description><![CDATA[
<p>OSS models do not have to be local models, and it's not just about privacy, imo.<p>DeepSeek R1 hosting is out of reach for most, but it being open is a game changer if you are building a business that needs the SoTA capabilities of such a large model: not because you will necessarily host it yourself, but because you can't be locked out of using it.<p>If you build your business on top of OpenAI and they decide they don't like you, they can shut you down. If you use an open model like R1, you always have the option to self-host, even if it can be costly, and not be at the mercy of a third party being able to kill your business by shutting down your access to their service.</p>
]]></description><pubDate>Wed, 12 Mar 2025 11:03:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=43341934</link><dc:creator>lelag</dc:creator><comments>https://news.ycombinator.com/item?id=43341934</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43341934</guid></item><item><title><![CDATA[New comment by lelag in "EU to impose counter tariffs on $28 billion of US goods"]]></title><description><![CDATA[
<p>Aren't we already living in the Biff got rich timeline?</p>
]]></description><pubDate>Wed, 12 Mar 2025 10:08:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=43341531</link><dc:creator>lelag</dc:creator><comments>https://news.ycombinator.com/item?id=43341531</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43341531</guid></item><item><title><![CDATA[New comment by lelag in "Show HN: Open-source, native audio turn detection model"]]></title><description><![CDATA[
<p>Thanks for the explanation. I guess it makes some sense, considering many people with no NLP background are using those models now…</p>
]]></description><pubDate>Fri, 07 Mar 2025 18:33:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=43292804</link><dc:creator>lelag</dc:creator><comments>https://news.ycombinator.com/item?id=43292804</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43292804</guid></item><item><title><![CDATA[New comment by lelag in "Show HN: Open-source, native audio turn detection model"]]></title><description><![CDATA[
<p>Yes, weird that they didn't use that term for this project.</p>
]]></description><pubDate>Fri, 07 Mar 2025 16:25:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=43291503</link><dc:creator>lelag</dc:creator><comments>https://news.ycombinator.com/item?id=43291503</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43291503</guid></item><item><title><![CDATA[New comment by lelag in "QwQ-32B: Embracing the Power of Reinforcement Learning"]]></title><description><![CDATA[
<p>If that's an issue, there's a workaround using structured generation to force it to output a </think> token after some threshold and make it write the final answer.<p>It's a method used to control thinking-token generation, showcased in this paper: <a href="https://arxiv.org/abs/2501.19393" rel="nofollow">https://arxiv.org/abs/2501.19393</a></p>
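A minimal sketch of that budget-forcing trick (the token strings and the budget value are assumptions; a real implementation hooks into the engine's sampling loop, as described in the paper):

```python
MAX_THINK = 8  # hypothetical budget for thinking tokens

def budget_force(stream):
    """Consume tokens from the model; once the thinking budget is spent,
    inject the closing delimiter and force the final-answer phase."""
    out, thinking, count = [], False, 0
    for tok in stream:
        out.append(tok)
        if tok == "<think>":
            thinking = True
        elif tok == "</think>":
            thinking = False
        elif thinking:
            count += 1
            if count >= MAX_THINK:
                out.append("</think>")       # forced end of thinking
                out.append("Final answer:")  # steer the model to answer
                break
    return " ".join(out)

# A model that would otherwise think indefinitely gets cut off after 8 tokens.
print(budget_force(iter(["<think>"] + ["hm"] * 20)))
```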
]]></description><pubDate>Thu, 06 Mar 2025 15:51:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=43281592</link><dc:creator>lelag</dc:creator><comments>https://news.ycombinator.com/item?id=43281592</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43281592</guid></item><item><title><![CDATA[New comment by lelag in "Italy moves to reverse anti-nuclear stance"]]></title><description><![CDATA[
<p>> And where does the nuclear fuel come from? Russia.<p>Not true at all. Russia produces 5% of the world's uranium, and they probably use quite a lot of that domestically, given they produce 8% of all nuclear power in the world with their own plants.<p>Kazakhstan + Uzbekistan account for 50% of world production. Canada is second and will be happy to start selling to the EU. Namibia and Australia both produce twice as much as Russia.<p>Not to say that the supply of natural uranium is not a concern, because you do depend on a small list of countries, but we don't need to buy any from Russia.<p>Sources:<p><a href="https://en.wikipedia.org/wiki/List_of_countries_by_uranium_production" rel="nofollow">https://en.wikipedia.org/wiki/List_of_countries_by_uranium_p...</a><p><a href="https://en.wikipedia.org/wiki/Nuclear_power_by_country" rel="nofollow">https://en.wikipedia.org/wiki/Nuclear_power_by_country</a></p>
]]></description><pubDate>Tue, 04 Mar 2025 13:18:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=43254141</link><dc:creator>lelag</dc:creator><comments>https://news.ycombinator.com/item?id=43254141</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43254141</guid></item><item><title><![CDATA[New comment by lelag in "The weird afterlife of Xbox Kinect"]]></title><description><![CDATA[
<p>The problem with the original Kinect (v1) is that good tracking software for it was never really written. Most applications that support it just use the original Microsoft SDK to do the motion tracking, and it's just not very good: the main issue is that it always assumes the tracked person is directly facing the camera, and it is very bad at dealing with occlusion. The good thing about it is that it ran in real-time on a potato.<p>In order to get a good result, someone would need to train a good model for HPE that could use the point cloud data directly, but it seems nobody cares about depth sensors anymore; most efforts are going to HPE from regular 2D video (like the MediaPipe holistic model). And given the results you can get with MediaPipe, OpenPose and the likes, it's understandable nobody is bothering to work with low-resolution point clouds anymore for 3D HPE.<p>The only use-case I can think of for a Kinect v1 in 2025 would be robotics, if you want a low-latency, low-resolution point cloud for your robot control, but even there I think we are moving to big vision models capable of making sense of regular video feeds.</p>
]]></description><pubDate>Mon, 03 Mar 2025 15:33:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=43242712</link><dc:creator>lelag</dc:creator><comments>https://news.ycombinator.com/item?id=43242712</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43242712</guid></item></channel></rss>