<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: Flashtoo</title><link>https://news.ycombinator.com/user?id=Flashtoo</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 26 Apr 2026 10:14:40 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=Flashtoo" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by Flashtoo in "Acetaminophen vs. ibuprofen"]]></title><description><![CDATA[
<p>There is a danger with chronic abuse, which results in upregulation. Mixing the two at once is no problem for the liver, which is also why patient information leaflets for paracetamol do not contain a warning to avoid alcohol, only one about chronic alcohol abuse.<p>Your crappy source is vague about what consumption pattern constitutes a risk, and it actually cites a better source that supports the idea that acute alcohol consumption reduces paracetamol toxicity. <a href="https://www.biorxiv.org/content/10.1101/2020.07.07.191916v1.full" rel="nofollow">https://www.biorxiv.org/content/10.1101/2020.07.07.191916v1....</a><p>That's a mathematical model, but this relationship between the two is what I was taught in medical school, and it is still supported by the science. There are plenty of other sources; I just picked that one because your article cites it. Just search for "paracetamol ethanol" on Google Scholar.</p>
]]></description><pubDate>Wed, 22 Apr 2026 11:44:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47862201</link><dc:creator>Flashtoo</dc:creator><comments>https://news.ycombinator.com/item?id=47862201</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47862201</guid></item><item><title><![CDATA[New comment by Flashtoo in "Acetaminophen vs. ibuprofen"]]></title><description><![CDATA[
<p>This is correct.</p>
]]></description><pubDate>Wed, 22 Apr 2026 07:56:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47860480</link><dc:creator>Flashtoo</dc:creator><comments>https://news.ycombinator.com/item?id=47860480</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47860480</guid></item><item><title><![CDATA[New comment by Flashtoo in "The revenge of the data scientist"]]></title><description><![CDATA[
<p>These are good practices to keep in mind when setting up GenAI solutions, but I'm not convinced that this part of the job will allow "data scientist" as a profession to thrive. Here's my pessimistic take.<p>Data scientists were appreciated largely because of their ability to create models that unlock business value. Model creation was a dark magic that you needed strong mathematical skills to perform - or at least that's the image, even if in reality you just slap XGBoost on a problem and call it a day. Data scientists were enablers and value creators.<p>With GenAI, value creation is apparently done by the LLM provider and whoever in your company calls the API, which could really be any engineering team. Coaxing the right behavior out of the LLM is a bit of black magic in itself, but it's not something that requires deep mathematical knowledge. Knowing how gradients are calculated in a decoder-only transformer doesn't really help you make the LLM follow instructions. In fact, all your business stakeholders are constantly prompting chatbots themselves, so even if you provide some expertise here they will just see you as someone doing the same thing they do when they summarize an email.<p>So that leaves the part the OP discusses: evaluation and monitoring. These are not sexy tasks, and from the point of view of business stakeholders they are not the primary value add. In fact, they are barriers that get in the way of taking the POC someone slapped together in Copilot (it works!) and putting that solution in production. They're not even strictly necessary if you just want to move fast and break things. Appreciation for this kind of work is most present in large risk-averse companies, but even there it can be tricky to convince management that this is a job that needs to be done by a highly paid statistician with a graduate degree.<p>What's the way forward?
Convince management that people with the job title "data scientist" should be allowed to gatekeep building LLM solutions? Maybe I'm overestimating how good the average AI-aware software engineer is at this stuff, but I don't see the professional moat.</p>
]]></description><pubDate>Wed, 01 Apr 2026 22:13:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47607214</link><dc:creator>Flashtoo</dc:creator><comments>https://news.ycombinator.com/item?id=47607214</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47607214</guid></item><item><title><![CDATA[New comment by Flashtoo in "Mr. Chatterbox is a Victorian-era ethically trained model"]]></title><description><![CDATA[
<p>What makes you think the desired effect is to have an LLM that speaks in an old-timey style? The training process is the whole point.</p>
]]></description><pubDate>Tue, 31 Mar 2026 11:50:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47585991</link><dc:creator>Flashtoo</dc:creator><comments>https://news.ycombinator.com/item?id=47585991</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47585991</guid></item><item><title><![CDATA[New comment by Flashtoo in "GPT-5.4"]]></title><description><![CDATA[
<p>> Prompts with more than 272K input tokens are priced at 2x input and 1.5x output for the full session for standard, batch, and flex.</p>
]]></description><pubDate>Thu, 05 Mar 2026 20:12:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47266677</link><dc:creator>Flashtoo</dc:creator><comments>https://news.ycombinator.com/item?id=47266677</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47266677</guid></item><item><title><![CDATA[New comment by Flashtoo in "We automated everything except knowing what's going on"]]></title><description><![CDATA[
<p>Then just post your opinions rather than the text the LLM dreamed around your opinions. Short posts and tweets tend to be well-liked on HN; there is no need to puff it up into a big blog post.</p>
]]></description><pubDate>Tue, 03 Mar 2026 13:58:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47232391</link><dc:creator>Flashtoo</dc:creator><comments>https://news.ycombinator.com/item?id=47232391</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47232391</guid></item><item><title><![CDATA[New comment by Flashtoo in "Vibe coded Lovable-hosted app littered with basic flaws exposed 18K users"]]></title><description><![CDATA[
<p>This is true beyond software. It used to be that the proof of the thinking process was in the resulting artifact. You can no longer infer, from the existence of a piece of text and its level of polish, that the apparent author has put at least a reasonable amount of thought into it. This applies to comments, blogs, emails, and, most troublingly, I've seen it happen at my job with things like requirement specs. Now the veneer of quality makes it much harder to know how much skepticism to judge the contents with. And it's too tiring to be maximally skeptical about everything.</p>
]]></description><pubDate>Fri, 27 Feb 2026 19:47:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47184709</link><dc:creator>Flashtoo</dc:creator><comments>https://news.ycombinator.com/item?id=47184709</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47184709</guid></item><item><title><![CDATA[New comment by Flashtoo in "This time is different"]]></title><description><![CDATA[
<p>What exactly are you claiming here? That a handful of theorems about the limits of mathematics and provability somehow combine to show that the current LLM-based AI developments will inevitably live up to what is expected of them? And that this is obvious to a select few? That all seems unlikely, to say the least.</p>
]]></description><pubDate>Thu, 26 Feb 2026 20:47:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47171814</link><dc:creator>Flashtoo</dc:creator><comments>https://news.ycombinator.com/item?id=47171814</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47171814</guid></item><item><title><![CDATA[New comment by Flashtoo in "Magical Mushroom – Europe's first industrial-scale mycelium packaging producer"]]></title><description><![CDATA[
<p>Coffee grounds</p>
]]></description><pubDate>Mon, 23 Feb 2026 17:09:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47125178</link><dc:creator>Flashtoo</dc:creator><comments>https://news.ycombinator.com/item?id=47125178</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47125178</guid></item><item><title><![CDATA[New comment by Flashtoo in "AI's Dial-Up Era"]]></title><description><![CDATA[
<p>> Evolution by natural selection is not a deterministic process so 4 billion years is just one of many possible periods of time needed but not necessarily the longest or the shortest.<p>That's why I say that is an upper bound - we know that it _has_ happened under those circumstances, so the minimum time needed is not more than that. If we reran the simulation it could indeed very well be much faster.<p>I agree that 20 watts can be enough to support intelligence and if we can figure out how to get there, it will take us much less time than a billion years. I also think that on the compute side for developing the AGI we should count all the PhD brains churning away at it right now :)</p>
]]></description><pubDate>Tue, 04 Nov 2025 17:29:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=45813578</link><dc:creator>Flashtoo</dc:creator><comments>https://news.ycombinator.com/item?id=45813578</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45813578</guid></item><item><title><![CDATA[New comment by Flashtoo in "AI's Dial-Up Era"]]></title><description><![CDATA[
<p>The notion that the brain uses less energy than an incandescent lightbulb and can store less data than YouTube does not mean we have had the compute and data needed to make AGI "for a very long time".<p>The human brain is not a 20-watt computer ("100 watts per day" is not right) that learns from scratch on 2 petabytes of data. State manipulations performed in the brain can be more efficient than what we do in silicon. More importantly, its internal workings are the result of billions of years of evolution, and continue to change over the course of our lives. The learning a human does over its lifetime is assisted greatly by the reality of the physical body and the ability to interact with the real world to the extent that our body allows. Even then, we do not learn from scratch. We go through a curriculum that has been refined over millennia, building on knowledge and skills that were cultivated by our ancestors.<p>An upper bound of compute needed to develop AGI that we can take from the human brain is not 20 watts and 2 petabytes of data, it is 4 billion years of evolution in a big and complex environment at molecular-level fidelity. Finding a tighter upper bound is left as an exercise for the reader.</p>
]]></description><pubDate>Tue, 04 Nov 2025 17:02:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=45813235</link><dc:creator>Flashtoo</dc:creator><comments>https://news.ycombinator.com/item?id=45813235</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45813235</guid></item><item><title><![CDATA[New comment by Flashtoo in "By the end of today, NASA's workforce will be about 10 percent smaller"]]></title><description><![CDATA[
<p>Right after that quote:<p>"The key is ensuring that any future cuts at NASA are not indiscriminate. If and when Jared Isaacman is confirmed by the US Senate as the next NASA administrator, it will be up to him and his team to make the programmatic decisions about which parts of the agency are carrying their weight and which are being carried, which investments carry NASA into the future, and which ones drag it into the past. If these future cuts are smart and position NASA for the future, this could all be worth it. If not, then the beloved agency that dares to explore may never recover."</p>
]]></description><pubDate>Tue, 18 Feb 2025 16:11:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=43091214</link><dc:creator>Flashtoo</dc:creator><comments>https://news.ycombinator.com/item?id=43091214</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43091214</guid></item><item><title><![CDATA[New comment by Flashtoo in "ASML looking to finally get ahead of Moore’s Law"]]></title><description><![CDATA[
<p>It is a tax on an assumed return on assets, determined as a set percentage of wealth. "Vermogensrendementsheffing" means a "tax on return on wealth", not on the wealth itself. In name it is not a wealth tax, but in reality it is, since the assumed return that is taxed has no relation to the true return. This relates to the recent decisions declaring this partially unlawful, see e.g. <a href="https://www.tilburguniversity.edu/magazine/supreme-court-netherlands-rules-box-3-wealth-tax" rel="nofollow">https://www.tilburguniversity.edu/magazine/supreme-court-net...</a></p>
]]></description><pubDate>Fri, 18 Nov 2022 15:16:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=33655891</link><dc:creator>Flashtoo</dc:creator><comments>https://news.ycombinator.com/item?id=33655891</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33655891</guid></item><item><title><![CDATA[New comment by Flashtoo in "ASML looking to finally get ahead of Moore’s Law"]]></title><description><![CDATA[
<p>You pay an annual % tax on the value of your investments less debt as of January 1st. This means you still pay tax even if your assets lose value. It's a wealth tax that pretends to be a capital gains tax.</p>
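<p>With illustrative numbers (the real Dutch brackets and rates vary by year and asset class; the figures below are made up for the example), the mechanics look like this:

```python
# Box 3 style tax: the levy is on an *assumed* return,
# not on the return you actually made.
# Rates here are illustrative only, not the real Dutch rates.
ASSUMED_RETURN = 0.04   # fictitious yield the tax office imputes
TAX_RATE = 0.31         # rate applied to that imputed yield

def box3_tax(assets: float, debt: float) -> float:
    """Tax owed on net wealth as of January 1st, regardless of actual gains."""
    net_wealth = max(assets - debt, 0.0)
    imputed_return = net_wealth * ASSUMED_RETURN
    return imputed_return * TAX_RATE

# You owe the same whether your portfolio gained 10% or lost 10% that year:
print(box3_tax(200_000, 50_000))  # 150000 * 0.04 * 0.31, roughly 1860
```

The point of the sketch: actual gains never enter the formula, which is why in practice it behaves as a wealth tax.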
]]></description><pubDate>Fri, 18 Nov 2022 10:11:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=33652681</link><dc:creator>Flashtoo</dc:creator><comments>https://news.ycombinator.com/item?id=33652681</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33652681</guid></item><item><title><![CDATA[New comment by Flashtoo in "Put down devices, let your mind wander, study suggests"]]></title><description><![CDATA[
<p>Are there any specific podcasts you would recommend?</p>
]]></description><pubDate>Mon, 01 Aug 2022 05:58:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=32302575</link><dc:creator>Flashtoo</dc:creator><comments>https://news.ycombinator.com/item?id=32302575</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32302575</guid></item><item><title><![CDATA[New comment by Flashtoo in "The inventor of ibuprofen tested the drug on his own hangover"]]></title><description><![CDATA[
<p>Paracetamol and alcohol are actually not a dangerous combination at all as far as the liver is concerned. That is why there is no warning against combining the two in the information leaflet that comes with paracetamol. Paracetamol itself is not toxic, but its intermediate metabolite NAPQI is. The enzyme that converts paracetamol into NAPQI is the same one that breaks down alcohol, and it has a higher affinity for alcohol, meaning it will be too busy working on the alcohol to turn the paracetamol into toxic NAPQI.<p>Long-term alcohol abusers will develop more of this enzyme, though, so they are more likely to get liver damage from paracetamol.</p>
]]></description><pubDate>Fri, 22 Jul 2022 17:08:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=32194416</link><dc:creator>Flashtoo</dc:creator><comments>https://news.ycombinator.com/item?id=32194416</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32194416</guid></item><item><title><![CDATA[New comment by Flashtoo in "Leakage and the reproducibility crisis in ML-based science"]]></title><description><![CDATA[
<p>When you evaluate an ML approach, you should use one part of the data to train your model and a completely separate part to evaluate it. Otherwise, your model can just memorize parts of the data (or overfit in some other way), resulting in artificially high performance. Data leakage is when there is a flaw in this separation and you somehow use information about the evaluation dataset in the model training process. The table in the article lists various examples. The simplest would be to just not have a separate evaluation set. A more subtle one is normalizing your input data based on both the training and evaluation sets; this way the normalization will be better suited to the evaluation set than it would be if you had no knowledge of it, resulting in artificially high performance.</p>
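<p>The normalization pitfall can be sketched in a few lines (synthetic data, numpy only; the variable names and numbers are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(100, 3))  # training data
X_test = rng.normal(0.5, 1.0, size=(20, 3))    # evaluation data, shifted distribution

# Correct: normalization statistics come from the training set only.
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
test_ok = (X_test - mu) / sigma

# Leaky: statistics computed over training + evaluation data together.
X_all = np.vstack([X_train, X_test])
test_leaky = (X_test - X_all.mean(axis=0)) / X_all.std(axis=0)

# The leaky version is centered closer to zero than it should be,
# because the evaluation set influenced its own normalization.
print(abs(test_ok.mean()), abs(test_leaky.mean()))
```

The leaky version looks "better normalized" on the evaluation set precisely because the evaluation set leaked into the statistics, which is the quiet way this inflates reported performance.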
]]></description><pubDate>Sat, 16 Jul 2022 08:06:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=32116108</link><dc:creator>Flashtoo</dc:creator><comments>https://news.ycombinator.com/item?id=32116108</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32116108</guid></item><item><title><![CDATA[New comment by Flashtoo in "The world needs a non-profit search engine"]]></title><description><![CDATA[
<p>E.g. on Google, if you search for "how to tie a tie", a little info box may pop up with step-by-step instructions. This content is taken from some website, but that website gets no page hits or ad revenue. Instead, Google gets to serve ads on the search engine results page.<p>(I don't know if this happens for this specific example, but Google does this for some searches)</p>
]]></description><pubDate>Sun, 10 Jul 2022 09:35:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=32043162</link><dc:creator>Flashtoo</dc:creator><comments>https://news.ycombinator.com/item?id=32043162</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32043162</guid></item><item><title><![CDATA[New comment by Flashtoo in "The most livable cities"]]></title><description><![CDATA[
<p>Amsterdam is #9 in the report (scoring 0.1 lower than Toronto), but not mentioned in the article for some reason.</p>
]]></description><pubDate>Sat, 25 Jun 2022 08:39:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=31873547</link><dc:creator>Flashtoo</dc:creator><comments>https://news.ycombinator.com/item?id=31873547</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=31873547</guid></item><item><title><![CDATA[New comment by Flashtoo in "A lightning bolt would be worth only about a nickel (2015)"]]></title><description><![CDATA[
<p>This isn't exactly what you are talking about, but this art/concept for self-powered student housing came to mind: <a href="https://www.humanpowerplant.be/human_power_plant/human-powered-student-building-plans.html" rel="nofollow">https://www.humanpowerplant.be/human_power_plant/human-power...</a></p>
]]></description><pubDate>Thu, 23 Jun 2022 07:19:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=31846078</link><dc:creator>Flashtoo</dc:creator><comments>https://news.ycombinator.com/item?id=31846078</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=31846078</guid></item></channel></rss>