<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: mhl47</title><link>https://news.ycombinator.com/user?id=mhl47</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 15 May 2026 18:45:32 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=mhl47" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by mhl47 in "LLM may be standardizing human expression – and subtly influencing how we think"]]></title><description><![CDATA[
<p>Social media creates distinct filter bubbles. A dominant LLM company (or several aligned ones) creates one way of thinking.</p>
]]></description><pubDate>Tue, 07 Apr 2026 12:21:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47674133</link><dc:creator>mhl47</dc:creator><comments>https://news.ycombinator.com/item?id=47674133</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47674133</guid></item><item><title><![CDATA[New comment by mhl47 in "OpenAI closes funding round at an $852B valuation"]]></title><description><![CDATA[
<p>Was looking for a precise term for that. Thank you!<p>AI LinkedIn bullshit also likes to add "just", usually along the lines of Y being something much more impactful than X.</p>
]]></description><pubDate>Wed, 01 Apr 2026 06:35:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47597606</link><dc:creator>mhl47</dc:creator><comments>https://news.ycombinator.com/item?id=47597606</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47597606</guid></item><item><title><![CDATA[New comment by mhl47 in "Measuring progress toward AGI: A cognitive framework"]]></title><description><![CDATA[
<p>The knowledge that everything is made of atoms/molecules, however, makes it much easier to reason about your environment. And building on that knowledge you also learn algorithms, how to solve problems, etc. I don't think it's possible to completely separate knowledge from intelligence.</p>
]]></description><pubDate>Wed, 18 Mar 2026 13:33:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47425615</link><dc:creator>mhl47</dc:creator><comments>https://news.ycombinator.com/item?id=47425615</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47425615</guid></item><item><title><![CDATA[New comment by mhl47 in "Measuring progress toward AGI: A cognitive framework"]]></title><description><![CDATA[
<p>How do you arrive at the statement that a caveman would have the same intelligence as a human today? Intelligence is surely not usually defined as cognitive potential at birth but as current capability. And the knowledge an average human has today through education surely factors into that.</p>
]]></description><pubDate>Wed, 18 Mar 2026 13:05:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47425312</link><dc:creator>mhl47</dc:creator><comments>https://news.ycombinator.com/item?id=47425312</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47425312</guid></item><item><title><![CDATA[New comment by mhl47 in "Mistral AI Releases Forge"]]></title><description><![CDATA[
<p>"External storage", whatever that is, cannot be the same as continuous learning, as it does not form the strong connections or capture the interdependencies of knowledge.<p>That said, I think we will also see more effort on the business side toward models that can help you build a knowledge base in some standardized way the model is trained to read, or that synthesize some sort of instructions for navigating your knowledge base.<p>Currently, e.g., Copilot tries to navigate a hot mess of an MS knowledge graph that is very different for each company. And due to its amnesia it has to repeat the discovery in every session. No wonder that does not work. We have to either standardize or store somewhere (model, instructions) how to find information efficiently.</p>
]]></description><pubDate>Wed, 18 Mar 2026 07:43:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47422766</link><dc:creator>mhl47</dc:creator><comments>https://news.ycombinator.com/item?id=47422766</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47422766</guid></item><item><title><![CDATA[New comment by mhl47 in "Beyond has dropped “meat” from its name and expanded its high-protein drink line"]]></title><description><![CDATA[
<p>Most Beyond products I know don't even contain soy as a protein source.<p>And that's regardless of what you think about phytoestrogens (for which there is very little evidence of negative effects in normal quantities).</p>
]]></description><pubDate>Tue, 17 Mar 2026 10:46:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=47410922</link><dc:creator>mhl47</dc:creator><comments>https://news.ycombinator.com/item?id=47410922</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47410922</guid></item><item><title><![CDATA[New comment by mhl47 in "The path to ubiquitous AI (17k tokens/sec)"]]></title><description><![CDATA[
<p>The fact that the outputs are probabilities is not important. What is important is how that output is computed.<p>You could imagine that it is possible to learn certain algorithms/heuristics that "intelligence" consists of, no matter what you output. Training for optimal compression of tasks / taking actions could lead to intelligence being the best solution.<p>This is far from a formal argument, but so is the stubborn reiteration of "it's just probabilities" or "it's just compression". Because this "just" thing is getting more and more capable of solving tasks that are surely not in the training data exactly like this.</p>
]]></description><pubDate>Fri, 20 Feb 2026 12:48:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47087360</link><dc:creator>mhl47</dc:creator><comments>https://news.ycombinator.com/item?id=47087360</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47087360</guid></item><item><title><![CDATA[New comment by mhl47 in "My Mom and Dr. DeepSeek (2025)"]]></title><description><![CDATA[
<p>Worrisome for sure.<p>However, I would say the cited studies are already somewhat outdated compared, e.g., with GPT-5 Thinking doing two minutes of reasoning/search about a medical question. As far as I know, DeepSeek's search capabilities are not comparable, and none of the models in the study spend a comparable amount of compute answering your specific question.</p>
]]></description><pubDate>Thu, 29 Jan 2026 19:08:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=46814953</link><dc:creator>mhl47</dc:creator><comments>https://news.ycombinator.com/item?id=46814953</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46814953</guid></item><item><title><![CDATA[New comment by mhl47 in "Claude Code daily benchmarks for degradation tracking"]]></title><description><![CDATA[
<p>Or there are global events that stress people out... or their expectations change over time. Not that simple ;)</p>
]]></description><pubDate>Thu, 29 Jan 2026 16:26:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=46812390</link><dc:creator>mhl47</dc:creator><comments>https://news.ycombinator.com/item?id=46812390</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46812390</guid></item><item><title><![CDATA[New comment by mhl47 in "FLUX.2 [Klein]: Towards Interactive Visual Intelligence"]]></title><description><![CDATA[
<p>You are right, I just tried it; even with reference images it can't do it for me. Maybe with some good prompting.<p>Because in theory I would say that knowledge is something that does not have to be baked into the model but could be added via reference images, if the model is capable enough to reason about them.</p>
]]></description><pubDate>Sat, 17 Jan 2026 06:27:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=46655825</link><dc:creator>mhl47</dc:creator><comments>https://news.ycombinator.com/item?id=46655825</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46655825</guid></item><item><title><![CDATA[New comment by mhl47 in "Gemini 3 Pro: the frontier of vision AI"]]></title><description><![CDATA[
<p>We are currently working on some Christmas puzzles that are - I would say - a bit more difficult on the visual side. GPT-5.1 completely failed at all of them, while Gemini 3 has so far solved two that I would consider rather impressive.<p>One was two screenshots of a phone screen with timestamped chats, and it had to take the nth letter of the mth word based on the timestamp. While this type of riddle could be in the training data, the ability to OCR it that well and perfectly understand the spatial relation of each object is something I have not seen from other models yet.</p>
]]></description><pubDate>Sat, 06 Dec 2025 08:35:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=46171672</link><dc:creator>mhl47</dc:creator><comments>https://news.ycombinator.com/item?id=46171672</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46171672</guid></item><item><title><![CDATA[New comment by mhl47 in "Google Antigravity"]]></title><description><![CDATA[
<p>Most people are missing the point here. Testing the GUI/feature more reliably is something that Gemini 3 could unlock (looking at the ScreenSpot-Pro benchmark and its general improvement in visual understanding). At least for the (hobby) projects I attempted, this was a real bottleneck: having to test the GUI after every change, as it quite often breaks something.</p>
]]></description><pubDate>Tue, 18 Nov 2025 19:16:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=45970669</link><dc:creator>mhl47</dc:creator><comments>https://news.ycombinator.com/item?id=45970669</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45970669</guid></item><item><title><![CDATA[New comment by mhl47 in "Apps SDK"]]></title><description><![CDATA[
<p><a href="https://news.ycombinator.com/item?id=44573195">https://news.ycombinator.com/item?id=44573195</a>
(in the article, search for: "Chat runs really deep")</p>
]]></description><pubDate>Mon, 06 Oct 2025 19:00:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=45494977</link><dc:creator>mhl47</dc:creator><comments>https://news.ycombinator.com/item?id=45494977</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45494977</guid></item><item><title><![CDATA[New comment by mhl47 in "Apps SDK"]]></title><description><![CDATA[
<p>There was a recent post here about how deeply ingrained the chat interface is in OpenAI's organization. This really doubles down on that, but does anyone really like interacting with this much language instead of visual elements? It also feels horrible that you are supposed to remember a bunch of app names like "zillow" and punch them into the chat. And it feels like an opportunity for them to slowly introduce ads for these apps, or "preferential discovery" if you will, as a monetization strategy.<p>Personally, I hope that's not the future.</p>
]]></description><pubDate>Mon, 06 Oct 2025 18:49:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=45494832</link><dc:creator>mhl47</dc:creator><comments>https://news.ycombinator.com/item?id=45494832</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45494832</guid></item><item><title><![CDATA[New comment by mhl47 in "AI tools I wish existed"]]></title><description><![CDATA[
<p>Currently trying to build #6, just for private use. My hope is that by throwing a bunch of highly personalized information into a VLM, it will provide reasonable first estimates (e.g., if it sees a bowl of lentils, I will probably have rice below, etc.), then iterate on the main ingredients and fetch their macros from a DB. If it's within 20%, that would be enough for me.<p>I have tried some off-the-shelf solutions and they currently do not seem to cut it, or are too complex for my use case.</p>
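<p>A minimal sketch of the pipeline described above, with the VLM step stubbed out and a toy per-100g macro table. All names, priors, and numbers here are illustrative assumptions, not a real API:</p>

```python
# Sketch: (stubbed) VLM proposes main ingredients with gram estimates,
# then macros come from a small local per-100g nutrition DB.
from dataclasses import dataclass


@dataclass
class Macros:
    kcal: float
    protein_g: float
    carbs_g: float
    fat_g: float


# Tiny stand-in for a per-100g nutrition database.
MACRO_DB = {
    "lentils": Macros(116, 9.0, 20.0, 0.4),
    "rice":    Macros(130, 2.7, 28.0, 0.3),
}


def estimate_ingredients(image_desc: str) -> dict[str, float]:
    """Stub for the VLM step: map an image to {ingredient: grams}.
    A real implementation would call a vision-language model here,
    seeded with the user's personalized eating habits."""
    # Hypothetical personalized prior: a bowl of lentils usually hides rice.
    if "lentils" in image_desc:
        return {"lentils": 200.0, "rice": 150.0}
    return {}


def total_macros(ingredients: dict[str, float]) -> Macros:
    """Sum per-100g DB macros scaled by the estimated grams."""
    total = Macros(0.0, 0.0, 0.0, 0.0)
    for name, grams in ingredients.items():
        m = MACRO_DB[name]
        f = grams / 100.0
        total = Macros(total.kcal + m.kcal * f,
                       total.protein_g + m.protein_g * f,
                       total.carbs_g + m.carbs_g * f,
                       total.fat_g + m.fat_g * f)
    return total


meal = estimate_ingredients("bowl of lentils")
print(total_macros(meal).kcal)  # prints 427.0
```

<p>The iteration loop would then let the user correct the ingredient list or gram estimates before the DB lookup, which is where the "within 20%" tolerance comes from.</p>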
]]></description><pubDate>Tue, 30 Sep 2025 06:20:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=45422496</link><dc:creator>mhl47</dc:creator><comments>https://news.ycombinator.com/item?id=45422496</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45422496</guid></item><item><title><![CDATA[New comment by mhl47 in "Pontevedra, Spain declares its entire urban area a "reduced traffic zone""]]></title><description><![CDATA[
<p>Pretty rich that you are complaining about generalization now, when you made the initial statement that the "Car is absolutely essential for driving around small kids no matter the urban density", which doesn't seem to have any limits in scope.</p>
]]></description><pubDate>Wed, 10 Sep 2025 14:51:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=45198631</link><dc:creator>mhl47</dc:creator><comments>https://news.ycombinator.com/item?id=45198631</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45198631</guid></item><item><title><![CDATA[New comment by mhl47 in "SynthID – A tool to watermark and identify content generated through AI"]]></title><description><![CDATA[
<p>Yes, but isn't the cat out of the bag already? Don't we have sufficiently strong local models that can be fine-tuned in various ways to rewrite text/alter images and thus destroy possible watermarks?<p>Sure, in some cases a model might do astounding things that always shine through, but I guess the jury is still out on these questions.</p>
]]></description><pubDate>Sat, 30 Aug 2025 08:13:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=45072872</link><dc:creator>mhl47</dc:creator><comments>https://news.ycombinator.com/item?id=45072872</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45072872</guid></item><item><title><![CDATA[New comment by mhl47 in "Death and What Comes Next (2002)"]]></title><description><![CDATA[
<p>I <i>think</i> an example would be the two-body problem. It stays at a fixed eccentricity, so it does not explore different eccentricities, even though orbits with different eccentricities can have the same total energy.<p>(But I just looked that up too, because this concept is mostly used/assumed in statistical physics.)</p>
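<p>As a sketch of why the trajectory is confined (textbook Kepler-problem formulas, not from the comment itself): both the energy and the angular momentum are conserved, and together they fix the eccentricity, so a single orbit never samples the other eccentricities on its energy surface.</p>

```latex
% Kepler problem: reduced mass \mu, potential V(r) = -k/r.
% E and L are both constants of motion, so the eccentricity
e = \sqrt{1 + \frac{2 E L^{2}}{\mu k^{2}}}
% is fixed along any single trajectory. Orbits with the same E but
% different L (hence different e) lie on the same energy surface,
% which the trajectory therefore never explores: non-ergodic.
```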
]]></description><pubDate>Mon, 18 Aug 2025 07:09:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=44938150</link><dc:creator>mhl47</dc:creator><comments>https://news.ycombinator.com/item?id=44938150</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44938150</guid></item><item><title><![CDATA[New comment by mhl47 in "Death and What Comes Next (2002)"]]></title><description><![CDATA[
<p>This is related to the question of whether a system/the universe is ergodic (along with other properties, e.g. fixed energy and bounded space).</p>
]]></description><pubDate>Fri, 15 Aug 2025 12:36:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=44911605</link><dc:creator>mhl47</dc:creator><comments>https://news.ycombinator.com/item?id=44911605</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44911605</guid></item><item><title><![CDATA[New comment by mhl47 in "Claude Code weekly rate limits"]]></title><description><![CDATA[
<p>No, it's 100 a week for Plus users.</p>
]]></description><pubDate>Mon, 28 Jul 2025 20:06:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=44714957</link><dc:creator>mhl47</dc:creator><comments>https://news.ycombinator.com/item?id=44714957</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44714957</guid></item></channel></rss>