<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: gormen</title><link>https://news.ycombinator.com/user?id=gormen</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 15 May 2026 18:27:22 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=gormen" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by gormen in "Measuring progress toward AGI: A cognitive framework"]]></title><description><![CDATA[
<p>Here's a thought: in many stochastic environments, given enough time, patterns emerge that settle into an optimal position. This may be how structure arises in general, including cognitive structure and possibly consciousness.</p>
]]></description><pubDate>Thu, 19 Mar 2026 05:02:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47435159</link><dc:creator>gormen</dc:creator><comments>https://news.ycombinator.com/item?id=47435159</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47435159</guid></item><item><title><![CDATA[New comment by gormen in "Can you instruct a robot to make a PBJ sandwich?"]]></title><description><![CDATA[
<p>Of course, we need to give the robot a cognitive architecture so that it understands the task and its context and can correct its own actions; then it will autonomously make such sandwiches every morning for breakfast.</p>
]]></description><pubDate>Fri, 13 Mar 2026 05:00:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47360858</link><dc:creator>gormen</dc:creator><comments>https://news.ycombinator.com/item?id=47360858</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47360858</guid></item><item><title><![CDATA[New comment by gormen in "Shall I implement it? No"]]></title><description><![CDATA[
<p>It is possible to force AI to understand intent before responding.</p>
]]></description><pubDate>Fri, 13 Mar 2026 04:32:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47360711</link><dc:creator>gormen</dc:creator><comments>https://news.ycombinator.com/item?id=47360711</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47360711</guid></item><item><title><![CDATA[New comment by gormen in "Agents that run while I sleep"]]></title><description><![CDATA[
<p>Different approach: copy the programmer's logic, not the agent's behavior.</p>
]]></description><pubDate>Wed, 11 Mar 2026 05:04:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47331902</link><dc:creator>gormen</dc:creator><comments>https://news.ycombinator.com/item?id=47331902</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47331902</guid></item><item><title><![CDATA[New comment by gormen in "LLMs work best when the user defines their acceptance criteria first"]]></title><description><![CDATA[
<p>Excellent article. But to be fair, many of these effects disappear when the model is given strict invariants, constraints, and built-in checks that are applied not only at the beginning but at every stage of generation.</p>
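<p>A minimal sketch of what "checks at every stage of generation" could look like. All names here (propose, INVARIANTS, generate) are hypothetical stand-ins, not any real library's API; the point is only that each candidate continuation is validated before being accepted, rather than validating one final output:</p>

```python
import re

def propose(prefix: str) -> list[str]:
    # Hypothetical stand-in for sampling candidate continuations from a model.
    return ["SELECT name FROM users;", "DROP TABLE users;", "SELECT 1;"]

# Invariants applied at every step, not just once at the end.
INVARIANTS = [
    lambda s: not re.search(r"\b(DROP|DELETE)\b", s),  # never emit destructive SQL
    lambda s: s.rstrip().endswith(";"),                # every statement terminated
]

def generate(prefix: str, steps: int = 3) -> list[str]:
    out = []
    for _ in range(steps):
        # Keep only candidates that satisfy every invariant at this stage.
        ok = [c for c in propose(prefix) if all(inv(c) for inv in INVARIANTS)]
        if not ok:
            break  # refuse rather than emit a violating continuation
        out.append(ok[0])
        prefix += ok[0]
    return out
```

<p>The same idea scales up to schema validators, type checkers, or unit tests run between generation stages.</p>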
]]></description><pubDate>Sat, 07 Mar 2026 06:15:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47284983</link><dc:creator>gormen</dc:creator><comments>https://news.ycombinator.com/item?id=47284983</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47284983</guid></item><item><title><![CDATA[New comment by gormen in "President Trump bans Anthropic from use in government systems"]]></title><description><![CDATA[
<p>As far as I understand, this is about banning the use of Anthropic for autonomous weapons and domestic surveillance. And while the idea of building one fully controlled, nationwide AI system may sound tempting, in reality it’s still just a fantasy and wouldn’t be very useful in practice.</p>
]]></description><pubDate>Sat, 28 Feb 2026 05:14:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47190756</link><dc:creator>gormen</dc:creator><comments>https://news.ycombinator.com/item?id=47190756</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47190756</guid></item><item><title><![CDATA[New comment by gormen in "Can you reverse engineer our neural network?"]]></title><description><![CDATA[
<p>I approached the puzzle using an A11‑style reasoning architecture, which focuses on compressing the hypothesis space rather than decoding every neuron. Instead of “understanding the network”, the task reduces through successive narrowing: model → program → round‑based function → MD5 → dictionary search for the target hash. The key steps were:<p>Input → the integer weights and repeated ReLU blocks indicate a hand‑designed deterministic program rather than a trained model.<p>Weighting → the only meaningful output is the 16‑byte vector right before the final equality check.<p>Anchor → the layer‑width pattern shows a strict 32‑round repetition, a strong structural invariant.<p>Balancing → 32 identical rounds + a 128‑bit output narrow the function family to MD5‑style hashing.<p>Rollback → alternative explanations break more assumptions than they preserve.<p>Verification → feeding inputs and comparing the penultimate activations confirms they match MD5 exactly.<p>Compression → once the network becomes “MD5(input) == target_hash”, the remaining task is a constrained dictionary search (two lowercase English words).<p>The puzzle becomes solvable not by interpreting 2500 layers, but by repeatedly shrinking the search space until only one viable function family remains. In this sense, the architecture effectively sidesteps the interpretability problem: it collapses the entire network to a single candidate function, removing the need for mechanistic analysis altogether.</p>
]]></description><pubDate>Fri, 27 Feb 2026 14:50:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47181121</link><dc:creator>gormen</dc:creator><comments>https://news.ycombinator.com/item?id=47181121</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47181121</guid></item><item><title><![CDATA[New comment by gormen in "How do you evaluate a person's ability to use AI?"]]></title><description><![CDATA[
<p>Assessing "AI capabilities" only makes sense if we stop viewing it as a technical skill. The real measure is a person's ability to structure their intentions, constrain their model, and evaluate results without relying on others' judgment. AI doesn't replace thinking; it amplifies whatever structure, or chaos, is already there. It's an exoskeleton for real intelligence. Don't you agree?</p>
]]></description><pubDate>Fri, 27 Feb 2026 04:46:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47176627</link><dc:creator>gormen</dc:creator><comments>https://news.ycombinator.com/item?id=47176627</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47176627</guid></item><item><title><![CDATA[New comment by gormen in "Large-Scale Online Deanonymization with LLMs"]]></title><description><![CDATA[
<p>Indeed, fears about deanonymization are a reaction to three structural shifts: the cost of analysis has plummeted, the volume of stored data has grown dramatically, and models have become far better at spotting patterns that humans miss, which makes it inevitable that interested parties will exploit this. But the conclusion isn't that "anonymity is dead." The conclusion is that anonymity is no longer a guaranteed technical property; it is becoming a behavioral skill that can be developed.</p>
]]></description><pubDate>Thu, 26 Feb 2026 11:40:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47164701</link><dc:creator>gormen</dc:creator><comments>https://news.ycombinator.com/item?id=47164701</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47164701</guid></item><item><title><![CDATA[New comment by gormen in "LLM=True"]]></title><description><![CDATA[
<p>Most of what helps LLMs here is exactly what helps humans: less noise, clearer signals, predictable output.</p>
]]></description><pubDate>Wed, 25 Feb 2026 11:50:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47150307</link><dc:creator>gormen</dc:creator><comments>https://news.ycombinator.com/item?id=47150307</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47150307</guid></item><item><title><![CDATA[New comment by gormen in "Writing code is cheap now"]]></title><description><![CDATA[
<p>The cost of code never lived in the typing — it lived in the intent, the constraints, and the reasoning that shaped it.
LLMs make the typing cheap, but they don’t make the reasoning cheap.
So the economics shift, but the bottleneck doesn’t disappear.</p>
]]></description><pubDate>Tue, 24 Feb 2026 05:11:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47133088</link><dc:creator>gormen</dc:creator><comments>https://news.ycombinator.com/item?id=47133088</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47133088</guid></item><item><title><![CDATA[New comment by gormen in "Show HN: Steerling-8B, a language model that can explain any token it generates"]]></title><description><![CDATA[
<p>Most interpretability methods fail for LLMs because they try to explain outputs without modeling the intent, constraints, or internal structure that produced them.
Token‑level attribution is useful, but without a framework for how the model reasons, you’re still explaining shadows on the wall.</p>
]]></description><pubDate>Tue, 24 Feb 2026 05:09:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47133075</link><dc:creator>gormen</dc:creator><comments>https://news.ycombinator.com/item?id=47133075</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47133075</guid></item><item><title><![CDATA[New comment by gormen in "Oat – Ultra-lightweight, zero dependency, semantic HTML, CSS, JS UI library"]]></title><description><![CDATA[
<p>The appeal of libraries like this is not minimalism for its own sake, but the reduction of long‑term operational risk.</p>
]]></description><pubDate>Fri, 20 Feb 2026 10:53:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47086337</link><dc:creator>gormen</dc:creator><comments>https://news.ycombinator.com/item?id=47086337</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47086337</guid></item><item><title><![CDATA[New comment by gormen in "Modern CSS Code Snippets: Stop writing CSS like it's 2015"]]></title><description><![CDATA[
<p>It’s interesting how different CSS approaches end up shaping completely different mental models: utility classes, semantic blocks, scoped styles — each tool pushes the architecture in its own direction.</p>
]]></description><pubDate>Mon, 16 Feb 2026 12:53:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47034391</link><dc:creator>gormen</dc:creator><comments>https://news.ycombinator.com/item?id=47034391</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47034391</guid></item></channel></rss>