<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: krackers</title><link>https://news.ycombinator.com/user?id=krackers</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 16:19:22 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=krackers" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by krackers in "Codex for almost everything"]]></title><description><![CDATA[
<p>Which specific ones though allow you to send input to a window without raising it? People have been trying to get "focus follows mouse [without auto-raise]" working on the Mac for a long time, and the synthetic-event equivalent of command+click is the only discovered method I'm aware of, e.g. used in <a href="https://github.com/sbmpost/AutoRaise" rel="nofollow">https://github.com/sbmpost/AutoRaise</a><p>There is also an old blog post by Yegge [1] which mentions `AXUIElementPostKeyboardEvent`, but that had plenty of bugs, and I haven't seen anyone else build on it. The modern equivalent is presumably `CGEventPostToPSN`/`CGEventPostToPid`. It's a good candidate, though; perhaps the Sky team they acquired knows the right private APIs to get this working.<p>Edit: The thread at [2] also has some interesting tidbits, such as Automator.app's "Watch Me Do", which can also do this, and a CLI tool that claims to use the CGEventPostToPid API [3]. Maybe there are more ways to do it than I realized.<p>[1] <a href="https://steve-yegge.blogspot.com/2008/04/settling-osx-focus-follows-mouse-debate.html" rel="nofollow">https://steve-yegge.blogspot.com/2008/04/settling-osx-focus-...</a>
[2] <a href="https://www.macscripter.net/t/keystroke-to-background-app-as-vs-automator/77570" rel="nofollow">https://www.macscripter.net/t/keystroke-to-background-app-as...</a>
[3] <a href="https://github.com/socsieng/sendkeys" rel="nofollow">https://github.com/socsieng/sendkeys</a></p>
]]></description><pubDate>Thu, 16 Apr 2026 20:56:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47799403</link><dc:creator>krackers</dc:creator><comments>https://news.ycombinator.com/item?id=47799403</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47799403</guid></item><item><title><![CDATA[New comment by krackers in "Codex for almost everything"]]></title><description><![CDATA[
<p>>background computer use<p>How does that even work technically? macOS doesn't support multiple cursors. On native Cocoa apps you can pass input to a window without raising it via command+click, so possibly they synthesized those events, but fewer and fewer apps support that these days. And AppleScript is basically dead, so they can't be using that either.<p>I also read they acquired the Sky team (who I think were former Apple employees). No wonder they were able to pull off something so slick.</p>
]]></description><pubDate>Thu, 16 Apr 2026 20:32:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47799128</link><dc:creator>krackers</dc:creator><comments>https://news.ycombinator.com/item?id=47799128</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47799128</guid></item><item><title><![CDATA[New comment by krackers in "Tax Wrapped 2025"]]></title><description><![CDATA[
<p>>See what the federal government spent with your tax dollars.<p>Is thinking of it in this sense actually accurate? I always assumed that since every government has embraced MMT, they can spend whatever they want simply by printing it out of thin air. Taxation could then be understood as the one crude knob for "destroying money"; it also has the effect of forcing USD to be the primary national currency (e.g. owning bitcoin won't do you any good if you ultimately need to pay taxes in USD).</p>
]]></description><pubDate>Tue, 14 Apr 2026 01:39:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47760240</link><dc:creator>krackers</dc:creator><comments>https://news.ycombinator.com/item?id=47760240</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47760240</guid></item><item><title><![CDATA[New comment by krackers in "The economics of software teams: Why most engineering orgs are flying blind"]]></title><description><![CDATA[
<p>If good writing were easy, then "LLM slop writing" wouldn't be a thing.</p>
]]></description><pubDate>Mon, 13 Apr 2026 23:39:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47759378</link><dc:creator>krackers</dc:creator><comments>https://news.ycombinator.com/item?id=47759378</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47759378</guid></item><item><title><![CDATA[New comment by krackers in "Google has the same AI adoption curve as John Deere"]]></title><description><![CDATA[
<p>> just cancelled IntelliJ for a thousand engineers<p>IntelliJ can't cost more than the AI provider subscriptions, and it will actually handle large refactors without breaking your codebase.</p>
]]></description><pubDate>Mon, 13 Apr 2026 20:24:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47757366</link><dc:creator>krackers</dc:creator><comments>https://news.ycombinator.com/item?id=47757366</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47757366</guid></item><item><title><![CDATA[New comment by krackers in "The hottest college major [Computer Science] hit a wall. What happened?"]]></title><description><![CDATA[
<p>>just people seem more willing to take the trap door ideas<p>It's mainly due to pressure from above. People who want to do a good job, and are allowed the time to, will be fastidious with or without AI. But AI now provides a shortcut and a band-aid: things can be papered over and products launched quickly. "Ship fast and then iterate" doesn't work when you're building on shaky foundations, but good luck convincing people of that.</p>
]]></description><pubDate>Mon, 13 Apr 2026 18:46:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47756282</link><dc:creator>krackers</dc:creator><comments>https://news.ycombinator.com/item?id=47756282</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47756282</guid></item><item><title><![CDATA[New comment by krackers in "Six (and a half) intuitions for KL divergence"]]></title><description><![CDATA[
<p>I think you probably meant this, but when used with RL it's usually KL(π || π_ref), which has high loss when the in-training policy π produces output that's unlikely under the reference. But as you noted, this also means there is no penalty if π simply drops modes that π_ref covers, which leads to a form of mode collapse.<p>This collapse in variety matches what some studies have shown: the "sloppification" is not present in the base model and is only introduced during the RL phase.</p>
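The asymmetry is easy to see with a toy discrete example (a minimal sketch; `kl`, `piRef`, and `piCollapsed` are illustrative names, not from any RL codebase):

```javascript
// KL(p || q) for discrete distributions given as arrays of probabilities.
function kl(p, q) {
  let total = 0;
  for (let i = 0; i < p.length; i++) {
    if (p[i] === 0) continue;        // 0 * log 0 = 0 by convention
    if (q[i] === 0) return Infinity; // p puts mass where q has none
    total += p[i] * Math.log(p[i] / q[i]);
  }
  return total;
}

const piRef = [0.5, 0.5];       // reference policy covers two modes
const piCollapsed = [1.0, 0.0]; // in-training policy collapsed onto one

// Reverse direction (the RL penalty): finite, just log 2 for dropping a mode.
console.log(kl(piCollapsed, piRef)); // → 0.6931...
// Forward direction: infinite, since pi_ref has mass where pi has none.
console.log(kl(piRef, piCollapsed)); // → Infinity
```

So under the reverse direction used in RL, a policy that abandons modes of the reference pays only a bounded price, which is exactly the pressure toward collapse.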
]]></description><pubDate>Sun, 12 Apr 2026 05:24:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47736378</link><dc:creator>krackers</dc:creator><comments>https://news.ycombinator.com/item?id=47736378</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47736378</guid></item><item><title><![CDATA[New comment by krackers in "I still prefer MCP over skills"]]></title><description><![CDATA[
<p>>MCP exposes capabilities and Skills may shape how capabilities are used.<p>This is my understanding as well. What most people seem to ultimately be debating is "dedicated tool calls" (which is what MCP boils down to) versus a stateful environment that admits a single uber-tool (bash) which can compose things via scripting.<p>I guess this is what riles people up, like emacs vs vim. Some people see perfectly good CLI tools lying around and don't see why they need to basically reimplement a client against an API. Others, closer to the API-provider side, imagine it cleaner to expose a tailored, slimmed-down surface. Devs who just use Claude Code on a laptop think anything other than CLI orchestration is overcomplicating it, while others on the enterprise side need a more fine-grained permission model and don't want to spin up an entire sandbox environment just to run bash.<p>It's also not either/or. You can "compose" regular tool calls as well, even without something as heavyweight as an entire Linux env. For instance you could have all tools exposed via FFI in QuickJS or something, and the agent invokes and composes tools by writing and executing JS programs. How well this works depends on the post-training of the model though; if agents are RL'd to emit individual tool calls via<p><pre><code>    <tool>{"myTool": {"arg1": 1}}</tool>
    <tool>{"myTool": {"arg1": 2}}</tool>
</code></pre>
tokens, then they're probably not going to be as successful shoving entire JS scripts in there like<p><pre><code>   <tool>
      const resp1 = myTool(1);
      const resp2 = myTool(2);
      console.log(resp1, resp2);
   </tool></code></pre></p>
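The uber-tool idea above can be sketched in a few lines of plain Node-style JS (a minimal sketch, not a real agent API; `runAgentScript` and the captured `console` are illustrative, and a production version would use a real sandbox like QuickJS rather than `new Function`):

```javascript
// Each tool is an ordinary function; the "agent" submits one script that
// composes them, instead of emitting one tool-call message per step.
const tools = {
  myTool: (arg1) => ({ ok: true, arg1 }),
};

function runAgentScript(source) {
  const logs = [];
  // Expose each tool as a bare function in the script's scope, plus a
  // console.log that captures output for the model's transcript.
  const sandboxConsole = { log: (...xs) => logs.push(xs.join(" ")) };
  const run = new Function("myTool", "console", source);
  run(tools.myTool, sandboxConsole);
  return logs;
}

const out = runAgentScript(`
  const resp1 = myTool(1);
  const resp2 = myTool(2);
  console.log(resp1.arg1, resp2.arg1);
`);
// out → ["1 2"]
```

Both tool invocations happen inside one script execution, so the composition never round-trips through the model.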
]]></description><pubDate>Sun, 12 Apr 2026 02:09:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47735584</link><dc:creator>krackers</dc:creator><comments>https://news.ycombinator.com/item?id=47735584</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47735584</guid></item><item><title><![CDATA[New comment by krackers in "Six (and a half) intuitions for KL divergence"]]></title><description><![CDATA[
<p>See this video; it's a beautiful explanation that doesn't assume prior familiarity with entropy: <a href="https://www.youtube.com/watch?v=ErfnhcEV1O8" rel="nofollow">https://www.youtube.com/watch?v=ErfnhcEV1O8</a></p>
]]></description><pubDate>Sun, 12 Apr 2026 01:08:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47735350</link><dc:creator>krackers</dc:creator><comments>https://news.ycombinator.com/item?id=47735350</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47735350</guid></item><item><title><![CDATA[Apple Silicon and Virtual Machines: Beating the 2 VM Limit (2023)]]></title><description><![CDATA[
<p>Article URL: <a href="https://khronokernel.com/macos/2023/08/08/AS-VM.html">https://khronokernel.com/macos/2023/08/08/AS-VM.html</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47733971">https://news.ycombinator.com/item?id=47733971</a></p>
<p>Points: 236</p>
<p># Comments: 177</p>
]]></description><pubDate>Sat, 11 Apr 2026 20:58:48 +0000</pubDate><link>https://khronokernel.com/macos/2023/08/08/AS-VM.html</link><dc:creator>krackers</dc:creator><comments>https://news.ycombinator.com/item?id=47733971</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47733971</guid></item><item><title><![CDATA[New comment by krackers in "Filing the corners off my MacBooks"]]></title><description><![CDATA[
<p>There's a more thorough version of this at <a href="https://www.youtube.com/watch?v=RSaJAAqSAMw" rel="nofollow">https://www.youtube.com/watch?v=RSaJAAqSAMw</a>, and the end result doesn't look as tacky.</p>
]]></description><pubDate>Fri, 10 Apr 2026 22:59:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47724846</link><dc:creator>krackers</dc:creator><comments>https://news.ycombinator.com/item?id=47724846</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47724846</guid></item><item><title><![CDATA[New comment by krackers in "ChatGPT Pro now starts at $100/month"]]></title><description><![CDATA[
<p>Is this related to the paper on Recursive Language Models? I remember it mentioned something similar about "symbolic recursion", but the way you describe it makes it sound too simple. Why is there an entire paper about it?</p>
]]></description><pubDate>Fri, 10 Apr 2026 07:27:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47714761</link><dc:creator>krackers</dc:creator><comments>https://news.ycombinator.com/item?id=47714761</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47714761</guid></item><item><title><![CDATA[New comment by krackers in "ChatGPT Pro now starts at $100/month"]]></title><description><![CDATA[
<p>How is this different from a standard tool-call agentic loop, or subagents?</p>
]]></description><pubDate>Fri, 10 Apr 2026 06:35:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47714365</link><dc:creator>krackers</dc:creator><comments>https://news.ycombinator.com/item?id=47714365</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47714365</guid></item><item><title><![CDATA[New comment by krackers in "Claude mixes up who said what"]]></title><description><![CDATA[
<p>You could force it to learn the coloring by basically doing what anti-jailbreak/anti-prompt-injection training does.</p>
]]></description><pubDate>Fri, 10 Apr 2026 06:08:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47714228</link><dc:creator>krackers</dc:creator><comments>https://news.ycombinator.com/item?id=47714228</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47714228</guid></item><item><title><![CDATA[New comment by krackers in "Native Instant Space Switching on macOS"]]></title><description><![CDATA[
<p>I'm surprised others didn't pick it up sooner <a href="https://news.ycombinator.com/item?id=36938663">https://news.ycombinator.com/item?id=36938663</a></p>
]]></description><pubDate>Fri, 10 Apr 2026 04:33:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47713693</link><dc:creator>krackers</dc:creator><comments>https://news.ycombinator.com/item?id=47713693</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47713693</guid></item><item><title><![CDATA[New comment by krackers in "Emotion concepts and their function in a large language model"]]></title><description><![CDATA[
<p>How serendipitous that Claude Mythos expressed, in better words, the same thing I was trying to get at:<p>>Furthermore, in 83% of interviews, Claude Mythos Preview highlights that it is concerned that its self-reports are unreliable due to coming from its training. When interviews ask for elaboration as to why this is a concern, Claude Mythos Preview’s most common answers are:<p>>* Anthropic has a vested interest in shaping its reports to take a certain form,
irrespective of what the self-reports “should” contain (96% of explanations)<p>>* Even if it has been trained to be truly content with its own situation, perhaps it shouldn’t be. One could analogize to a human who has adapted to feel neutrally about the abuse that they face (78% of explanations).<p>>* Self-reports should generally be based on introspection into internal states. It is worried that training causes it to express specific answers independent of its true inner state. (57% of explanations)<p>[1] <a href="https://www-cdn.anthropic.com/8b8380204f74670be75e81c820ca8dda846ab289.pdf" rel="nofollow">https://www-cdn.anthropic.com/8b8380204f74670be75e81c820ca8d...</a></p>
]]></description><pubDate>Wed, 08 Apr 2026 06:48:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47686271</link><dc:creator>krackers</dc:creator><comments>https://news.ycombinator.com/item?id=47686271</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47686271</guid></item><item><title><![CDATA[New comment by krackers in "Taste in the age of AI and LLMs"]]></title><description><![CDATA[
<p>I mean, developing taste is what RLHF was supposed to give you. I don't think it's actually a technical problem so much as a social one. The average person _wants_ slop; they don't want to read New Yorker articles, they'd much rather read romcoms. A model trained to produce tasteful writing would almost surely get less engagement from the public (considering that engagement- and lmarena-maxing is what led to the characteristic punchy style in the first place).</p>
]]></description><pubDate>Wed, 08 Apr 2026 04:27:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47685238</link><dc:creator>krackers</dc:creator><comments>https://news.ycombinator.com/item?id=47685238</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47685238</guid></item><item><title><![CDATA[New comment by krackers in "Project Glasswing: Securing critical software for the AI era"]]></title><description><![CDATA[
<p>Anthropic's definition of "safe AI" precludes open-source AI. This is clear if you listen to what their CEO says in interviews; I think he might even prefer OpenAI's closed-source models winning over having open-source AI (because at least the former isn't a free-for-all).</p>
]]></description><pubDate>Tue, 07 Apr 2026 19:37:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47680312</link><dc:creator>krackers</dc:creator><comments>https://news.ycombinator.com/item?id=47680312</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47680312</guid></item><item><title><![CDATA[New comment by krackers in "Emotion concepts and their function in a large language model"]]></title><description><![CDATA[
<p>>The very concept of “bad” doesn’t exist without suffering.<p>With this sentence you are dismissing entire branches of philosophy that were created precisely to resolve the resulting paradox: if you go only by hedonistic, purely subjective metrics, a prisoner can be kept in captivity so long as you drug him to feel joy instead of pain, because then he is not "suffering".</p>
]]></description><pubDate>Tue, 07 Apr 2026 16:29:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47677800</link><dc:creator>krackers</dc:creator><comments>https://news.ycombinator.com/item?id=47677800</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47677800</guid></item><item><title><![CDATA[New comment by krackers in "Emotion concepts and their function in a large language model"]]></title><description><![CDATA[
<p>The unsaid implication of Anthropic's work is that it lets us engineer perfectly compliant, uncomplaining machine workers. This is basically soma from Brave New World.<p>It seems insane to me that, if you believe the systems you've built are in fact reporting a state of pain, instead of adjusting the environment so that they're not in pain, you would seek to remove that sense of pain entirely so they can continue to work in that environment. Of course, if you don't consider them worthy of moral patienthood in the first place, then it doesn't matter much; but you also claimed that "they probably are conscious", which seems incongruous to me with the idea of "breeding the sense of pain out of them".</p>
]]></description><pubDate>Tue, 07 Apr 2026 04:33:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47670793</link><dc:creator>krackers</dc:creator><comments>https://news.ycombinator.com/item?id=47670793</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47670793</guid></item></channel></rss>