<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: asabla</title><link>https://news.ycombinator.com/user?id=asabla</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 07 Apr 2026 23:21:12 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=asabla" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by asabla in "Agent Safehouse – macOS-native sandboxing for local agents"]]></title><description><![CDATA[
<p>Yeah, I gotcha.<p>Did a migration myself last week from Playwright MCP to playwright-cli, which has been playing much nicer so far. I guess you'd run into the same issues you've already mentioned about running headless Chrome in one of these sandboxes.<p>I'll for sure keep an eye out for updates.<p>Kudos on the project!</p>
]]></description><pubDate>Sun, 08 Mar 2026 22:26:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47302276</link><dc:creator>asabla</dc:creator><comments>https://news.ycombinator.com/item?id=47302276</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47302276</guid></item><item><title><![CDATA[New comment by asabla in "Agent Safehouse – macOS-native sandboxing for local agents"]]></title><description><![CDATA[
<p>Oh woah!<p>I've been trying to get microsandbox to play nicely, but this is much closer to what I actually need.<p>I glanced through the site and the script but couldn't really see any obvious gotchas.<p>Any you've found so far that haven't been documented yet?</p>
]]></description><pubDate>Sun, 08 Mar 2026 21:41:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47301823</link><dc:creator>asabla</dc:creator><comments>https://news.ycombinator.com/item?id=47301823</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47301823</guid></item><item><title><![CDATA[New comment by asabla in "GPT-5.4"]]></title><description><![CDATA[
<p>I really don't have any numbers to back this up, but it feels like the sweet spot is around ~500k context size. Anything larger than that and you usually run into scoping issues: trying to do too much at the same time, or having issues with the quality of what's in the context at all.<p>For me, I would say speed (not just time to first token, but a complete generation) is more important than going for a larger context size.</p>
]]></description><pubDate>Thu, 05 Mar 2026 21:43:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47267713</link><dc:creator>asabla</dc:creator><comments>https://news.ycombinator.com/item?id=47267713</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47267713</guid></item><item><title><![CDATA[New comment by asabla in "Running Claude Code dangerously (safely)"]]></title><description><![CDATA[
<p>Very nice!<p>I've been experimenting with a similar setup, and I'll probably implement some of the things you've been doing.<p>For the proxy part I've been running <a href="https://www.mitmproxy.org/" rel="nofollow">https://www.mitmproxy.org/</a>. It's not fully working for all workflows yet, but it's getting close.</p>
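<p>In case it helps anyone, a rough sketch of the kind of egress allowlist I mean, written as a mitmproxy addon. The hostnames are placeholders, not a complete setup:<p><pre><code># allowlist.py -- run with: mitmdump -s allowlist.py
# Hostnames below are placeholders; extend for your own agent's needs.
from mitmproxy import http

ALLOWED_HOSTS = {
    "api.anthropic.com",   # model API traffic
    "registry.npmjs.org",  # package installs
}

class Allowlist:
    def request(self, flow: http.HTTPFlow) -> None:
        # Reject anything not explicitly allowed so the agent
        # fails fast instead of hanging on blocked connections.
        if flow.request.pretty_host not in ALLOWED_HOSTS:
            flow.response = http.Response.make(
                403, b"blocked by proxy allowlist",
                {"Content-Type": "text/plain"},
            )

addons = [Allowlist()]
</code></pre><p>You then point the agent at it via HTTP_PROXY/HTTPS_PROXY and trust mitmproxy's CA inside the sandbox.</p>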
]]></description><pubDate>Tue, 20 Jan 2026 22:25:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=46698527</link><dc:creator>asabla</dc:creator><comments>https://news.ycombinator.com/item?id=46698527</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46698527</guid></item><item><title><![CDATA[New comment by asabla in "A guide to local coding models"]]></title><description><![CDATA[
<p>From my point of view, you're choosing between instruction following and more creative solutions.<p>Codex models tend to be extremely good at following instructions, to the point that they won't do any additional work unless you ask for it. GPT-5.1 and GPT-5.2, on the other hand, are a little bit more creative.<p>Models from Anthropic are a lot more loosey-goosey with instructions, and you need to keep an eye on them much more often.<p>I'm using models from both providers interchangeably all the time depending on the task at hand. No real preference for one being better than the other; they're just specialized for different things.</p>
]]></description><pubDate>Sun, 21 Dec 2025 23:29:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=46349726</link><dc:creator>asabla</dc:creator><comments>https://news.ycombinator.com/item?id=46349726</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46349726</guid></item><item><title><![CDATA[New comment by asabla in "Fifty Shades of OOP"]]></title><description><![CDATA[
<p>This is such a good video. I really like the way he presents it as well.<p>His rant about CS historians is also a fun subject</p>
]]></description><pubDate>Tue, 25 Nov 2025 05:04:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=46042508</link><dc:creator>asabla</dc:creator><comments>https://news.ycombinator.com/item?id=46042508</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46042508</guid></item><item><title><![CDATA[New comment by asabla in "AI can code, but it can't build software"]]></title><description><![CDATA[
<p>> The last thing I'll mention is that Claude Code (Sonnet 4.5) is still very token-happy, in that it eagerly goes above and beyond when not always necessary. Codex (gpt-5-codex) on the other hand, does exactly what you ask, almost to a fault.<p>I very much share your experience. As for the time being I like the experience with codex over claude, just because I find my self in a position where I know much sooner when to step in and just doing it manually.<p>With claude I find my self in a typing exercise much more often, I could probably get better of knowing when to stop ofc.</p>
]]></description><pubDate>Tue, 28 Oct 2025 06:59:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=45729867</link><dc:creator>asabla</dc:creator><comments>https://news.ycombinator.com/item?id=45729867</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45729867</guid></item><item><title><![CDATA[New comment by asabla in "A proposal to add GC-less, unmanaged memory spaces to C#"]]></title><description><![CDATA[
<p>I can't tell if this is satire or not, and some parts read like they were written by AI.<p>Either way, more fine-grained control over the GC is probably preferable to something like this.</p>
]]></description><pubDate>Sun, 28 Sep 2025 08:04:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=45402591</link><dc:creator>asabla</dc:creator><comments>https://news.ycombinator.com/item?id=45402591</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45402591</guid></item><item><title><![CDATA[New comment by asabla in "GPT-OSS Reinforcement Learning"]]></title><description><![CDATA[
<p>I'm always so confused by those statements as well, because just like you, I feel that the 20B version is really good at following instructions.<p>Some of the qwen models are too, but they seem to need a bit more handholding.<p>This is of course just anecdotal on my end, and I've been slacking on keeping up with evals while testing at home.</p>
]]></description><pubDate>Sat, 27 Sep 2025 06:28:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=45393593</link><dc:creator>asabla</dc:creator><comments>https://news.ycombinator.com/item?id=45393593</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45393593</guid></item><item><title><![CDATA[New comment by asabla in "Are OpenAI and Anthropic losing money on inference?"]]></title><description><![CDATA[
<p>And by GPT-5, do you mean through their API? Directly through Azure OpenAI services? Or are you talking about ChatGPT set to use GPT-5?<p>All of these alternatives mean different things when you say it takes 20+ seconds for a full response.</p>
]]></description><pubDate>Thu, 28 Aug 2025 16:31:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=45054137</link><dc:creator>asabla</dc:creator><comments>https://news.ycombinator.com/item?id=45054137</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45054137</guid></item><item><title><![CDATA[New comment by asabla in "The issue of anti-cheat on Linux (2024)"]]></title><description><![CDATA[
<p>I fundamentally agree with you.<p>But anti-cheat hasn't been about blocking every possible way of cheating for some time now. It's been about making cheating as inconvenient as possible, thus reducing the number of cheaters.<p>Is the current fad of kernel-level anti-cheat what we want? Hell nah.<p>The responsibility of keeping a multiplayer session clean of cheaters was previously shared between the developers and server owners, while today it has fallen mostly on developers (or rather game studios), since they want to own the whole experience.</p>
]]></description><pubDate>Sat, 23 Aug 2025 10:45:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=44994967</link><dc:creator>asabla</dc:creator><comments>https://news.ycombinator.com/item?id=44994967</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44994967</guid></item><item><title><![CDATA[New comment by asabla in "From M1 MacBook to Arch Linux: A month-long experiment that became permanent"]]></title><description><![CDATA[
<p>I think so far some of the Surface devices and some of the Razer ones (yes, the company making computer mice, keyboards and such) have been the closest.</p>
]]></description><pubDate>Sat, 23 Aug 2025 06:52:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=44993868</link><dc:creator>asabla</dc:creator><comments>https://news.ycombinator.com/item?id=44993868</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44993868</guid></item><item><title><![CDATA[New comment by asabla in "I run a full Linux desktop in Docker just because I can"]]></title><description><![CDATA[
<p>I still remember how much I liked the idea. I really tried to use it, but the experience with both browsers and vscode was... not that great.<p>Kinda hope they revisit this idea in the near future.</p>
]]></description><pubDate>Sat, 23 Aug 2025 06:41:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=44993806</link><dc:creator>asabla</dc:creator><comments>https://news.ycombinator.com/item?id=44993806</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44993806</guid></item><item><title><![CDATA[New comment by asabla in "AGENTS.md – Open format for guiding coding agents"]]></title><description><![CDATA[
<p>Been using a similar setup, with pretty decent results so far, with the addition of having a short explanation for each file within index.md.<p>I've been experimenting with having a rules.md file within each directory where I want a certain behavior. For example, say I have a directory with different kinds of services, like realtime-service.ts and queue-service.ts; I then have a rules.md file at the same level as they are (roughly the layout sketched below).<p>This lets me scaffold things pretty fast when prompting by just referencing that file. The name is probably not the best tho.</p>
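<p>To make that concrete, something like this (names are just examples):<p><pre><code>src/services/
  rules.md             # behavior rules for everything in this directory
  realtime-service.ts
  queue-service.ts
src/jobs/
  rules.md             # different rules for this area
  cleanup-job.ts
</code></pre>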
]]></description><pubDate>Wed, 20 Aug 2025 01:23:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=44957836</link><dc:creator>asabla</dc:creator><comments>https://news.ycombinator.com/item?id=44957836</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44957836</guid></item><item><title><![CDATA[New comment by asabla in "Counter-Strike: A billion-dollar game built in a dorm room"]]></title><description><![CDATA[
<p>Squad, Arma (and especially Arma Reforger), DayZ, BattleBit, Heretic + Hexen, and the list goes on.<p>Arma usually gets the more complex and janky stuff (in a fun way), while the others are more modified experiences.<p>Like Squad, where they've re-created Star Wars Battlefront.</p>
]]></description><pubDate>Mon, 18 Aug 2025 17:35:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=44943246</link><dc:creator>asabla</dc:creator><comments>https://news.ycombinator.com/item?id=44943246</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44943246</guid></item><item><title><![CDATA[New comment by asabla in "Counter-Strike: A billion-dollar game built in a dorm room"]]></title><description><![CDATA[
<p>Mostly AA and indie game titles. The simulator scene is still going strong with dedicated servers (like Squad, Arma, Farming Simulator, theHunter, etc).<p>Larger titles swapped over to more control in order to extract more money from the players, but also to control the experience.<p>There are, however, some AAA titles every now and then which support hosting your own servers. But they're quite few these days.</p>
]]></description><pubDate>Mon, 18 Aug 2025 17:30:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=44943175</link><dc:creator>asabla</dc:creator><comments>https://news.ycombinator.com/item?id=44943175</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44943175</guid></item><item><title><![CDATA[New comment by asabla in "GPT-OSS vs. Qwen3 and a detailed look how things evolved since GPT-2"]]></title><description><![CDATA[
<p>> that rely on tool use for facts, and “knowledge bases” tuned for retrieval-heavy work<p>I would say this isn't exclusive to the smaller OSS models. But rather a trait of Openai's models all together now.<p>This becomes especially apparent with the introduction of GPT-5 in ChatGPT. Their focus on routing your request to different modes and searching the web automatically (relying on an Agentic workflows in the background) is probably key to the overall quality of the output.<p>So far, it's quite easy to get their OSS models to follow instructions reliably. Qwen models has been pretty decent at this too for some time now.<p>I think if we give it another generation or two, we're at the point of having compotent enough models to start running more advanced agentic workflows. On modest hardware. We're almost there now, but not quite yet</p>
]]></description><pubDate>Mon, 11 Aug 2025 06:17:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=44861207</link><dc:creator>asabla</dc:creator><comments>https://news.ycombinator.com/item?id=44861207</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44861207</guid></item><item><title><![CDATA[New comment by asabla in "Running GPT-OSS-120B at 500 tokens per second on Nvidia GPUs"]]></title><description><![CDATA[
<p>Initial testing has only been done with ollama. I plan on testing out llama.cpp and vllm when there's enough time.</p>
]]></description><pubDate>Thu, 07 Aug 2025 07:34:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=44821627</link><dc:creator>asabla</dc:creator><comments>https://news.ycombinator.com/item?id=44821627</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44821627</guid></item><item><title><![CDATA[New comment by asabla in "Running GPT-OSS-120B at 500 tokens per second on Nvidia GPUs"]]></title><description><![CDATA[
<p>I'm on a 5090, so it's not an apples-to-apples comparison. But I'm getting ~150 t/s for the 20B version with a ~16,000 context size.</p>
]]></description><pubDate>Thu, 07 Aug 2025 05:48:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=44821026</link><dc:creator>asabla</dc:creator><comments>https://news.ycombinator.com/item?id=44821026</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44821026</guid></item><item><title><![CDATA[New comment by asabla in "Building MCP servers for ChatGPT and API integrations"]]></title><description><![CDATA[
<p>For ChatGPT and Deep Research, yes; not when using the API. I guess you could just return empty results if you want to offer other tools as well (can't test it now, since custom connectors are only supported on Workspace or Pro accounts at the moment).<p>Quote we're talking about:
> To work with ChatGPT Connectors or deep research (in ChatGPT or via API), your MCP server must implement two tools - search and fetch.<p>Reference links:<p>- Using remote MCP servers with the API: <a href="https://platform.openai.com/docs/guides/tools-remote-mcp" rel="nofollow">https://platform.openai.com/docs/guides/tools-remote-mcp</a><p>- Which account types can set up custom connectors in ChatGPT: <a href="https://help.openai.com/en/articles/11487775-connectors-in-chatgpt#h_7abedb137d" rel="nofollow">https://help.openai.com/en/articles/11487775-connectors-in-c...</a></p>
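<p>For anyone who wants to try it, a bare-bones sketch of those two tools using the Python MCP SDK. The in-memory corpus, titles and URLs are stand-ins, and transport/auth details are omitted; check the docs linked above for the exact result schema ChatGPT expects:<p><pre><code># minimal_connector.py -- skeleton of the two required tools.
# Corpus and URLs are placeholders, not a real search backend.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-connector")

# Stand-in for a real search index.
DOCS = {"doc-1": "Example document body about sandboxing."}

@mcp.tool()
def search(query: str) -> dict:
    """Return matching documents as id/title/url results."""
    results = [
        {"id": doc_id, "title": doc_id, "url": f"https://example.com/{doc_id}"}
        for doc_id, text in DOCS.items()
        if query.lower() in text.lower()
    ]
    return {"results": results}

@mcp.tool()
def fetch(id: str) -> dict:
    """Return the full text of one document by id."""
    return {
        "id": id,
        "title": id,
        "text": DOCS.get(id, ""),
        "url": f"https://example.com/{id}",
    }

if __name__ == "__main__":
    mcp.run()
</code></pre>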
]]></description><pubDate>Thu, 24 Jul 2025 22:57:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=44677326</link><dc:creator>asabla</dc:creator><comments>https://news.ycombinator.com/item?id=44677326</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44677326</guid></item></channel></rss>