<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: foundry27</title><link>https://news.ycombinator.com/user?id=foundry27</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 21 Apr 2026 10:30:17 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=foundry27" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by foundry27 in "Kimi vendor verifier – verify accuracy of inference providers"]]></title><description><![CDATA[
<p>I like this idea. This might be one of the more effective social pressures available for getting inference providers to fix long-standing issues. AWS Bedrock, for example, has crippling defects in its serving stack for Kimi’s K2 and K2.5 models that cause 20-30% of tool-call attempts to silently end the conversation instead (with no token output). That makes AWS effectively irrelevant as a serious inference provider for Kimi, and conveniently pushes users onto Bedrock’s significantly more expensive Anthropic models for comparable performance on agentic tasks.</p>
]]></description><pubDate>Mon, 20 Apr 2026 22:16:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=47841735</link><dc:creator>foundry27</dc:creator><comments>https://news.ycombinator.com/item?id=47841735</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47841735</guid></item><item><title><![CDATA[New comment by foundry27 in "Privacy doesn't mean anything anymore, anonymity does"]]></title><description><![CDATA[
<p>I’m not sure if this is just an “on mobile” thing, but I can’t find any reference to ISO 27001 or SOC2 at that datacentres URL. Taking your word for it being there previously, this seems like a major red flag! Faking these certs is no joke, and silently removing references to them after being called out would be an even worse look.<p>@ybceo you seemed to represent this org based on your previous comments, is the parent commenter missing something here?</p>
]]></description><pubDate>Sat, 20 Dec 2025 23:50:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=46340847</link><dc:creator>foundry27</dc:creator><comments>https://news.ycombinator.com/item?id=46340847</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46340847</guid></item><item><title><![CDATA[New comment by foundry27 in "New California law restricts HOA fines to $100 per violation"]]></title><description><![CDATA[
<p>I get it, really I do. But do the HOAs really need financial enforcement mechanisms intended to seriously harm people, and to punish them as judge, jury and executioner? An HOA’s legal job is to maintain the common-interest property and enforce the CC&Rs. It is not an HOA’s job to extract enormous sums of money out of its members, even annoying ones. The right lever to pull to get some rich person partying at 4 am and trashing the place (for example) to stop is for the HOA to file for a court injunction after repeated violations; once a judge orders “no loud music 10 pm to 7 am”, the next 4 am party becomes contempt of court, which is a problem for the cops, not the HOA. Hell, 4 am noise is a municipal nuisance and probably a crime; people should be calling the cops every time it happens. Individual members could even sue the owner in small-claims court for private nuisance, where judges can issue further injunctions or award damages.
All this to say, you don’t need to take people’s money to get them to stop doing bad stuff. But you do need to take people’s money to get rich, and to hurt people. This new legislation should be deeply concerning to people interested in the latter, and IMO shouldn’t really be a concern to people interested in the former.</p>
]]></description><pubDate>Sat, 04 Oct 2025 22:39:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=45477387</link><dc:creator>foundry27</dc:creator><comments>https://news.ycombinator.com/item?id=45477387</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45477387</guid></item><item><title><![CDATA[New comment by foundry27 in "Open models by OpenAI"]]></title><description><![CDATA[
<p>Model cards, for the people interested in the guts: <a href="https://cdn.openai.com/pdf/419b6906-9da6-406c-a19d-1bb078ac7637/oai_gpt-oss_model_card.pdf" rel="nofollow">https://cdn.openai.com/pdf/419b6906-9da6-406c-a19d-1bb078ac7...</a><p>In my mind, I’m comparing the model architecture they describe to what the leading open-weights models (Deepseek, Qwen, GLM, Kimi) have been doing. Honestly, it just seems “ok” at a technical level:<p>- both models use standard Grouped-Query Attention (64 query heads, 8 KV heads). The card talks about how they’ve used an older optimization from GPT3, which is alternating between banded window (sparse, 128 tokens) and fully dense attention patterns. It uses RoPE extended with YaRN (for a 131K context window). So they haven’t been taking advantage of the special-sauce Multi-head Latent Attention from Deepseek, or any of the other similar improvements over GQA.<p>- both models are standard MoE transformers. The 120B model (116.8B total, 5.1B active) uses 128 experts with Top-4 routing. They’re using some kind of Gated SwiGLU activation, which the card describes as "unconventional" because of its clamping and the residual connections that implies. Again, not using any of Deepseek’s “shared experts” (for general patterns) + “routed experts” (for specialization) architectural improvements, Qwen’s load-balancing strategies, etc.<p>- the most interesting thing IMO is probably their quantization solution. They quantized >90% of the model parameters to the MXFP4 format (4.25 bits/parameter) to let the 120B model fit on a single 80GB GPU, which is pretty cool. But we’ve also got Unsloth with their famous 1.58bit quants :)<p>All this to say, it seems like even though the training they did for their agentic behavior and reasoning is undoubtedly very good, they’re keeping their actual technical advancements “in their pocket”.</p>
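As a ballpark sanity check on that single-GPU claim (a sketch of mine using the figures quoted above, not anything from the card itself):

```go
package main

import "fmt"

func main() {
	// Figures quoted from the model card: 116.8B total parameters,
	// MXFP4 at 4.25 bits/parameter for >90% of the weights.
	const totalParams = 116.8e9
	const bitsPerParam = 4.25
	gib := totalParams * bitsPerParam / 8 / float64(1<<30)
	fmt.Printf("~%.0f GiB of weights\n", gib) // ~58 GiB, comfortably under one 80 GB card
}
```

At full BF16 (16 bits/parameter) the same arithmetic gives ~218 GiB, which is why the quantization is what makes single-GPU serving possible at all.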
]]></description><pubDate>Tue, 05 Aug 2025 17:57:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=44801714</link><dc:creator>foundry27</dc:creator><comments>https://news.ycombinator.com/item?id=44801714</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44801714</guid></item><item><title><![CDATA[New comment by foundry27 in "AWS deleted my 10-year account and all data without warning"]]></title><description><![CDATA[
<p>It’s easy to be fooled, myself included it seems :)<p>For context, here’s a handful of the ChatGPT cues I see.<p>- “wasn’t just my backup—it was my clean room for open‑source development”
- “wasn’t standard AWS incompetence; this was something else entirely”
- “you’re not being targeted; you’re being algorithmically categorized”
- “isn’t a system failure; the architecture and promises are sound”
- “This isn’t just about my account. It’s about what happens when […]”
- “This wasn’t my production infrastructure […] it was my launch pad for updating other infrastructure”
- “The cloud isn’t your friend. It’s a business”<p>I counted about THIRTY em-dashes, which any frequent generative AI user would understand to be a major tell. It’s got an average sentence length of around 11 words (try to write with only 11 words in each sentence, and you’ll see why this is silly), and much of the article consists of brief, punchy sentences separated by periods or question marks, which is the classic ChatGPT prose style. For crying out loud, it even has a <i>table</i> with quippy one-word cell contents at the end of the article like what ChatGPT generates 9/10 times when asked for a comparison of two things.<p>It’s just disappointing. The author is undermining his own credibility for what would otherwise be a very real problem, and again, his actual writing style in his earlier posts is great.</p>
]]></description><pubDate>Sun, 03 Aug 2025 00:21:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=44772960</link><dc:creator>foundry27</dc:creator><comments>https://news.ycombinator.com/item?id=44772960</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44772960</guid></item><item><title><![CDATA[New comment by foundry27 in "AWS deleted my 10-year account and all data without warning"]]></title><description><![CDATA[
<p>Editorial comment: It’s a bit weird to see AI-written (at least partially; you can see the usual em-dashes, it’s-not-X-it’s-Y) blog posts like this detract from an author’s true writing style, which in this case I found significantly more pleasant to read. Read his first ever post, and compare it to this one and many of the other recent posts: <a href="https://www.seuros.com/blog/noflylist-how-noflylist-got-cleared-for-production-3cgf/" rel="nofollow">https://www.seuros.com/blog/noflylist-how-noflylist-got-clea...</a><p>I’m not much of a conspiracy theorist, but I could imagine a blog post almost identical to this one being generated in response to a prompt like “write a first-person narrative about: a cloud provider abruptly deleting a decade-old account and all associated data without warning. Include a plot twist”.<p>I literally cannot tell if this story is something that really happened or not. It scares me a little, because if this was a real problem and I was in the author’s shoes, I would want people to believe me.</p>
]]></description><pubDate>Sat, 02 Aug 2025 22:44:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=44772313</link><dc:creator>foundry27</dc:creator><comments>https://news.ycombinator.com/item?id=44772313</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44772313</guid></item><item><title><![CDATA[New comment by foundry27 in "Gemini 2.5 Deep Think"]]></title><description><![CDATA[
<p>I started doing some experimentation with this new Deep Think agent, and after five prompts I reached my daily usage limit. For $250 USD/mo, that’s what you’ll be getting, folks.<p>It’s just bizarrely uncompetitive with o3-pro and Grok 4 Heavy. Anecdotally, this was the one feature that enthusiasts in the AI community were interested in to justify the exorbitant price of Google’s Ultra subscription. I find it astonishing that the same company providing <i>free</i> usage of their top models to everybody via AI Studio is nickel-and-diming their actual customers like that.<p>Performance-wise, so far I couldn’t even tell. I provided it with a challenging organizational problem that my business was facing, with the relevant context, and it proposed a lucid and well-thought-out solution that was consistent with our internal discussions on the matter. But o3 came to an equally effective conclusion for a fraction of the cost, even if it was a less “cohesive” report. I guess I’ll have to wait until tomorrow to learn more.</p>
]]></description><pubDate>Fri, 01 Aug 2025 14:31:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=44757363</link><dc:creator>foundry27</dc:creator><comments>https://news.ycombinator.com/item?id=44757363</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44757363</guid></item><item><title><![CDATA[New comment by foundry27 in "Show HN: Klaro Budget – Budgeting based on pay schedules"]]></title><description><![CDATA[
<p>I couldn’t even make it past the onboarding! There didn’t seem to be an option for the single most common pay schedule, bi-weekly.</p>
]]></description><pubDate>Thu, 26 Jun 2025 20:47:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=44391178</link><dc:creator>foundry27</dc:creator><comments>https://news.ycombinator.com/item?id=44391178</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44391178</guid></item><item><title><![CDATA[New comment by foundry27 in "Show HN: Turn a paper's DOI into its full reference list (BibTeX/RIS, etc.)"]]></title><description><![CDATA[
<p>Did you just use a LLM to write this reply?</p>
]]></description><pubDate>Sun, 22 Jun 2025 22:52:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=44350958</link><dc:creator>foundry27</dc:creator><comments>https://news.ycombinator.com/item?id=44350958</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44350958</guid></item><item><title><![CDATA[New comment by foundry27 in "First thoughts on o3 pro"]]></title><description><![CDATA[
<p>I find it amusingly ironic how one comment under yours is pointing out that there’s a mistake in the model output, and the other comment under yours trusts that it’s correct but says that it isn’t “real reasoning” anyways because it knows the algorithm. There’s probably something about moving goalposts to be said here</p>
]]></description><pubDate>Thu, 12 Jun 2025 20:57:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=44263120</link><dc:creator>foundry27</dc:creator><comments>https://news.ycombinator.com/item?id=44263120</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44263120</guid></item><item><title><![CDATA[New comment by foundry27 in "The Price of Remission"]]></title><description><![CDATA[
<p>What kind of focus do biopharma companies put on their stock prices? If a company like the one you described had a great treatment option that could genuinely help people and was raking in money by the boatload, is that “enough” for them as a “winning” business strategy regardless of how outside investors might perceive it?</p>
]]></description><pubDate>Sat, 10 May 2025 23:45:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=43949984</link><dc:creator>foundry27</dc:creator><comments>https://news.ycombinator.com/item?id=43949984</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43949984</guid></item><item><title><![CDATA[New comment by foundry27 in "Qwen3: Think deeper, act faster"]]></title><description><![CDATA[
<p>I find the situation the big LLM players find themselves in quite ironic. Sam Altman promised (edit: under duress, from a twitter poll gone wrong) to release an open source model at the level of o3-mini to catch up to the perceived OSS supremacy of Deepseek/Qwen. Now Qwen3’s release makes a model that’s “only” equivalent to o3-mini effectively dead on arrival, both socially and economically.</p>
]]></description><pubDate>Mon, 28 Apr 2025 22:53:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=43826988</link><dc:creator>foundry27</dc:creator><comments>https://news.ycombinator.com/item?id=43826988</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43826988</guid></item><item><title><![CDATA[New comment by foundry27 in "Watching o3 model sweat over a Paul Morphy mate-in-2"]]></title><description><![CDATA[
<p>I just tried the same puzzle in o3 using the same image input, but tweaked the prompt to say “don’t use the search tool”. Very similar results!<p>It spent the first few minutes analyzing the image and cross-checking various slices of the image to make sure it understood the problem. Then it spent the next 6-7 minutes trying to work through various angles to the problem analytically. It decided this was likely a mate-in-two (part of the training data?), but went down the path that the key to solving the problem would be to convert the position to something more easily solvable first. At that point it started trying to pip install all sorts of chess-related packages, and when it couldn’t get that to work it started writing a simple chess solver in Python by hand (which didn’t work either). At one point it thought the script had found a mate-in-six that turned out to be due to a script bug, but I found it impressive that it didn’t just trust the script’s output - instead it analyzed the proposed solution and determined the nature of the bug in the script that caused it. Then it gave up and tried analyzing a bit more for five more minutes, at which point the thinking got cut off and displayed an internal error.<p>15 minutes total, didn’t solve the problem, but fascinating! There were several points where if the model were more “intelligent”, I absolutely could see it reasoning it out following the same steps.</p>
]]></description><pubDate>Mon, 28 Apr 2025 00:54:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=43816496</link><dc:creator>foundry27</dc:creator><comments>https://news.ycombinator.com/item?id=43816496</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43816496</guid></item><item><title><![CDATA[New comment by foundry27 in "Cheating the Reaper in Go"]]></title><description><![CDATA[
<p>tl;dr for anyone who may be put off by the article length:<p>OP built an arena allocator in Go using unsafe to speed up allocations, especially for cases when you're allocating a bunch of stuff that you know lives and dies together. The main issue they ran into is that Go's GC needs to know the layout of your data (specifically, where pointers are) to work correctly, and if you just allocate raw bytes with unsafe.Pointer, the GC might mistakenly free things pointed to from your arena because it can't see those pointers properly. To make it work even with pointers (as long as they point to other stuff in the same arena), the trick is to keep the whole arena alive if any part of it is still referenced. That means (1) keeping a slice (chunks) pointing to all the big memory blocks the arena got from the system, and (2) using reflect.StructOf to create new types for these blocks that include an extra pointer field at the end (pointing back to the Arena). So if the GC finds any pointer into a chunk, it’ll also find the back-pointer, therefore mark the arena as alive, and therefore keep the chunks slice alive. Then they get into a bunch of really interesting optimizations to remove various internal checks and write barriers using funky techniques you might not've seen before.</p>
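For the curious, here's a minimal sketch of that back-pointer trick (my own names and simplifications, not code from the article):

```go
package main

import (
	"fmt"
	"reflect"
	"unsafe"
)

// Arena owns its chunks; as long as the GC can reach the Arena,
// every chunk (and everything they point to) stays alive.
type Arena struct {
	chunks []unsafe.Pointer
}

// newChunk allocates a block of n raw bytes plus a trailing back-pointer
// to the arena. Building the type with reflect.StructOf means the GC
// knows the layout and scans the Back field like any ordinary pointer.
func newChunk(a *Arena, n int) unsafe.Pointer {
	t := reflect.StructOf([]reflect.StructField{
		{Name: "Data", Type: reflect.ArrayOf(n, reflect.TypeOf(byte(0)))},
		{Name: "Back", Type: reflect.TypeOf((*Arena)(nil))},
	})
	v := reflect.New(t)
	v.Elem().Field(1).Set(reflect.ValueOf(a)) // plant the back-pointer
	p := v.UnsafePointer()
	a.chunks = append(a.chunks, p)
	return p // caller carves allocations out of the leading Data bytes
}

func main() {
	a := &Arena{}
	p := newChunk(a, 1024)
	fmt.Println(p != nil, len(a.chunks)) // true 1
}
```

Any GC-visible pointer into a chunk reaches the struct, the struct's Back field reaches the Arena, and the Arena's chunks slice reaches every other chunk, which is exactly the keep-alive chain described above.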
]]></description><pubDate>Tue, 22 Apr 2025 02:20:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=43758534</link><dc:creator>foundry27</dc:creator><comments>https://news.ycombinator.com/item?id=43758534</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43758534</guid></item><item><title><![CDATA[New comment by foundry27 in "Gemini 2.5"]]></title><description><![CDATA[
<p>I believe this is out of date. There’s a very explicit opt in/out slider for permitting training on conversations that doesn’t seem to affect conversation history retention.</p>
]]></description><pubDate>Wed, 26 Mar 2025 12:25:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=43481422</link><dc:creator>foundry27</dc:creator><comments>https://news.ycombinator.com/item?id=43481422</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43481422</guid></item><item><title><![CDATA[New comment by foundry27 in "Making o1, o3, and Sonnet 3.7 hallucinate for everyone"]]></title><description><![CDATA[
<p>It’s always a touch ironic when AI-generated replies such as this one are submitted under posts about AI. Maybe that’s secretly the self-reflection feedback loop we need for AGI :)</p>
]]></description><pubDate>Sat, 01 Mar 2025 19:25:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=43222702</link><dc:creator>foundry27</dc:creator><comments>https://news.ycombinator.com/item?id=43222702</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43222702</guid></item><item><title><![CDATA[New comment by foundry27 in "Xonsh – A Python-powered shell"]]></title><description><![CDATA[
<p>I’ve been using xonsh as my daily driver for a few years now, and it’s a massive productivity booster!<p>Broadly speaking I’ve found that most of the reported compatibility and usability concerns in their GitHub issues have boiled down to user error, rather than any kind of defect with the shell itself. That’s not to say there aren’t any issues, but they’re few and far between, and it’s more than solid enough for regular use. It isn’t bash, and you shouldn’t expect to execute a bash script with xonsh or use bash idioms (even though some compatibility layers exist for e.g. translating your ~/.bashrc and sourcing envvars from bash scripts).</p>
]]></description><pubDate>Tue, 25 Feb 2025 20:28:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=43176948</link><dc:creator>foundry27</dc:creator><comments>https://news.ycombinator.com/item?id=43176948</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43176948</guid></item><item><title><![CDATA[New comment by foundry27 in "Opposing arrows of time can theoretically emerge from certain quantum systems"]]></title><description><![CDATA[
<p>Barbour is criminally underrated as a physics author. He’s published a lot of interesting ideas regarding the role of time, or lack thereof, in modern theories! (The End of Time, and its treatment of Causality as a direct substitute for time in any future theory of everything, was very fun)</p>
]]></description><pubDate>Mon, 17 Feb 2025 01:46:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=43073998</link><dc:creator>foundry27</dc:creator><comments>https://news.ycombinator.com/item?id=43073998</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43073998</guid></item><item><title><![CDATA[New comment by foundry27 in "AI Demos"]]></title><description><![CDATA[
<p>If that’s how it’s being advertised, and that’s the reason people are giving it a shot based on that advertising, then I certainly do! And so, I imagine, did the people who have left feedback so far!</p>
]]></description><pubDate>Sun, 09 Feb 2025 23:57:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=42995432</link><dc:creator>foundry27</dc:creator><comments>https://news.ycombinator.com/item?id=42995432</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42995432</guid></item><item><title><![CDATA[New comment by foundry27 in "PhD Knowledge Not Required: A Reasoning Challenge for Large Language Models"]]></title><description><![CDATA[
<p>I get that sliding in references to a passion project on top-scoring articles might seem like an easy way to give the project exposure, but commenting the same thing over and over comes off as a bit boorish. And just plugging the URL isn’t really contributing anything to the discussions IMO. Why not show us something your tool explained or summarized from the articles that isn’t obvious from a cursory read? Citing the tool as the source for something cool wouldn’t be nearly as in-your-face.</p>
]]></description><pubDate>Sun, 09 Feb 2025 19:17:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=42992903</link><dc:creator>foundry27</dc:creator><comments>https://news.ycombinator.com/item?id=42992903</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42992903</guid></item></channel></rss>