<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: alex_sf</title><link>https://news.ycombinator.com/user?id=alex_sf</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 08 Apr 2026 23:43:44 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=alex_sf" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by alex_sf in "US and Iran agree to provisional ceasefire"]]></title><description><![CDATA[
<p>This isn't buried or hard to find, but in good faith:<p><a href="https://www.dni.gov/files/ODNI/documents/assessments/ODNI-Unclassified-Irans-Nuclear-Weapons-Capability-and-Terrorism-Monitoring-Act-of-2022-202407.pdf" rel="nofollow">https://www.dni.gov/files/ODNI/documents/assessments/ODNI-Un...</a><p><a href="https://www.dni.gov/index.php/newsroom/congressional-testimonies/congressional-testimonies-2023/3685-dni-haines-opening-statement-on-the-2023-annual-threat-assessment-of-the-u-s-intelligence-community" rel="nofollow">https://www.dni.gov/index.php/newsroom/congressional-testimo...</a></p>
]]></description><pubDate>Wed, 08 Apr 2026 05:06:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47685522</link><dc:creator>alex_sf</dc:creator><comments>https://news.ycombinator.com/item?id=47685522</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47685522</guid></item><item><title><![CDATA[New comment by alex_sf in "US and Iran agree to provisional ceasefire"]]></title><description><![CDATA[
<p>> Saying "Accidents happen in war" is absolutely a way of saying "Accidents are acceptable in war".<p>Bridges fall down sometimes.  I don't think it's acceptable.  It's a statement of fact.  There are always going to be mistakes, in every field and in pursuit of every goal.  Your objection and implications aren't particularly charitable here.<p>> My "brilliant" plan would have been the negotiations that were happening where Iran agreed to pretty strict monitoring and stipulations on nuclear fuel development.<p>Iran was not complying with the monitoring requirements.<p>> The "Iran was getting nukes" rhetoric needs real evidence that was actually happening not "we think that might be happening because Trump said so."<p>Intelligence agencies under both Biden and Trump (and since at least the 90s) have repeatedly confirmed it.<p>This isn't really a question or doubt any reasonable person can have.  There can be an argument about how close they are at any given moment, but they are actively pursuing nuclear weapons.</p>
]]></description><pubDate>Wed, 08 Apr 2026 03:21:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47684675</link><dc:creator>alex_sf</dc:creator><comments>https://news.ycombinator.com/item?id=47684675</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47684675</guid></item><item><title><![CDATA[New comment by alex_sf in "1M context is now generally available for Opus 4.6 and Sonnet 4.6"]]></title><description><![CDATA[
<p>It's both.</p>
]]></description><pubDate>Sat, 14 Mar 2026 20:45:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47380979</link><dc:creator>alex_sf</dc:creator><comments>https://news.ycombinator.com/item?id=47380979</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47380979</guid></item><item><title><![CDATA[New comment by alex_sf in "ATMs didn’t kill bank teller jobs, but the iPhone did"]]></title><description><![CDATA[
<p>If goods aren't being sold, then the price will drop.</p>
]]></description><pubDate>Thu, 12 Mar 2026 15:43:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47352452</link><dc:creator>alex_sf</dc:creator><comments>https://news.ycombinator.com/item?id=47352452</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47352452</guid></item><item><title><![CDATA[New comment by alex_sf in "Vercel's CEO offers to cover expenses of 'Jmail'"]]></title><description><![CDATA[
<p>That's not worth 45k.  It's barely worth anything for a typical website, tbh.</p>
]]></description><pubDate>Tue, 10 Feb 2026 20:58:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=46966820</link><dc:creator>alex_sf</dc:creator><comments>https://news.ycombinator.com/item?id=46966820</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46966820</guid></item><item><title><![CDATA[New comment by alex_sf in "Cloudflare claimed they implemented Matrix on Cloudflare workers. They didn't"]]></title><description><![CDATA[
<p>Tbf, there is no one with a ‘serious DevSecOps background’.  It’s an incredibly strong hint that the person is largely a goof.</p>
]]></description><pubDate>Tue, 27 Jan 2026 17:54:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=46783594</link><dc:creator>alex_sf</dc:creator><comments>https://news.ycombinator.com/item?id=46783594</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46783594</guid></item><item><title><![CDATA[New comment by alex_sf in "Gas Town's agent patterns, design bottlenecks, and vibecoding at scale"]]></title><description><![CDATA[
<p>> Ralph loops are also stupid because they don't make use of kv cache properly.<p>This is a cost/resources thing.  If it's more effective and the resources are available, it's completely fine.</p>
]]></description><pubDate>Fri, 23 Jan 2026 17:14:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=46734977</link><dc:creator>alex_sf</dc:creator><comments>https://news.ycombinator.com/item?id=46734977</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46734977</guid></item><item><title><![CDATA[New comment by alex_sf in "Don Knuth plays with ChatGPT"]]></title><description><![CDATA[
<p>> If you are not too tired, drunk or using any substances, and not speeding, your chances of causing a serious traffic accident are miniscule.<p>You realize that like.. other people exist, right?</p>
]]></description><pubDate>Mon, 22 May 2023 02:51:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=36026907</link><dc:creator>alex_sf</dc:creator><comments>https://news.ycombinator.com/item?id=36026907</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36026907</guid></item><item><title><![CDATA[New comment by alex_sf in "Don Knuth plays with ChatGPT"]]></title><description><![CDATA[
<p>> Would you sign up for such a system if you can volunteer to participate in it, with now those random killings being restricted to those who've signed up for it, including you?<p>I mean, we already have.  You volunteer to participate in a system where ~40k people die in the US every year by engaging in travel on public roadways.  If self-driving reduces that to 10k, that's a win.  You're not really making any sense.</p>
]]></description><pubDate>Mon, 22 May 2023 02:50:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=36026903</link><dc:creator>alex_sf</dc:creator><comments>https://news.ycombinator.com/item?id=36026903</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36026903</guid></item><item><title><![CDATA[New comment by alex_sf in "Don Knuth plays with ChatGPT"]]></title><description><![CDATA[
<p>If the goal is to reduce the number of fatal mistakes, why is that argument garbage?</p>
]]></description><pubDate>Sun, 21 May 2023 03:44:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=36017836</link><dc:creator>alex_sf</dc:creator><comments>https://news.ycombinator.com/item?id=36017836</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36017836</guid></item><item><title><![CDATA[New comment by alex_sf in "Don Knuth plays with ChatGPT"]]></title><description><![CDATA[
<p>Taking RLHF into account: it's not actually generating the most plausible completion, it's generating one that's worse.</p>
]]></description><pubDate>Sun, 21 May 2023 03:42:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=36017824</link><dc:creator>alex_sf</dc:creator><comments>https://news.ycombinator.com/item?id=36017824</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36017824</guid></item><item><title><![CDATA[New comment by alex_sf in "US Supreme Court leaves protections for internet companies unscathed"]]></title><description><![CDATA[
<p>> A fairly reliable determinant for how the Court will rule is found using a materialist analysis. That is, the Court will generally side with corporations and capital owners when given the choice.<p>This is a big claim.  Do you have any evidence to support it?<p>In the wake of someone trying to prove the same for Congress, it was conclusively shown that the opposite was true:<p><a href="https://www.vox.com/2016/5/9/11502464/gilens-page-oligarchy-study" rel="nofollow">https://www.vox.com/2016/5/9/11502464/gilens-page-oligarchy-...</a><p>I see several opinion pieces making the same claim, but no actual studies of their decisions.<p>More importantly: the concern can't and shouldn't be the income of the parties involved in a suit, but who is right and who isn't.</p>
]]></description><pubDate>Fri, 19 May 2023 21:12:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=36006909</link><dc:creator>alex_sf</dc:creator><comments>https://news.ycombinator.com/item?id=36006909</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36006909</guid></item><item><title><![CDATA[New comment by alex_sf in "Rocky Linux 8.8 Available Now"]]></title><description><![CDATA[
<p>Rocky and CentOS are both based on Red Hat Enterprise Linux (RHEL).<p>CentOS used to be a free and open source downstream version of RHEL.  Keeping the history short: Red Hat effectively acquired CentOS and discontinued it as a downstream version of RHEL.  They turned it into 'CentOS Stream', which is, more or less, a continuously delivered upstream version of RHEL.  This wasn't acceptable to a large portion of the CentOS user base.<p>One of the original founders of CentOS, Gregory Kurtzer, started Rocky as an alternative.  It's basically what CentOS used to be: a free and open source downstream version of RHEL.</p>
]]></description><pubDate>Fri, 19 May 2023 21:03:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=36006811</link><dc:creator>alex_sf</dc:creator><comments>https://news.ycombinator.com/item?id=36006811</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36006811</guid></item><item><title><![CDATA[New comment by alex_sf in "Show HN: Oblivus GPU Cloud – Affordable and scalable GPU servers from $0.29/hr"]]></title><description><![CDATA[
<p>Lambda availability is awful.</p>
]]></description><pubDate>Tue, 16 May 2023 16:29:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=35964175</link><dc:creator>alex_sf</dc:creator><comments>https://news.ycombinator.com/item?id=35964175</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35964175</guid></item><item><title><![CDATA[New comment by alex_sf in "GitHub Copilot Chat Leaked Prompt"]]></title><description><![CDATA[
<p>> It is definitely not an intended feature for the end user to be able to trick the model into believing it said something it didn't say. It also doesn't work with ChatGPT or Bing Chat, as far as I can tell. I was talking about the user, not about the developer.<p>Those aren't models, they are applications built on top of models.<p>> That can be done with special tokens also. The difference is that the user can't enter those tokens themselves.<p>Sure.  But there are no open models that do that, and no indication of whether the various closed models do it either.</p>
]]></description><pubDate>Sat, 13 May 2023 22:38:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=35933475</link><dc:creator>alex_sf</dc:creator><comments>https://news.ycombinator.com/item?id=35933475</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35933475</guid></item><item><title><![CDATA[New comment by alex_sf in "GitHub Copilot Chat Leaked Prompt"]]></title><description><![CDATA[
<p>In all the open source cases I’m aware of, the roles are just normal text.<p>The ability to trivially trick the model into thinking it said something it didn’t is a feature and intentional.  It’s how you do multi-turn conversations with context.<p>Since the current crop of LLMs has no memory of the interaction, each follow-up message (the back and forth of a conversation) involves sending the entire history back into the model, with the role as a prefix for each participant’s output/input.<p>There are some special tokens used (end of sequence, etc).<p>If your product doesn’t directly expose the underlying model, you can try to prevent users from impersonating responses through obfuscation or the LLM equivalent of prepared statements.  The offensive side of prompt injection is currently beating the defensive side, though.</p>
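<p>A minimal sketch of that history-replay pattern (the "USER:"/"ASSISTANT:" labels and prompt layout here are illustrative, not any real model's chat template):

```python
# Toy sketch of multi-turn prompting: the model is stateless, so each turn
# re-sends the entire conversation as plain text with role prefixes.
# The role labels below are made up for illustration; real chat formats
# vary by model and often use special tokens instead of plain text.

def build_prompt(history, user_message):
    """Flatten prior turns plus the new message into one prompt string."""
    turns = history + [("USER", user_message)]
    lines = [f"{role}: {text}" for role, text in turns]
    lines.append("ASSISTANT:")  # cue the model to generate the next reply
    return "\n".join(lines)

history = [
    ("USER", "What is RHEL?"),
    ("ASSISTANT", "Red Hat Enterprise Linux."),
]
prompt = build_prompt(history, "And CentOS?")
```

Because the roles are just text in the prompt, nothing stops a user from embedding their own "ASSISTANT:" line inside a message, which is exactly the impersonation described above.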
]]></description><pubDate>Sat, 13 May 2023 16:00:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=35929647</link><dc:creator>alex_sf</dc:creator><comments>https://news.ycombinator.com/item?id=35929647</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35929647</guid></item><item><title><![CDATA[New comment by alex_sf in "GitHub Copilot Chat Leaked Prompt"]]></title><description><![CDATA[
<p>A really good token predictor is still a token predictor.</p>
]]></description><pubDate>Sat, 13 May 2023 15:51:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=35929551</link><dc:creator>alex_sf</dc:creator><comments>https://news.ycombinator.com/item?id=35929551</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35929551</guid></item><item><title><![CDATA[New comment by alex_sf in "GitHub Copilot Chat Leaked Prompt"]]></title><description><![CDATA[
<p>It sounds like we both know that's the case, but there's a ton of incorrect info being shared in this thread re: RLHF and instruction tuning.<p>Sorry if it came off as more than an attempt to clarify things for folks coming across the thread.</p>
]]></description><pubDate>Sat, 13 May 2023 04:48:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=35925644</link><dc:creator>alex_sf</dc:creator><comments>https://news.ycombinator.com/item?id=35925644</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35925644</guid></item><item><title><![CDATA[New comment by alex_sf in "GitHub Copilot Chat Leaked Prompt"]]></title><description><![CDATA[
<p>It's not being pedantic.  RLHF and instruction tuning are completely different things.  Painting with watercolors does not make water paint.<p>Nearly all popular local models are instruction tuned, but are not RLHF'd.  The OAI GPT series are not the only LLMs in the world.</p>
]]></description><pubDate>Sat, 13 May 2023 04:32:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=35925543</link><dc:creator>alex_sf</dc:creator><comments>https://news.ycombinator.com/item?id=35925543</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35925543</guid></item><item><title><![CDATA[New comment by alex_sf in "GitHub Copilot Chat Leaked Prompt"]]></title><description><![CDATA[
<p>Instruction tuning is distinct from RLHF.  Instruction tuning teaches the model to understand and respond (in a sensible way) to instructions, versus 'just' completing text.<p>RLHF trains a model to adjust its output based on a reward model.  The reward model is trained from human feedback.<p>You can have an instruction tuned model with no RLHF, RLHF with no instruction tuning, or instruction tuning and RLHF.  Totally orthogonal.</p>
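<p>A toy sketch of why the two stages are orthogonal (everything here is invented for illustration; real instruction tuning and RLHF update neural network weights, not a lookup table):

```python
from collections import defaultdict

# Toy "model": a table of completion scores per prompt, to show the data
# flow of the two stages. All names here are hypothetical.

class ToyModel:
    def __init__(self):
        self.scores = defaultdict(lambda: defaultdict(float))

    def sample(self, prompt, candidates):
        # Return the currently highest-scored candidate completion.
        return max(candidates, key=lambda c: self.scores[prompt][c])

    def reinforce(self, prompt, completion, amount):
        self.scores[prompt][completion] += amount

def instruction_tune(model, pairs):
    # Supervised stage: push the model toward reference responses
    # for (instruction, response) pairs. No reward model involved.
    for instruction, response in pairs:
        model.reinforce(instruction, response, 1.0)
    return model

def rlhf(model, reward_fn, prompts, candidates):
    # RL stage: sample an output, score it with a (learned) reward model,
    # and update the policy toward higher-reward outputs. No reference
    # responses involved.
    for prompt in prompts:
        out = model.sample(prompt, candidates)
        model.reinforce(prompt, out, reward_fn(prompt, out))
    return model
```

Either function can be applied without the other, in either order, which is the orthogonality point: a base model can get instruction tuning only, RLHF only, or both.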
]]></description><pubDate>Sat, 13 May 2023 03:53:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=35925337</link><dc:creator>alex_sf</dc:creator><comments>https://news.ycombinator.com/item?id=35925337</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35925337</guid></item></channel></rss>