<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: scott_s</title><link>https://news.ycombinator.com/user?id=scott_s</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 15 Apr 2026 04:29:19 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=scott_s" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by scott_s in "We might all be AI engineers now"]]></title><description><![CDATA[
<p>You are correct, but this is not a new role. AI effectively makes all of us tech leads.</p>
]]></description><pubDate>Fri, 06 Mar 2026 19:17:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47279721</link><dc:creator>scott_s</dc:creator><comments>https://news.ycombinator.com/item?id=47279721</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47279721</guid></item><item><title><![CDATA[New comment by scott_s in "We might all be AI engineers now"]]></title><description><![CDATA[
<p>That's not what the author means. Multiple times a day, I have conversations with LLMs about specific code or general technologies. It is very similar to having the same conversation with a colleague. Yes, the LLM may be wrong. Which is why I'm constantly looking at the code myself to see if the explanation makes sense, or finding external docs to see if the concepts check out.<p>Importantly, the LLM is not writing code for me. It's explaining things, and I'm coming away with verifiable facts and conceptual frameworks I can apply to my work.</p>
]]></description><pubDate>Fri, 06 Mar 2026 19:15:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47279683</link><dc:creator>scott_s</dc:creator><comments>https://news.ycombinator.com/item?id=47279683</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47279683</guid></item><item><title><![CDATA[New comment by scott_s in "What's up with all those equals signs anyway?"]]></title><description><![CDATA[
<p>I think of, and look up, this drunken rant at least once a year.</p>
]]></description><pubDate>Tue, 03 Feb 2026 15:53:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=46872581</link><dc:creator>scott_s</dc:creator><comments>https://news.ycombinator.com/item?id=46872581</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46872581</guid></item><item><title><![CDATA[New comment by scott_s in "ACM Is Now Open Access"]]></title><description><![CDATA[
<p>Yes, and that peer review happens <i>through</i> the ACM. It serves an organizing function. The conferences themselves are also in-person events, and most of the important research papers come out of those conferences.</p>
]]></description><pubDate>Thu, 08 Jan 2026 02:32:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=46536464</link><dc:creator>scott_s</dc:creator><comments>https://news.ycombinator.com/item?id=46536464</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46536464</guid></item><item><title><![CDATA[New comment by scott_s in "ACM Is Now Open Access"]]></title><description><![CDATA[
<p>It doesn't. arXiv is exclusively a pre-print service. The ACM digital library is for peer-reviewed, published papers. All of the peer-review happens through the ACM, as well as the physical conferences where people present and publish their papers.</p>
]]></description><pubDate>Sat, 03 Jan 2026 00:54:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46471531</link><dc:creator>scott_s</dc:creator><comments>https://news.ycombinator.com/item?id=46471531</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46471531</guid></item><item><title><![CDATA[New comment by scott_s in "ACM Is Now Open Access"]]></title><description><![CDATA[
<p>IEEE may do it, as it's a professional organization. That is, they're a non-profit dedicated to the furtherance of the field. Being open access fits their mission, and the costs can be handled by dues and fees. Springer and Elsevier are for-profit publishers. I don't know if they can have an open-access business model.</p>
]]></description><pubDate>Thu, 01 Jan 2026 16:52:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=46455577</link><dc:creator>scott_s</dc:creator><comments>https://news.ycombinator.com/item?id=46455577</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46455577</guid></item><item><title><![CDATA[New comment by scott_s in "ACM Is Now Open Access"]]></title><description><![CDATA[
<p>Great news. They temporarily opened it in 2020 during the pandemic. I argued it should remain so in a post: <a href="https://www.scott-a-s.com/acm-digital-library-should-remain-open" rel="nofollow">https://www.scott-a-s.com/acm-digital-library-should-remain-...</a>. I'm glad it's finally happened.</p>
]]></description><pubDate>Thu, 01 Jan 2026 16:49:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=46455543</link><dc:creator>scott_s</dc:creator><comments>https://news.ycombinator.com/item?id=46455543</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46455543</guid></item><item><title><![CDATA[New comment by scott_s in "I ignore the spotlight as a staff engineer"]]></title><description><![CDATA[
<p>Gather metrics and regularly report them.</p>
]]></description><pubDate>Fri, 05 Dec 2025 16:04:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=46163126</link><dc:creator>scott_s</dc:creator><comments>https://news.ycombinator.com/item?id=46163126</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46163126</guid></item><item><title><![CDATA[New comment by scott_s in "What Killed Perl?"]]></title><description><![CDATA[
<p>Agreed. In grad school, I used Perl to script running my benchmarks, post-process my data and generate pretty graphs for papers. It was all Perl 5 and gnuplot. Once I saw someone do the same thing with Python and matplotlib, I never looked back. I later actually started using Python professionally, as I believe lots of other people had similar epiphanies. And not just from Perl, but from different languages and domains.<p>I think the article's author is implicitly not considering that people who were around when Perl was popular, who were perfectly capable of "understanding" it, actively decided against it.</p>
]]></description><pubDate>Wed, 19 Nov 2025 19:39:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=45984078</link><dc:creator>scott_s</dc:creator><comments>https://news.ycombinator.com/item?id=45984078</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45984078</guid></item><item><title><![CDATA[New comment by scott_s in "Scientist exposes anti-wind groups as oil-funded, now they want to silence him"]]></title><description><![CDATA[
<p>That's true of all renewable energy sources. So we should take advantage of all of them, as much as is feasible.</p>
]]></description><pubDate>Wed, 27 Aug 2025 13:10:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=45039186</link><dc:creator>scott_s</dc:creator><comments>https://news.ycombinator.com/item?id=45039186</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45039186</guid></item><item><title><![CDATA[New comment by scott_s in "Claude Sonnet 4 now supports 1M tokens of context"]]></title><description><![CDATA[
<p>You train on data. Context is also data. If you want a model to have certain data, you can bake it into the model during training, or provide it as context during inference. But if the "context" you want the model to have is big enough, you're going to want to train (or fine-tune) on it.<p>Consider that you're coding a Linux device driver. If you ask for help from an LLM that has never seen the Linux kernel code, has never seen a Linux device driver and has never seen all of the documentation from the Linux kernel, you're going to need to provide all of this as context. And that's both going to be onerous on you, and it might not be feasible. But if the LLM has already seen all of that during training, you don't need to provide it as context. Your context may be as simple as "I am coding a Linux device driver" and show it some of your code.</p>
]]></description><pubDate>Thu, 14 Aug 2025 14:07:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=44900568</link><dc:creator>scott_s</dc:creator><comments>https://news.ycombinator.com/item?id=44900568</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44900568</guid></item><item><title><![CDATA[New comment by scott_s in "Claude Sonnet 4 now supports 1M tokens of context"]]></title><description><![CDATA[
<p>Because training one family of models with very large context windows can be offered to the entire world as an online service. That is a very different business model from training or fine-tuning individual models specifically for individual customers. <i>Someone</i> will figure out how to do that at scale, eventually. It might require the cost of training to reduce significantly. But large companies with the resources to do this for themselves will do it, and many are doing it.</p>
]]></description><pubDate>Wed, 13 Aug 2025 18:49:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=44892275</link><dc:creator>scott_s</dc:creator><comments>https://news.ycombinator.com/item?id=44892275</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44892275</guid></item><item><title><![CDATA[New comment by scott_s in "Claude Sonnet 4 now supports 1M tokens of context"]]></title><description><![CDATA[
<p>> Of course, because I am not new to the problem, whereas an LLM is new to it every new prompt.<p>That is true for the LLMs you have access to now. Now imagine if the LLM had been <i>trained</i> on your entire code base. And not just the code, but the entire commit history, commit messages and also all of your external design docs. <i>And</i> code and docs from all relevant projects. That LLM would not be new to the problem every prompt. Basically, imagine that you fine-tuned an LLM for your specific project. You will eventually have access to such an LLM.</p>
]]></description><pubDate>Wed, 13 Aug 2025 14:33:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=44888989</link><dc:creator>scott_s</dc:creator><comments>https://news.ycombinator.com/item?id=44888989</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44888989</guid></item><item><title><![CDATA[New comment by scott_s in "My AI skeptic friends are all nuts"]]></title><description><![CDATA[
<p>The tools are at the point now that ignoring them is akin to ignoring Stack Overflow posts. Basically any time you'd google for the answer to something, you might as well ask an AI assistant. It has a good chance of giving you a good answer. And given how programming works, it's usually easy to verify the information. Just like, say, you would do with a Stack Overflow post.</p>
]]></description><pubDate>Wed, 04 Jun 2025 20:30:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=44185143</link><dc:creator>scott_s</dc:creator><comments>https://news.ycombinator.com/item?id=44185143</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44185143</guid></item><item><title><![CDATA[New comment by scott_s in "Exploiting Undefined Behavior in C/C++ Programs: The Performance Impact [pdf]"]]></title><description><![CDATA[
<p>It's not as obvious a win as you may think. Keep in mind that for every binary that gets deployed and executed, it will be compiled many more times before and after for testing. For some binaries, this number could easily reach the hundreds of thousands of times. Why? In a monorepo, a lot of changes come in every day, and testing those changes involves traversing a reachability graph of potentially affected code and running their tests.</p>
]]></description><pubDate>Sat, 26 Apr 2025 01:43:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=43800170</link><dc:creator>scott_s</dc:creator><comments>https://news.ycombinator.com/item?id=43800170</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43800170</guid></item><item><title><![CDATA[New comment by scott_s in "Why is Warner Bros. Discovery putting old movies on YouTube?"]]></title><description><![CDATA[
<p><i>Murder in the First</i> is one of them, and it is a long favorite of mine: <a href="https://www.youtube.com/watch?v=X42yOL5Ah4E&list=PL7Eup7JXScZyvRftA2Q5hv69XiegDm6tQ&index=16" rel="nofollow">https://www.youtube.com/watch?v=X42yOL5Ah4E&list=PL7Eup7JXSc...</a><p>It has the best performance I've ever seen by Kevin Bacon, and a solid performance from Christian Slater. Gary Oldman is a solid villain. R. Lee Ermey does his usual thing, but he's really good at that usual thing. I think about lines and ideas from it frequently. Granted, this is partly because the movie came out when I was 15 and I watched it at a formative age with friends. But I've also watched it recently, and I think it holds up.</p>
]]></description><pubDate>Thu, 06 Feb 2025 15:19:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=42963172</link><dc:creator>scott_s</dc:creator><comments>https://news.ycombinator.com/item?id=42963172</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42963172</guid></item><item><title><![CDATA[New comment by scott_s in "C: Simple Defer, Ready to Use"]]></title><description><![CDATA[
<p>Look at the Linux kernel. It uses gotos for exactly this purpose, and it’s some of the cleanest C code you’ll ever read.<p>C++ destructors are great for this, but are not possible in C. Destructors require an object model that C does not have.</p>
]]></description><pubDate>Mon, 06 Jan 2025 21:19:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=42615994</link><dc:creator>scott_s</dc:creator><comments>https://news.ycombinator.com/item?id=42615994</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42615994</guid></item><item><title><![CDATA[New comment by scott_s in "Principles of Educational Programming Language Design"]]></title><description><![CDATA[
<p>I think you overestimate the ability of a tiny community of curators to generate examples to meet the curiosity of students.</p>
]]></description><pubDate>Tue, 17 Dec 2024 14:55:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=42441820</link><dc:creator>scott_s</dc:creator><comments>https://news.ycombinator.com/item?id=42441820</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42441820</guid></item><item><title><![CDATA[New comment by scott_s in "Principles of Educational Programming Language Design"]]></title><description><![CDATA[
<p>My first program was indeed C++. In 1998, my high school had a computer lab set up with Turbo C++, and I took a non-AP computer science class. In college, starting in 1999, after entering as a computer science major, we were guided to use Visual C++ on Windows. We got Visual C++ from our department - I can't remember if we paid or if it was just provided to us.</p>
]]></description><pubDate>Mon, 16 Dec 2024 23:00:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=42436463</link><dc:creator>scott_s</dc:creator><comments>https://news.ycombinator.com/item?id=42436463</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42436463</guid></item><item><title><![CDATA[New comment by scott_s in "Principles of Educational Programming Language Design"]]></title><description><![CDATA[
<p>I actually think Scratch is fine for 10ish year olds, mainly because all of my above holds true: scratch.mit.edu is an online community where kids can copy, tweak and in general be inspired by and learn from what other kids have done. Your universe can expand with your curiosity. When my nephew was 10, he started with Scratch. My brother guided him towards using Python on a Raspberry Pi soon after.<p>For kids around 10, I think it's all about what the kid thinks is more fun.</p>
]]></description><pubDate>Mon, 16 Dec 2024 22:48:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=42436365</link><dc:creator>scott_s</dc:creator><comments>https://news.ycombinator.com/item?id=42436365</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42436365</guid></item></channel></rss>