<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: jdcasale</title><link>https://news.ycombinator.com/user?id=jdcasale</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 08 Apr 2026 23:08:31 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=jdcasale" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by jdcasale in "Talos: Hardware accelerator for deep convolutional neural networks"]]></title><description><![CDATA[
<p>Without weighing in on whether this is true, I'll point out that LLMs could both be better writers than most people and also be bad writers.<p>Writing is a difficult skill that many (most?) educational systems do not effectively teach. Most people are terrible writers.</p>
]]></description><pubDate>Wed, 04 Mar 2026 00:39:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47241382</link><dc:creator>jdcasale</dc:creator><comments>https://news.ycombinator.com/item?id=47241382</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47241382</guid></item><item><title><![CDATA[New comment by jdcasale in "Ask HN: Claude Opus performance affected by time of day?"]]></title><description><![CDATA[
<p>The math is obvious on this one. It's super well-documented that model performance on complex tasks scales (up to some asymptote) with the amount of inference-time compute allocated.<p>LLM providers must dynamically scale inference-time compute based on current load because they have limited compute. Thus it's impossible for traffic spikes _not_ to cause some degradation in model performance (at least until/unless they acquire enough compute to saturate that asymptotic curve for every request under all demand conditions -- it does not seem plausible that they are anywhere close to this).</p>
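The load-based scaling described above can be sketched as a toy model (the function name and token numbers are illustrative assumptions, not any provider's actual policy or API):

```rust
// Toy sketch: scale each request's inference-time "thinking" budget
// down as fleet utilization rises. All names and numbers here are
// illustrative assumptions, not a real provider's API.
fn reasoning_budget(max_tokens: u32, min_tokens: u32, load: f64) -> u32 {
    // `load` is current utilization in [0.0, 1.0]; at saturation every
    // request gets only the floor budget, which is where the quality
    // degradation under traffic spikes would come from.
    let load = load.clamp(0.0, 1.0);
    let span = (max_tokens - min_tokens) as f64;
    min_tokens + (span * (1.0 - load)) as u32
}

fn main() {
    println!("{}", reasoning_budget(8000, 1000, 0.2)); // light load: near the cap
    println!("{}", reasoning_budget(8000, 1000, 0.9)); // peak hours: near the floor
}
```

Under this model, only a fleet with enough spare capacity to hold `load` near zero for every request would avoid the effect entirely.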
]]></description><pubDate>Sat, 17 Jan 2026 04:39:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=46655286</link><dc:creator>jdcasale</dc:creator><comments>https://news.ycombinator.com/item?id=46655286</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46655286</guid></item><item><title><![CDATA[New comment by jdcasale in "French Court Orders Popular VPNs to Block More Pirate Sites, Despite Opposition"]]></title><description><![CDATA[
<p>There have been a host of civil servants purged from a litany of federal services for this reason. You don't have to look very hard to find them. Example: <a href="https://www.npr.org/2025/09/10/g-s1-87947/fbi-lawsuit-firing-retribution" rel="nofollow">https://www.npr.org/2025/09/10/g-s1-87947/fbi-lawsuit-firing...</a>.<p>Another (higher profile) example is the baseless threats of criminal indictments against Jerome Powell -- it is impossible to argue that these threats were made for any reason other than that he, as a nonpartisan official, defied the president's demands to execute his duties as Fed chair poorly so as to put a temporary thumb on the scale for the current admin.<p>The more important question, I think, is how many folks in explicitly nonpartisan functions are choosing not to break step with the current admin for fear of some sort of (likely professional) reprisal. I'm not alleging that they're disappearing dissenters or anything that inflammatory, but it would be intellectually dishonest to contend that there isn't a long, well-documented trail of malfeasance here.</p>
]]></description><pubDate>Thu, 15 Jan 2026 18:18:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=46636749</link><dc:creator>jdcasale</dc:creator><comments>https://news.ycombinator.com/item?id=46636749</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46636749</guid></item><item><title><![CDATA[New comment by jdcasale in "French Court Orders Popular VPNs to Block More Pirate Sites, Despite Opposition"]]></title><description><![CDATA[
<p>For what it's worth, I have lived in, and currently spend a lot of time in, both places. You're both very obviously wrong.<p>There is a serious problem in the US. There is also a serious (though different) problem in the UK. The problem in the US is the chilling effect of the vindictiveness and lawlessness of the current regime. I will not elaborate on this because it's too complicated to communicate effectively in a forum post.<p>The problem in the UK is a set of vaguely and arbitrarily specified-and-enforced laws that enable the criminalization of "grossly offensive" speech. There is no statutory definition of what constitutes a "grossly offensive" communication -- all enforcement is arbitrary and thus can be abused. Whether it is actually abused in any widespread fashion is irrelevant.<p>- Communications Act 2003 (Section 127): Makes it an offense to send messages via public electronic networks (internet, phone, social media) that are "grossly offensive," indecent, obscene, or menacing, or to cause annoyance/anxiety.<p>- Malicious Communications Act 1988 (Section 1): Applies to sending letters or electronic communications with the purpose of causing distress or anxiety, containing indecent or grossly offensive content.</p>
]]></description><pubDate>Thu, 15 Jan 2026 17:15:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=46635804</link><dc:creator>jdcasale</dc:creator><comments>https://news.ycombinator.com/item?id=46635804</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46635804</guid></item><item><title><![CDATA[New comment by jdcasale in "A.I. researchers are negotiating $250M pay packages"]]></title><description><![CDATA[
<p>Anyone on earth can completely and totally ignore football and it will have zero consequences for their life.<p>The money here (in the AI realm) is coming from a handful of oligarchs who are transparently trying to buy control of the future.<p>The difference between the two scenarios is... kinda obvious, don't you think?</p>
]]></description><pubDate>Sat, 02 Aug 2025 20:28:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=44771125</link><dc:creator>jdcasale</dc:creator><comments>https://news.ycombinator.com/item?id=44771125</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44771125</guid></item><item><title><![CDATA[New comment by jdcasale in "He Had Dangerous Delusions. ChatGPT Admitted It Made Them Worse"]]></title><description><![CDATA[
<p>The sycophancy is obviously intentional. People are vulnerable to it, and addiction is profitable. It has nothing to do with the nature of LLMs and everything to do with user engagement metrics.</p>
]]></description><pubDate>Sun, 20 Jul 2025 21:02:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=44629285</link><dc:creator>jdcasale</dc:creator><comments>https://news.ycombinator.com/item?id=44629285</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44629285</guid></item><item><title><![CDATA[New comment by jdcasale in "C3 solved memory lifetimes with scopes"]]></title><description><![CDATA[
<p>I am also struggling to see the difference between this and language-level support for an arena allocator with RAII.</p>
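For reference, the pattern being compared to is roughly the following (a minimal sketch of an arena whose RAII destructor frees everything at scope exit -- not C3's actual semantics, nor any particular arena library's API):

```rust
// Minimal sketch of scope-tied arena allocation via RAII: every
// allocation lives exactly as long as the arena, and all of it is
// freed in one shot when the arena's Drop runs at end of scope.
struct Arena {
    // Each allocation is kept alive until the arena itself is dropped.
    storage: Vec<String>,
}

impl Arena {
    fn new() -> Self {
        Arena { storage: Vec::new() }
    }

    // Returns a handle (index) rather than a reference, to keep the
    // sketch borrow-checker-simple.
    fn alloc(&mut self, s: &str) -> usize {
        self.storage.push(s.to_string());
        self.storage.len() - 1
    }

    fn get(&self, idx: usize) -> &str {
        &self.storage[idx]
    }
}

fn main() {
    let len = {
        let mut arena = Arena::new(); // lifetime tied to this scope
        let id = arena.alloc("temporary");
        arena.get(id).len() // allocations usable anywhere in the scope
    }; // arena dropped here: all of its memory is freed at once
    println!("{}", len); // prints 9
}
```

The point of comparison: whether a scope construct baked into the language buys anything beyond what this destructor-at-scope-exit pattern already provides.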
]]></description><pubDate>Sun, 13 Jul 2025 15:46:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=44551248</link><dc:creator>jdcasale</dc:creator><comments>https://news.ycombinator.com/item?id=44551248</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44551248</guid></item><item><title><![CDATA[New comment by jdcasale in "Australians to face age checks from search engines"]]></title><description><![CDATA[
<p>I'd keep in mind that internet usage of '96 (I was there) bears no resemblance whatsoever to internet usage of today. The level of predatory sophistication of today's attention economy makes any sort of comparison between the two misguided at best.</p>
]]></description><pubDate>Wed, 02 Jul 2025 11:17:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=44442325</link><dc:creator>jdcasale</dc:creator><comments>https://news.ycombinator.com/item?id=44442325</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44442325</guid></item><item><title><![CDATA[New comment by jdcasale in "Show HN: Seastar – Build and dependency manager for C/C++ with Cargo's features"]]></title><description><![CDATA[
<p>As opposed to taking like 30 seconds to install cargo and rust?<p>I get that the elegant thing to do would be to bootstrap this, but in practice does this actually cost you anything, or is this a purely aesthetic concern?</p>
]]></description><pubDate>Mon, 16 Jun 2025 03:46:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=44286458</link><dc:creator>jdcasale</dc:creator><comments>https://news.ycombinator.com/item?id=44286458</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44286458</guid></item><item><title><![CDATA[New comment by jdcasale in "Are people bad at their jobs or are the jobs just bad?"]]></title><description><![CDATA[
<p>You have misunderstood something here.<p>I (like a very large plurality, maybe even a majority, of devs) do not work for a consulting firm. There is no client.<p>I've done consulting work in the past, though. Any leader who does not take into account (at least to some degree) the relative educational value of assignments when staffing projects is invariably a bad leader.<p>All work is training for a junior. In this context, the idea that you can't ethically train a junior "on a client's dime" is exactly equivalent to saying that you can't ever ethically staff juniors on a consulting project -- that's a ridiculous notion. The work is going to get done, but a junior obviously isn't going to be as fast as I am at any task.</p>
]]></description><pubDate>Wed, 09 Apr 2025 04:27:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=43628863</link><dc:creator>jdcasale</dc:creator><comments>https://news.ycombinator.com/item?id=43628863</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43628863</guid></item><item><title><![CDATA[New comment by jdcasale in "Are people bad at their jobs or are the jobs just bad?"]]></title><description><![CDATA[
<p>"If you asked a junior developer to refactor a rust program to be more idiomatic, how long would you expect that to take? Would you expect the work to compile on the first try?"<p>The purpose of giving that task to a junior dev isn't to get the task done, it's to teach them -- I will almost always be at least an order of magnitude faster than a junior for any given task. I don't expect juniors to be similarly productive to me, I expect them to learn.<p>The parent comment also referred to a 'competent pair programmer', not a junior dev.<p>My point was that for the tasks I wanted to use the LLM for, frequently there was no amount of specificity that could help the model solve them -- I tried for a long time, and if the task wasn't obvious to me, the model generally could not solve it. I'd end up in a game of trying to do nondeterministic/fuzzy programming in English instead of just writing some code to solve the problem.<p>Again, I agree that there is significant value here, because there is a ton of SWE work that is technically trivial, boring, and just eats up time. It's also super helpful as a natural-language info-lookup interface.</p>
]]></description><pubDate>Thu, 03 Apr 2025 16:10:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=43571813</link><dc:creator>jdcasale</dc:creator><comments>https://news.ycombinator.com/item?id=43571813</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43571813</guid></item><item><title><![CDATA[New comment by jdcasale in "Are people bad at their jobs or are the jobs just bad?"]]></title><description><![CDATA[
<p>I recently tried Cursor for about a week and I was disappointed. It was useful for generating code that someone else has definitely written before (boilerplate etc.), but any time I tried to do something nontrivial, it failed no matter how much poking, prodding, and thoughtful prompting I tried.<p>Even when I asked it for stuff like refactoring a relatively simple rust file to be more idiomatic or organized, it consistently generated code that did not compile and was unable to fix the compile errors across 5 or 6 repromptings.<p>For what it's worth, a lot of SWE work is technically trivial -- it makes this much quicker, so there's obviously some value there, but if we're comparing it to a pair programmer, I would definitely fire a dev who had this sort of extremely limited complexity ceiling.<p>It really feels to me (just vibes, obviously not scientific) like it is good at interpolating between things in its training set, but is not really able to do anything more than that. Presumably this will get better over time.</p>
]]></description><pubDate>Thu, 03 Apr 2025 00:03:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=43563270</link><dc:creator>jdcasale</dc:creator><comments>https://news.ycombinator.com/item?id=43563270</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43563270</guid></item><item><title><![CDATA[New comment by jdcasale in "U.S. national-security leaders included me in a group chat"]]></title><description><![CDATA[
<p>Another excellent point.</p>
]]></description><pubDate>Tue, 25 Mar 2025 20:07:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=43475334</link><dc:creator>jdcasale</dc:creator><comments>https://news.ycombinator.com/item?id=43475334</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43475334</guid></item><item><title><![CDATA[New comment by jdcasale in "U.S. national-security leaders included me in a group chat"]]></title><description><![CDATA[
<p>Without commenting on the (important) political or reputational considerations here, I want to talk a bit about the operational risk presented by this practice. There is a somewhat sizable "So what? Signal is e2e encrypted. Nothing bad happened and you're all overreacting." narrative floating around (not so much in this thread, but in the general discourse).<p>If this operation was planned in Signal, then so were countless others (and presumably so would countless others be in the future).<p>If not for this journalist, this would likely have continued indefinitely. We have high confidence that at least some of the officials were doing this on their personal phones. (Gabbard refused to deny this in the congressional hearing -- it does not stand to reason that she'd do that unless she was, in fact, using her personal phone.)<p>At some point in the administration, it's likely that at least one of their personal phones will be compromised (Pegasus, etc). E2E encryption isn't much use if the phone itself is compromised. This is why we have SCIFs.<p>There was no operational fallout from this particular screwup, but if this practice were to continue, it's all but certain that an adversary would, at some point, compromise these communications. Not through being accidentally invited to the chat rooms, but through compromise of the participants' hardware. An APT could have advance notice of all manner of confidential and natsec-critical plans.<p>In all likelihood this would lead to failed operations and casualties. The criticism/pushback on this is absolutely justified.</p>
]]></description><pubDate>Tue, 25 Mar 2025 19:38:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=43475003</link><dc:creator>jdcasale</dc:creator><comments>https://news.ycombinator.com/item?id=43475003</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43475003</guid></item><item><title><![CDATA[New comment by jdcasale in "It doesn't cost much to improve someone's life"]]></title><description><![CDATA[
<p>This assertion is sharply undercut by the facts. I have an incredibly hard time believing that you're engaging in good faith here.<p>There is literally zero evidence whatsoever that Russia cares about 'equality for ordinary people' and a mountain of conclusive proof that it does not.<p>Ukraine did not owe Russia anything at all, so these 'negotiations' were nothing more than theater. Russia gave Ukraine the choice between either surrendering their sovereignty (for literally zero benefit in exchange) or being invaded. That is not a negotiation, that's state-sponsored terrorism.</p>
]]></description><pubDate>Sat, 15 Mar 2025 14:42:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=43372850</link><dc:creator>jdcasale</dc:creator><comments>https://news.ycombinator.com/item?id=43372850</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43372850</guid></item><item><title><![CDATA[New comment by jdcasale in "They Thought They Were Free: The Germans, 1933-45 (1955)"]]></title><description><![CDATA[
<p>I got a little carried away with this response and it's a little off-topic, but I figured it might be worth posting anyway.<p>I think this has to do with the nonlinear growth in the human-facing complexity of the world over the past 30 years.<p>Humans aren't getting more intelligent (they may not be getting dumber either, but at the very least, the hardware is the same), but the complexity of the world that we have to engage with has undergone accelerating growth for most of my lifetime. The fraction of this complexity that is exposed to 'normal' people has also grown significantly over that period of time with the 24-hour news cycle, social media, mobile internet, etc.<p>It's obvious that at some point in this trend any given person will start running into issues with the world that are above their complexity ceiling. If this event is rare, we shrug it off and move on with our day. If this becomes commonplace, we start to drown in that complexity and desperately cling to sources of perceived clarity, because it's fucking terrifying to be surrounded by a world that you don't understand.<p>The thing that the right has done really well and that the left has generally failed to do in my lifetime is to identify sources of complexity and provide appealing clarity around them. This clarity is necessarily an approximation of the truth, but we NEED simple answers that make the world less scary. People also, as a general rule, don't like to be lectured or told that they are part of the problem -- the right never foists any blame upon the people it's targeting.<p>In my lifetime, the left has pretty consistently fought amongst ourselves over which inaccuracies are allowable or just when we attempt to create simplifying approximations. Instead of providing a unified, simplifying vision for any given topic, the messaging gives several conflicting accounts that make it easy to see the cracks in each argument, and often serve to make the problem worse. 
If you're competing with another source of information that is simple, clear, and makes people feel good (or at least like they are good), you will always lose if you do not also achieve those three things.<p>In the vacuum created by a lack of simple, blameless, intuitive messaging from an (arguably) well-meaning left-leaning establishment, the intuitive (though generally wrong and often cruel) explanations offered by the right have found huge support and adoption by people who need someone to help them understand the world. Because both messages are approximations of the truth (and thus sources of verifiable inaccuracies) people just choose the one that makes them feel better.<p>tldr I think we've hit a point where:<p>- The world is too complex for many people to independently navigate<p>- People need to rely on simplifying approximations of the world<p>- Media provides these approximations, often in bad faith<p>- Sources of credibility or expertise often provide these approximations in good faith, but can't agree on which approximations are the 'right' ones<p>- Good faith messaging often either fails to simplify or makes people feel bad/guilty<p>- People are sick of feeling bad or guilty<p>- People associate expertise with being scolded over things that don't feel fair or fully accurate to them<p>Thus people often reject expertise out of principle, and just believe whatever Fox News tells them because it feels better.<p>ALSO: People who believe the 'right' things are often pretty shitty to people who don't (it goes both ways, but the other direction doesn't matter for this post). I've been guilty of this. This just further galvanizes the association between expertise or the 'right' ideas/people and feelings of resentment/guilt/shame for these folks. They may not understand what you said, but they do understand that you were talking down to them, and they hate you for that.</p>
]]></description><pubDate>Wed, 05 Feb 2025 12:50:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=42947791</link><dc:creator>jdcasale</dc:creator><comments>https://news.ycombinator.com/item?id=42947791</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42947791</guid></item><item><title><![CDATA[New comment by jdcasale in "They Thought They Were Free: The Germans, 1933-45 (1955)"]]></title><description><![CDATA[
<p>Fwiw, my experience growing up in deep red America was that anti-intellectualism was staggeringly strong there. People would actually define their beliefs in opposition to those of people they perceived to be 'smart'.<p>The way that I always understood this was that if they had a disagreement with someone 'smarter' than them, and they operated in good faith, they would lose ~98% of the time. This doesn't feel good. It makes smart people threatening -- it breeds resentment toward them.<p>However, if you have a roomful of people who define their position in opposition to the 'smart' person, your beliefs are the ones that matter, regardless of what the truth is, so you get to feel like you've won the argument. Most arguments are not consequential, so this practice doesn't really cause meaningful short-term harm, so there's no negative feedback.<p>Over the long term, this herd mentality is how people learn to navigate the world, and you end up with a giant mess.</p>
]]></description><pubDate>Wed, 05 Feb 2025 11:31:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=42947056</link><dc:creator>jdcasale</dc:creator><comments>https://news.ycombinator.com/item?id=42947056</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42947056</guid></item></channel></rss>