<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: apical_dendrite</title><link>https://news.ycombinator.com/user?id=apical_dendrite</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 22:52:55 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=apical_dendrite" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by apical_dendrite in "Stanford report highlights growing disconnect between AI insiders and everyone"]]></title><description><![CDATA[
<p>When I get a message from a co-worker that seems to have been written by an LLM, I am incredibly turned off and instantly think less of the person. It can be easy to spot: key words bolded, acknowledging that I'm right, longer and with a different tone than their typical messages, with neat bullet points.<p>It feels a little disrespectful. It feels a little pointless (why am I bothering to talk to you if I can get the same result from the AI?). I have no idea whether you've given the problem any actual thought, or if you're just copy-pasting an answer. I have no idea if you actually believe what you're telling me (or if you've even read it or understood it).</p>
]]></description><pubDate>Mon, 13 Apr 2026 22:50:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47758966</link><dc:creator>apical_dendrite</dc:creator><comments>https://news.ycombinator.com/item?id=47758966</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47758966</guid></item><item><title><![CDATA[New comment by apical_dendrite in "OpenAI says its new model GPT-2 is too dangerous to release (2019)"]]></title><description><![CDATA[
<p>I have a lot of trouble understanding the mindset of a person who thinks that what they're building is so dangerous that it must be locked away or it will cause untold harm, but also that they must build it as fast as possible.<p>I can understand it in the context of the Manhattan project, where you're fighting a war for survival. I cannot understand how you can do it as a commercial enterprise.</p>
]]></description><pubDate>Wed, 08 Apr 2026 03:31:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47684783</link><dc:creator>apical_dendrite</dc:creator><comments>https://news.ycombinator.com/item?id=47684783</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47684783</guid></item><item><title><![CDATA[New comment by apical_dendrite in "Iran-linked hackers breach FBI director's personal email"]]></title><description><![CDATA[
<p>You're missing some key distinctions. The issues are: 1) putting classified information into a non-classified system; 2) putting information that needs to be preserved under laws like the Presidential Records Act into systems where it's set to be auto-deleted. Both are illegal. Simply saying that the Biden administration pre-installed Signal is irrelevant. There are legitimate uses.<p>Your own article makes this exact point:
> Matthew Shoemaker, a former Defense Intelligence Agency analyst who left the agency in 2021, said that while Signal was used during his time in government, “it was almost exclusively restricted to scheduling purposes,” such as letting their boss know that they’ll be late to work because of personal circumstances.
“That’s why Signalgate is all the more staggering — because these senior leaders were doing the exact opposite of what even my most junior intelligence officers knew not to do,” he said.<p>You're doing bullshit partisan whataboutism: "Well, the Democrats did it first."<p>This has nothing to do with adding the wrong contacts. It has to do with putting highly-sensitive material into Signal to circumvent the law around records preservation, and as a result creating a situation where it's possible to accidentally add the wrong contact and expose that information to a journalist.</p>
]]></description><pubDate>Fri, 27 Mar 2026 17:57:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47546075</link><dc:creator>apical_dendrite</dc:creator><comments>https://news.ycombinator.com/item?id=47546075</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47546075</guid></item><item><title><![CDATA[New comment by apical_dendrite in "The 'paperwork flood': How I drowned a bureaucrat before dinner"]]></title><description><![CDATA[
<p>You'd be surprised how many doctors still rely on a physical fax machine.</p>
]]></description><pubDate>Fri, 27 Mar 2026 17:51:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47545979</link><dc:creator>apical_dendrite</dc:creator><comments>https://news.ycombinator.com/item?id=47545979</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47545979</guid></item><item><title><![CDATA[New comment by apical_dendrite in "The 'paperwork flood': How I drowned a bureaucrat before dinner"]]></title><description><![CDATA[
<p>We still deal with doctors who handwrite their progress notes. Fax will be around for a very, very long time.</p>
]]></description><pubDate>Fri, 27 Mar 2026 17:47:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47545926</link><dc:creator>apical_dendrite</dc:creator><comments>https://news.ycombinator.com/item?id=47545926</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47545926</guid></item><item><title><![CDATA[New comment by apical_dendrite in "The 'paperwork flood': How I drowned a bureaucrat before dinner"]]></title><description><![CDATA[
<p>Amazingly enough, this is actually not true. Many smaller doctors' offices still have a physical fax machine. I work on automation for certain processes in healthcare and a very large proportion of the faxes we receive come from physical fax machines. You can see artifacts on the fax itself and sometimes the cover letter will have a scribbled note.</p>
]]></description><pubDate>Fri, 27 Mar 2026 17:46:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47545907</link><dc:creator>apical_dendrite</dc:creator><comments>https://news.ycombinator.com/item?id=47545907</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47545907</guid></item><item><title><![CDATA[New comment by apical_dendrite in "Iran-linked hackers breach FBI director's personal email"]]></title><description><![CDATA[
<p>Source?</p>
]]></description><pubDate>Fri, 27 Mar 2026 16:04:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47544522</link><dc:creator>apical_dendrite</dc:creator><comments>https://news.ycombinator.com/item?id=47544522</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47544522</guid></item><item><title><![CDATA[New comment by apical_dendrite in "Thoughts on slowing the fuck down"]]></title><description><![CDATA[
<p>Unfortunately, I think the lesson from recent history is that outside of highly-regulated industries, customers and businesses will accept terrible quality as long as it's cheap.</p>
]]></description><pubDate>Wed, 25 Mar 2026 16:31:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47519632</link><dc:creator>apical_dendrite</dc:creator><comments>https://news.ycombinator.com/item?id=47519632</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47519632</guid></item><item><title><![CDATA[New comment by apical_dendrite in "The American Healthcare Conundrum"]]></title><description><![CDATA[
<p>Where are you getting that from the link that you shared (which is one specific school)? The link you shared shows a figure of $34k and doesn't show a clear breakdown of administrative vs non-administrative costs. The closest I can see in that link is that $13k/$34k is allocated to central services, but most of that cost goes to things like the school buses and the cafeteria and the security guards, which are direct services to students, not administrative overhead. They just are run at the system level, not the individual school level.</p>
]]></description><pubDate>Tue, 17 Mar 2026 13:15:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47412210</link><dc:creator>apical_dendrite</dc:creator><comments>https://news.ycombinator.com/item?id=47412210</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47412210</guid></item><item><title><![CDATA[New comment by apical_dendrite in "The American Healthcare Conundrum"]]></title><description><![CDATA[
<p>My understanding is that there are a number of reasons why commercial insurance companies pay more. A big one is that Medicare has enormous pricing power because people on Medicare are a huge segment of the population and also the segment that consumes the most healthcare services. Your local healthcare system can't NOT take Medicare. They're effectively stuck with the reimbursement rates that Medicare sets. On the other hand, healthcare systems have a ton of power in their local markets. A healthcare system can afford to not be in network for a particular insurer, but if that insurer loses access to the biggest healthcare system in a particular market, it can be devastating for them. A major employer is not going to be happy if their executives have to all change doctors because the big local hospital system is no longer in network.</p>
]]></description><pubDate>Tue, 17 Mar 2026 03:20:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47408204</link><dc:creator>apical_dendrite</dc:creator><comments>https://news.ycombinator.com/item?id=47408204</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47408204</guid></item><item><title><![CDATA[New comment by apical_dendrite in "The American Healthcare Conundrum"]]></title><description><![CDATA[
<p>This isn't really true anymore (if it was ever true). Providers are spending a huge amount of time dealing with prior authorizations and appeals for private insurance.<p>I work in this area and you're right that Medicare can require a huge amount of paperwork from providers. And a hospital will have far more than 2 FTEs for this (it's called Revenue Cycle Management).</p>
]]></description><pubDate>Tue, 17 Mar 2026 02:55:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47408036</link><dc:creator>apical_dendrite</dc:creator><comments>https://news.ycombinator.com/item?id=47408036</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47408036</guid></item><item><title><![CDATA[New comment by apical_dendrite in "The American Healthcare Conundrum"]]></title><description><![CDATA[
<p>There's another reason. The harder you make it for a provider to get reimbursed for a service (in order to cut down on fraud), the more difficult it is for legitimate patients to access that service. Medicare patients are elderly. Many of them aren't able to chase after doctors to get the services they need.</p>
]]></description><pubDate>Tue, 17 Mar 2026 02:51:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47407999</link><dc:creator>apical_dendrite</dc:creator><comments>https://news.ycombinator.com/item?id=47407999</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47407999</guid></item><item><title><![CDATA[New comment by apical_dendrite in "The American Healthcare Conundrum"]]></title><description><![CDATA[
<p>I'm working on a project in an area of healthcare where there was massive Medicare fraud decades ago. Medicare now requires extensive documentation for each claim and the paperwork is so onerous that providers have exited the market and it's very, very difficult to access care.</p>
]]></description><pubDate>Tue, 17 Mar 2026 02:48:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47407983</link><dc:creator>apical_dendrite</dc:creator><comments>https://news.ycombinator.com/item?id=47407983</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47407983</guid></item><item><title><![CDATA[New comment by apical_dendrite in "Palestinian boy, 12, describes how Israeli forces killed his family in car"]]></title><description><![CDATA[
<p>David Simon and others have written extensively for decades about the problems with the Baltimore Police Department, and other departments around the country. They trace these problems back to the war on drugs and other purely American factors.<p>The Amnesty article that you're citing is a post hoc ergo propter hoc fallacy. The Baltimore Police Department did not need to learn about constitutional violations from the Israelis.</p>
]]></description><pubDate>Mon, 16 Mar 2026 21:12:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47404952</link><dc:creator>apical_dendrite</dc:creator><comments>https://news.ycombinator.com/item?id=47404952</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47404952</guid></item><item><title><![CDATA[New comment by apical_dendrite in "Another DOGE staffer explaining how he flagged grants at NEH for "DEI""]]></title><description><![CDATA[
<p>What DOGE was doing here effectively erased any non-white person from history. It goes way beyond rolling back "DEI". Essentially they were saying that a project on an incident in history where the participants were white was OK, but a project on a similar incident in history where the participants were Black or female or Jewish was not OK, because it's "DEI". So for instance, a grant to study labor history through the lens of white coal miners would be OK, but a grant to study labor history through the lens of female Jewish garment workers would get canceled.</p>
]]></description><pubDate>Thu, 12 Mar 2026 16:31:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=47353388</link><dc:creator>apical_dendrite</dc:creator><comments>https://news.ycombinator.com/item?id=47353388</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47353388</guid></item><item><title><![CDATA[New comment by apical_dendrite in "Verification debt: the hidden cost of AI-generated code"]]></title><description><![CDATA[
<p>Excellent questions.</p>
]]></description><pubDate>Sat, 07 Mar 2026 18:55:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47290403</link><dc:creator>apical_dendrite</dc:creator><comments>https://news.ycombinator.com/item?id=47290403</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47290403</guid></item><item><title><![CDATA[New comment by apical_dendrite in "Verification debt: the hidden cost of AI-generated code"]]></title><description><![CDATA[
<p>Yes, because he can't answer basic questions about the code.<p>He was hired because we needed a contractor quickly and he and his company represented to us that he was a lot more experienced than he actually is.</p>
]]></description><pubDate>Sat, 07 Mar 2026 18:31:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47290189</link><dc:creator>apical_dendrite</dc:creator><comments>https://news.ycombinator.com/item?id=47290189</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47290189</guid></item><item><title><![CDATA[New comment by apical_dendrite in "Verification debt: the hidden cost of AI-generated code"]]></title><description><![CDATA[
<p>I used Gemini to look up a relative with a connection to a famous event. The relative himself is obscure, but I have some of his writings and I've heard his story from other relatives. Gemini fabricated a completely false narrative about my relative that was much more exciting than what actually happened. I spent a bunch of time looking at the sources that Gemini supplied, trying to verify things, and although the sources were real, the story Gemini came up with was completely made up.</p>
]]></description><pubDate>Sat, 07 Mar 2026 18:20:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47290081</link><dc:creator>apical_dendrite</dc:creator><comments>https://news.ycombinator.com/item?id=47290081</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47290081</guid></item><item><title><![CDATA[New comment by apical_dendrite in "Verification debt: the hidden cost of AI-generated code"]]></title><description><![CDATA[
<p>My company recently hired a contractor. He submits multi-thousand-line PRs every day, far faster than I can review them. This would maybe be OK if I could trust his output, but I can't. When I ask him really basic questions about the system, he either doesn't know or he gets it wrong.<p>This week, I asked for some simple scripts that would let someone load data into a local or staging environment, so that the system could be tested in various configurations. He submitted a PR with 3800 lines of shell scripts. We do not have any significant shell scripts anywhere else in our codebase. I spent several hours reviewing it with him - maybe more time than he spent writing it. His PR had tons and tons of end-to-end tests of the system that didn't actually test anything - some said they were validating state, but passed if a GET request returned a 200. There were a few tests that called a create API. The tests would pass if the API returned an ID of the created object. But they would ALSO pass if the API didn't return an ID.<p>I was trying to be a good teacher, so I kept asking questions like "why did you make this decision?" to try to have a conversation about the design choices, and it was very clear that he was just making up bullshit rationalizations - he hadn't made any decisions at all. There was one particularly nonsensical test suite - it said it was testing X but included API calls that had nothing to do with X. I was trying to figure out how he had come up with that, and then I realized: I had given him a Postman export with some example API requests, and in one of the requests I had gotten lazy and modified the request to test something but hadn't modified the name in Postman. So the LLM had assumed that the request was related to the old name and used it when generating a test suite, even though the two had nothing to do with each other. He had probably never actually read the output, so he had no idea that it made no sense.<p>When he was first hired, I asked him to refactor a core part of the system to improve code quality (get rid of previous LLM slop). He submitted a 2000+ line PR within a day or so. He's getting frustrated because I haven't reviewed it, and he has other 2000+ line PRs waiting on review. I asked him some questions about how this part of the system was invoked and how it returned data to the rest of the system, and he couldn't answer. At that point I tried to explain why I am reluctant to let him commit his refactor of a core part of the system when he can't even explain the basic functionality of that component.</p>
]]></description><pubDate>Sat, 07 Mar 2026 18:01:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47289922</link><dc:creator>apical_dendrite</dc:creator><comments>https://news.ycombinator.com/item?id=47289922</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47289922</guid></item><item><title><![CDATA[New comment by apical_dendrite in "LLMs work best when the user defines their acceptance criteria first"]]></title><description><![CDATA[
<p>The volume is different. Someone submitted a PR this week that was 3800 lines of shell script. Most of it was crap and none of it should have been in shell script. He's submitting PRs with thousands of lines of code every day. He has no idea how any of it actually works, and it completely overwhelms my ability to review.<p>Sure, he could have submitted an ill-considered 3800-line PR five years ago, but it would have taken him at least a week, and there probably would have been opportunities to submit smaller chunks along the way or discuss the approach.</p>
]]></description><pubDate>Sat, 07 Mar 2026 15:49:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47288673</link><dc:creator>apical_dendrite</dc:creator><comments>https://news.ycombinator.com/item?id=47288673</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47288673</guid></item></channel></rss>