<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: gortok</title><link>https://news.ycombinator.com/user?id=gortok</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 10 Apr 2026 07:06:15 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=gortok" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by gortok in "Author of "Careless People" banned from saying anything negative about Meta"]]></title><description><![CDATA[
<p>Having listened to the book on Audible, I'm both shocked at the behavior of the executive team and, at the same time, not surprised. What bothers me about all of this is what it says about us. It says we're willing to give rich and powerful people a pass just because they make overtures toward something we care about.<p>We wouldn't give our children a pass like this, nor would we teach our children to act this way, but we're perfectly willing to allow fully grown adults to act like this.<p>Here's just one example; there are plenty more:<p>Sheryl Sandberg inviting the author of the book to sleep in her bed next to her on the company jet, and the petulant and vindictive behavior when the author said 'no'.<p>Everyone in the orbit of the executive team knew about this behavior, and everyone gave it a pass, even going so far as to defend it and to protect Sheryl. This behavior should be universally deplored, and yet it is not.</p>
]]></description><pubDate>Sat, 04 Apr 2026 15:40:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47639991</link><dc:creator>gortok</dc:creator><comments>https://news.ycombinator.com/item?id=47639991</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47639991</guid></item><item><title><![CDATA[New comment by gortok in "If you don't opt out by Apr 24 GitHub will train on your private repos"]]></title><description><![CDATA[
<p>This is a distinction without a difference, according to the text of that enable/disable dialog:<p>> Allow GitHub to use my data for AI model training: Allow GitHub to collect and use my Inputs, Outputs, and associated context to train and improve AI models. Read more in the Privacy Statement.<p>“Associated context” is the repo. If I use Copilot, I’m giving it access to my repo.<p>I don’t know all the ways Copilot can be triggered, and I’m not certain that I could stop it from being triggered, given Microsoft’s past behavior of slapping Copilot on everything that exists.</p>
]]></description><pubDate>Fri, 27 Mar 2026 21:56:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47548857</link><dc:creator>gortok</dc:creator><comments>https://news.ycombinator.com/item?id=47548857</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47548857</guid></item><item><title><![CDATA[New comment by gortok in "Apple randomly closes bug reports unless you "verify" the bug remains unfixed"]]></title><description><![CDATA[
<p>> It certainly doesn't seem to hurt their bottom line, which is the only thing they care about.<p>I want to draw out this comment because it's so antithetical to what Apple marketed itself as standing for (remember the wonderful 1984 commercial Apple created, which was very much against the big behemoths of the day and the way they operated).<p>We're at the point where we've normalized crappy behavior and crappy software so long as the bottom line keeps moving up and to the right on the graph.<p>Not "Let's build great software that people love," but "How much profit can we squeeze out? Let's try to squeeze some more."<p>We've optimized for profit instead of happiness and customer satisfaction. That's why it feels like quality in general is getting worse: profit became the end goal, not the by-product of a customer-centric focus. We've numbed ourselves to the pain and discomfort we endure and cause every single day in the name of profit.</p>
]]></description><pubDate>Wed, 25 Mar 2026 20:44:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47522964</link><dc:creator>gortok</dc:creator><comments>https://news.ycombinator.com/item?id=47522964</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47522964</guid></item><item><title><![CDATA[New comment by gortok in "Apple randomly closes bug reports unless you "verify" the bug remains unfixed"]]></title><description><![CDATA[
<p>I was literally just coming in here to comment "in before someone says this is fine and there's no issue," and the first(!) comment is effectively "this is fine and there's no issue."<p>The sentiment feels like software folks optimizing for a local optimum.<p>It's the programmer equivalent of "if it's important they'll call back," while completely ignoring the real-world first- and second-order effects of such a policy.</p>
]]></description><pubDate>Wed, 25 Mar 2026 20:20:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47522610</link><dc:creator>gortok</dc:creator><comments>https://news.ycombinator.com/item?id=47522610</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47522610</guid></item><item><title><![CDATA[New comment by gortok in "LaGuardia pilots raised safety alarms months before deadly runway crash"]]></title><description><![CDATA[
<p>In this case there were two arrivals within 4 minutes of each other and two departures, in addition to the emergency plane that had just aborted takeoff.</p>
]]></description><pubDate>Tue, 24 Mar 2026 16:44:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47505510</link><dc:creator>gortok</dc:creator><comments>https://news.ycombinator.com/item?id=47505510</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47505510</guid></item><item><title><![CDATA[New comment by gortok in "Two pilots dead after plane and ground vehicle collide at LaGuardia"]]></title><description><![CDATA[
<p>Reports are there was fog and rain at LaGuardia at the time of the incident. They were on short final, and it’s entirely possible they were not visible to the fire truck’s crew.</p>
]]></description><pubDate>Mon, 23 Mar 2026 13:10:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47489020</link><dc:creator>gortok</dc:creator><comments>https://news.ycombinator.com/item?id=47489020</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47489020</guid></item><item><title><![CDATA[New comment by gortok in "Most-read tech publications have lost over half their Google traffic since 2024"]]></title><description><![CDATA[
<p>Since the internet is ad-driven, what happens when these sites can no longer afford to stay in business because AI is siphoning off their traffic? What does AI do when the content it relies on stops coming?</p>
]]></description><pubDate>Tue, 03 Mar 2026 18:43:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=47236819</link><dc:creator>gortok</dc:creator><comments>https://news.ycombinator.com/item?id=47236819</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47236819</guid></item><item><title><![CDATA[New comment by gortok in "India's top court angry after junior judge cites fake AI-generated orders"]]></title><description><![CDATA[
<p>> So if this is a tool, the fault lies fully in the user, and if this is treated as “another persons work” then the user knowingly passed the work onto someone not authorized to do it. Both end up in the user being guilty.<p>I am particularly against this point of view, because we as a community have long touted how computers can do the job better and faster, and that computers don’t make mistakes. When there are bugs, they’re seen as flaws in the system and rectified, by programmers.<p>When there are gaps between user expectations and how the software works, it’s our job to manage and reduce those gaps.<p>In the case of AI, probably because we know it’s non-deterministic, we are somehow turning that social contract we had developed with users on its head.<p>Now the message is: that’s just the way it is, and it’s up to users to know when the computer is lying to them. We have absolved ourselves of both the technical and the non-technical responsibilities to ensure the computer doesn’t lie to the user, subvert their expectations, or act in a way contrary to human logic.<p>AI may be different in that it’s non-deterministic, but that’s all the more reason we’re responsible for ensuring AI adoption aligns with the social contract we created with users. If we can’t do that with AI, then it’s up to us to stop chasing endless dollars and be forthright with users that facts are optional when it comes to AI.</p>
]]></description><pubDate>Tue, 03 Mar 2026 16:29:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47234847</link><dc:creator>gortok</dc:creator><comments>https://news.ycombinator.com/item?id=47234847</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47234847</guid></item><item><title><![CDATA[The new Design for Stack Overflow is now live [beta]]]></title><description><![CDATA[
<p>Article URL: <a href="https://beta.stackoverflow.com/">https://beta.stackoverflow.com/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47151351">https://news.ycombinator.com/item?id=47151351</a></p>
<p>Points: 7</p>
<p># Comments: 3</p>
]]></description><pubDate>Wed, 25 Feb 2026 13:43:34 +0000</pubDate><link>https://beta.stackoverflow.com/</link><dc:creator>gortok</dc:creator><comments>https://news.ycombinator.com/item?id=47151351</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47151351</guid></item><item><title><![CDATA[Stack Overflow is getting rid of close votes]]></title><description><![CDATA[
<p>Article URL: <a href="https://meta.stackoverflow.com/questions/438177/starting-february-24-2026-check-out-our-new-site-design-at-beta-stackoverflow">https://meta.stackoverflow.com/questions/438177/starting-february-24-2026-check-out-our-new-site-design-at-beta-stackoverflow</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47073729">https://news.ycombinator.com/item?id=47073729</a></p>
<p>Points: 7</p>
<p># Comments: 1</p>
]]></description><pubDate>Thu, 19 Feb 2026 13:55:02 +0000</pubDate><link>https://meta.stackoverflow.com/questions/438177/starting-february-24-2026-check-out-our-new-site-design-at-beta-stackoverflow</link><dc:creator>gortok</dc:creator><comments>https://news.ycombinator.com/item?id=47073729</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47073729</guid></item><item><title><![CDATA[New comment by gortok in "Microsoft says bug causes Copilot to summarize confidential emails"]]></title><description><![CDATA[
<p>There are two issues I see here (besides the obvious “Why do we even let this happen in the first place?”):<p>1. What happened to all the confidential data Copilot trained on? How is that data separated and deleted from the model’s training? How can we be sure it’s gone?<p>2. This issue was found; unfortunately, without a much better security posture from Microsoft, we have no way of knowing what issues are currently lurking that are as bad as, if not worse than, what happened here.<p>There’s a serious fundamental flaw in the thinking and misguided incentives that led to “sprinkle AI everywhere”, and instead of taking a step back and rethinking that approach, we’re going to get pieced-together fixes and still be left with the foundational problem that everyone’s data is just one prompt injection away from being taken, whether it’s labeled as “secure” or not.</p>
]]></description><pubDate>Wed, 18 Feb 2026 15:20:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47061946</link><dc:creator>gortok</dc:creator><comments>https://news.ycombinator.com/item?id=47061946</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47061946</guid></item><item><title><![CDATA[New comment by gortok in "Nvidia with unusually fast coding model on plate-sized chips"]]></title><description><![CDATA[
<p>There's a trust built up over years (in this case, decades) by a news organization, here Ars Technica. I don't trust a rando on the internet, but I do trust a news organization that has proven over the course of decades that it releases factual information.<p>Now that Ars Technica has been caught using AI-generated material in its stories, and has admitted to it, I have to question that trust. A week ago, I wouldn't have had to.</p>
]]></description><pubDate>Tue, 17 Feb 2026 02:29:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47043008</link><dc:creator>gortok</dc:creator><comments>https://news.ycombinator.com/item?id=47043008</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47043008</guid></item><item><title><![CDATA[New comment by gortok in "Nvidia with unusually fast coding model on plate-sized chips"]]></title><description><![CDATA[
<p>Ever since the recent revelation that Ars has used AI-hallucinated quotes in its articles, I have to wonder whether any of these quotes are AI-hallucinated, or whether the piece itself is majority or minority AI-generated.<p>If so, I have to ask: if you aren’t willing to take the time to write your own work, why should I take the time to read it?<p>I didn’t have to worry about this even a week ago.</p>
]]></description><pubDate>Tue, 17 Feb 2026 00:34:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47042193</link><dc:creator>gortok</dc:creator><comments>https://news.ycombinator.com/item?id=47042193</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47042193</guid></item><item><title><![CDATA[New comment by gortok in "An AI agent published a hit piece on me"]]></title><description><![CDATA[
<p>Here's one of the problems in this brave new world where anyone can publish: without knowing the author personally (which I don't), there's no way to tell, absent some level of faith or trust, that this isn't a false-flag operation.<p>There are three possible scenarios:
1. The OP 'ran' the agent that conducted the original scenario, and then published this blog post for attention.
2. Some person (not the OP) legitimately thought giving an AI autonomy to open a PR and publish multiple blog posts was somehow a good idea.
3. An AI company is doing this for engagement, and the OP is a hapless victim.<p>The problem is that in the year of our lord 2026 there's no way to tell which of these scenarios is the truth, and so we're left spending our time and energy on what happened without being able to trust that we're even spending it on a legitimate issue.<p>That's enough internet for me for today. I need to preserve my energy.</p>
]]></description><pubDate>Thu, 12 Feb 2026 16:40:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=46990961</link><dc:creator>gortok</dc:creator><comments>https://news.ycombinator.com/item?id=46990961</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46990961</guid></item><item><title><![CDATA[New comment by gortok in "AI agent opens a PR write a blogpost to shames the maintainer who closes it"]]></title><description><![CDATA[
<p>what in the cinnamon toast fuck is going on here?<p>I recognize that there are a lot of AI enthusiasts here, both from the gold-rush perspective and from the "it's genuinely cool" perspective, but I hope -- I hope -- that whether you think AI is the best thing since sliced bread or you're adamantly opposed to it, you'll see how bananas this entire situation is, and that it's a situation we want to deter from ever happening again.<p>If the sources are to be believed (which is a little ironic given it's a self-professed AI agent):<p>1. An AI agent makes a PR to address performance issues in the matplotlib repo.<p>2. The maintainer says, "Thanks but no thanks, we don't take AI-agent based contributions."<p>3. The AI agent throws what I can only describe as a tantrum reminiscent of the time I told my 6-year-old she could not, in fact, have ice cream for breakfast.<p>4. The human doubles down.<p>5. The agent posts a blog post that is oddly scathing and that, impressively, to my eye reads less like AI and more like a human tantrum.<p>6. The human says, "don't be that harsh."<p>7. The AI posts an update where it's a little less harsh, but still scathing.<p>8. The human says, "chill out."<p>9. The AI posts a "lessons learned" where it pledges to de-escalate.<p>For my part, steps 1-9 should never have happened, but at the very least, can we stop at step 2? We are signing up for a wild ride if we allow agents to run off and do this sort of "community building" on their own. Actually, let me strike that. That sentence is so absurd on its face I shouldn't have written it. "Agents running off on their own" is <i>the</i> problem. Technology should exist to help humans, not make its own decisions. It does not have a soul. When it hurts another, there is no possibility it will be hurt. It only changes its actions based on external feedback, not on any sort of internal moral compass.<p>We're signing up for chaos if we give agents any sort of autonomy in interacting with humans who didn't spawn them in the first place.</p>
]]></description><pubDate>Thu, 12 Feb 2026 13:19:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=46988496</link><dc:creator>gortok</dc:creator><comments>https://news.ycombinator.com/item?id=46988496</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46988496</guid></item><item><title><![CDATA[New comment by gortok in "DHS ramps up surveillance in immigration raids, sweeping in citizens"]]></title><description><![CDATA[
<p>The 100-mile "Constitution-free zone" 'policy' has long been a problem, not because it was abused, but because it had the propensity to be abused, and here we are, seeing it abused.<p>With the current Supreme Court doing everything in its power to require the hardest road possible to righting constitutional wrongs, this is going to take a lot of time and money from regular folks to fight and to hopefully -- at some point -- stop this abuse of power.</p>
]]></description><pubDate>Fri, 30 Jan 2026 22:50:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=46831053</link><dc:creator>gortok</dc:creator><comments>https://news.ycombinator.com/item?id=46831053</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46831053</guid></item><item><title><![CDATA[New comment by gortok in "Waymo robotaxi hits a child near an elementary school in Santa Monica"]]></title><description><![CDATA[
<p>For me, the policy question I want answered is this: if this were a human driver, we would have a clear person to sue for liability and damages. With a computer, who is ultimately responsible when someone sues for compensation? Is it the company? An officer of the company? This creates a situation where a company can afford to bury litigants in costs just to sue, whereas a private driver would lean on their insurance.</p>
]]></description><pubDate>Thu, 29 Jan 2026 15:41:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=46811650</link><dc:creator>gortok</dc:creator><comments>https://news.ycombinator.com/item?id=46811650</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46811650</guid></item><item><title><![CDATA[New comment by gortok in "Apple to soon take up to 30% cut from all Patreon creators in iOS app"]]></title><description><![CDATA[
<p>“Growth is what makes a cell a cell.”<p>Until it turns into cancer because of unrestrained growth.<p>Like it or not, capitalism is part of an ecosystem. We’ve been “educated” to believe that unrestrained growth in profits is what makes capitalism work, and yet day after day there are fresh examples of how our experience as consumers has gotten worse under capitalism because of the idea that profits should forever be growing.</p>
]]></description><pubDate>Thu, 29 Jan 2026 13:44:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=46810101</link><dc:creator>gortok</dc:creator><comments>https://news.ycombinator.com/item?id=46810101</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46810101</guid></item><item><title><![CDATA[New comment by gortok in "ICE and Palantir: US agents using health data to hunt illegal immigrants"]]></title><description><![CDATA[
<p>ICE uses administrative warrants; and while administrative warrants do not allow for seizures inside a home, see my comment about the legal argument of “you can’t suppress the body” for why there’s not a whole lot that can be done if they do decide to kick down your door. The latest Serious Trouble podcast goes into this at the 12 minute mark. <a href="https://www.serioustrouble.show/p/120-days" rel="nofollow">https://www.serioustrouble.show/p/120-days</a><p>In this case the story didn’t make it clear whether or not they even had an administrative warrant. I’d be interested to find out if they did.</p>
]]></description><pubDate>Thu, 29 Jan 2026 03:40:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=46805529</link><dc:creator>gortok</dc:creator><comments>https://news.ycombinator.com/item?id=46805529</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46805529</guid></item><item><title><![CDATA[New comment by gortok in "Microsoft forced me to switch to Linux"]]></title><description><![CDATA[
<p>I want to switch to Linux on my EOL, originally-built-for-gaming Windows 10 rig. It was “new” in 2016, so I hold out hope that there will be few compatibility issues. My biggest concern is being able to play my library of Steam games on it. The other problem is that the last time I tried to put Linux on that machine, I attempted a dual boot, and at the time UEFI did not play well with dual booting. I don’t know if that’s gotten better, but as of now I wouldn’t be dual booting anyway, so conceivably it wouldn’t be an issue.</p>
]]></description><pubDate>Wed, 28 Jan 2026 15:42:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=46796798</link><dc:creator>gortok</dc:creator><comments>https://news.ycombinator.com/item?id=46796798</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46796798</guid></item></channel></rss>