<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: BoxFour</title><link>https://news.ycombinator.com/user?id=BoxFour</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 07 Apr 2026 10:18:56 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=BoxFour" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by BoxFour in "A rogue AI led to a serious security incident at Meta"]]></title><description><![CDATA[
<p>“Heads I win, tails you lose” as a business concept has been written about quite a bit.<p>“The Gervais Principle” is an oft-cited example.</p>
]]></description><pubDate>Thu, 19 Mar 2026 22:39:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47447409</link><dc:creator>BoxFour</dc:creator><comments>https://news.ycombinator.com/item?id=47447409</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47447409</guid></item><item><title><![CDATA[New comment by BoxFour in "The changing goalposts of AGI and timelines"]]></title><description><![CDATA[
<p>> There was no contract, the government wanted to have a contract where they'd be able to use the tool to violate privacy rights of its citizens and issue kill orders without a human present and the company said no.<p>So the contract process worked. The seller wanted certain clauses, the buyer rejected them, and the deal didn’t happen.<p>Setting aside the supply chain risk designation, which I already said was an extreme overreaction, this is basically how it’s supposed to work.<p>> The government shouldn't be able to coerce a business to do whatever it wants.<p>Governments coerce businesses all the time to do what the government wants. Taxes are the obvious example, but there are many others, like OFAC sanctions lists or even just regular old business regulations.<p>It mostly works because we rely on governments to use that power wisely, and to use it in a way that represents the wishes of the populace. Clearly that assumption is being tested with the current administration, and especially in this particular situation, but the government coerces businesses to do what it wants <i>all the time</i> and we often see it as a good thing.</p>
]]></description><pubDate>Mon, 09 Mar 2026 00:21:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47303209</link><dc:creator>BoxFour</dc:creator><comments>https://news.ycombinator.com/item?id=47303209</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47303209</guid></item><item><title><![CDATA[New comment by BoxFour in "AI doesn't replace white collar work"]]></title><description><![CDATA[
<p>It’s pretty obvious that professional photography takes real skill if you’ve ever seen the disaster that follows when someone cuts costs by hiring an amateur photographer for a major event like a wedding while still expecting professional results (the mismatch in expectations being the key cause of the disaster).<p>I’m not saying it turns out badly 100% of the time, but it’s easy to forget how much skill is involved because good professionals make it look effortless. When the skill isn’t there, though, and you're used to seeing only professional photos, it becomes very obvious (and again, that's perfectly fine if you're not expecting professional photography).</p>
]]></description><pubDate>Sun, 08 Mar 2026 19:51:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47300642</link><dc:creator>BoxFour</dc:creator><comments>https://news.ycombinator.com/item?id=47300642</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47300642</guid></item><item><title><![CDATA[New comment by BoxFour in "The changing goalposts of AGI and timelines"]]></title><description><![CDATA[
<p>They don’t even need to do that. Elon is almost certainly pushing Grok to them as hard as possible right now, and it’s not like this administration is especially concerned with running a fair procurement process.<p>So it’s probably some mix of two things:<p>1) A punitive “bend the knee to us or we’ll destroy you,” which fits their track record.<p>2) Skepticism that Grok is actually as strong as the benchmarks suggest, which is also a pretty reasonable possibility.</p>
]]></description><pubDate>Sun, 08 Mar 2026 19:45:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47300555</link><dc:creator>BoxFour</dc:creator><comments>https://news.ycombinator.com/item?id=47300555</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47300555</guid></item><item><title><![CDATA[New comment by BoxFour in "The changing goalposts of AGI and timelines"]]></title><description><![CDATA[
<p>Sure, I said as such:<p>> Of course, the reaction is wildly out of proportion. A normal response would just be to stop doing business with the company and move on. Labeling them a supply chain risk is an extreme response.</p>
]]></description><pubDate>Sun, 08 Mar 2026 19:10:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47300121</link><dc:creator>BoxFour</dc:creator><comments>https://news.ycombinator.com/item?id=47300121</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47300121</guid></item><item><title><![CDATA[New comment by BoxFour in "The changing goalposts of AGI and timelines"]]></title><description><![CDATA[
<p>Sure, that’s why I said "on its face." This administration is obviously very different from most.<p>I don’t think Anthropic is wrong to include that clause with this particular administration, and I doubt the administration is internally framing the issue the way I did rather than defaulting to simple authoritarian instincts.<p>But a more reasonable administration could raise the same concern, and I think I would agree with them.</p>
]]></description><pubDate>Sun, 08 Mar 2026 18:36:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47299782</link><dc:creator>BoxFour</dc:creator><comments>https://news.ycombinator.com/item?id=47299782</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47299782</guid></item><item><title><![CDATA[New comment by BoxFour in "The changing goalposts of AGI and timelines"]]></title><description><![CDATA[
<p>> It’s not obvious that the government should have to power to overwrite this<p>The government shouldn’t be able to set the terms of its contracts with private companies and walk away if those terms aren’t acceptable? That seems like a stretch.<p>The constitution is a wildly different premise from government contracting with private companies.</p>
]]></description><pubDate>Sun, 08 Mar 2026 18:33:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47299754</link><dc:creator>BoxFour</dc:creator><comments>https://news.ycombinator.com/item?id=47299754</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47299754</guid></item><item><title><![CDATA[New comment by BoxFour in "The changing goalposts of AGI and timelines"]]></title><description><![CDATA[
<p>> the government will come knocking anyway.<p>Dario has even said something along these lines at one point: As the technology matures, it’s very possible the government either nationalizes or semi-nationalizes companies like Anthropic.<p>That doesn’t seem out of the realm of possibility if they can’t land on a relationship similar to existing defense contractors like Raytheon, where these kinds of discussions don't seem to happen.</p>
]]></description><pubDate>Sun, 08 Mar 2026 18:26:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47299669</link><dc:creator>BoxFour</dc:creator><comments>https://news.ycombinator.com/item?id=47299669</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47299669</guid></item><item><title><![CDATA[New comment by BoxFour in "The changing goalposts of AGI and timelines"]]></title><description><![CDATA[
<p>I don't think it's about lethal autonomy specifically as much as it's just about government autonomy period. They don’t think private companies should have any veto power over how the government uses some technology they're provided.<p>On its face that’s not a crazy stance: Governments are meant to represent the public, while private companies obviously aren't. I think it’s somewhat understandable why the government might reject that kind of "we know better than you" type of clause.<p>Of course, the reaction is wildly out of proportion. A normal response would just be to stop doing business with the company and move on. Labeling them a supply chain risk is an extreme response.</p>
]]></description><pubDate>Sun, 08 Mar 2026 18:06:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=47299489</link><dc:creator>BoxFour</dc:creator><comments>https://news.ycombinator.com/item?id=47299489</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47299489</guid></item><item><title><![CDATA[New comment by BoxFour in "How Big Diaper absorbs billions of extra dollars from American parents"]]></title><description><![CDATA[
<p>> Modern American middle class norms are that babies cost a fortune.<p>> The best nutrition, daycare, early childhood learning, classes, tuition, etc. Extreme expenses.<p>Daytime childcare isn’t really optional in the first few years. Our system assumes both parents will work. In many metro areas childcare alone can take close to half of a median household income, and daycares are even pretty notoriously low-margin businesses. Moving somewhere cheaper is possible, but that often means fewer job opportunities and lower earning potential.<p>The result is a tough set of choices for Americans looking to have children:<p>1) Earn well above the median so childcare costs are manageable (obviously not an option for everyone)<p>2) Accept a massive drop in your standard of living after having kids, possibly to the point of impoverishment<p>3) Decide not to have children<p>Maybe that’s the part of the system worth questioning first.<p>EDIT: And yes, I know some countries or municipalities try to address this. No, it’s not the only reason birth rates are low in America. Communal support for childcare is one, but not the only, necessary component of a growing society.</p>
]]></description><pubDate>Sun, 08 Mar 2026 15:00:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47297871</link><dc:creator>BoxFour</dc:creator><comments>https://news.ycombinator.com/item?id=47297871</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47297871</guid></item><item><title><![CDATA[New comment by BoxFour in "Show HN: Poppy – A simple app to stay intentional with relationships"]]></title><description><![CDATA[
<p>> The copy is not the product<p>I think this is an illuminating statement from you, so I’m going to just explain that I (and likely many of the people responding to you) vehemently disagree: The copy is an extremely important part of a product like this.<p>It’s unlikely we’ll see eye to eye on this, which is fine, but I would encourage you to do some reflection on that. I’ll certainly be reflecting on what might’ve led you to your position as well.</p>
]]></description><pubDate>Thu, 05 Mar 2026 16:52:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47264009</link><dc:creator>BoxFour</dc:creator><comments>https://news.ycombinator.com/item?id=47264009</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47264009</guid></item><item><title><![CDATA[New comment by BoxFour in "Show HN: Poppy – A simple app to stay intentional with relationships"]]></title><description><![CDATA[
<p>I'll bite:<p>Copy can signal that a real person spent time on the details and cared about the product. Auto-correct and speech-to-text still carry that idea.<p>Even boring corporate PR language communicates something. It says the company wants to project stability and predictability, which can be reassuring. Slightly awkward, unpolished copy also sends a signal. It suggests a person speaking directly, off the cuff, rather than polished corporate messaging, which some people prefer.<p>LLM-generated copy sends a signal too, and not always a good one. To me, it often suggests the author didn’t care enough to think carefully about the message - not even enough to edit something that came out of an LLM.<p>At that point it starts to feel like someone just prompted Claude to build a reminders app with no care or thought put into it, which I could do myself (and personalize the hell out of) if I found the idea valuable at all. Maybe that's an unfair first impression! But it's not a crazy one given how quickly the cost of code is approaching zero.</p>
]]></description><pubDate>Thu, 05 Mar 2026 13:32:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47261382</link><dc:creator>BoxFour</dc:creator><comments>https://news.ycombinator.com/item?id=47261382</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47261382</guid></item><item><title><![CDATA[New comment by BoxFour in "I built Timeframe, our family e-paper dashboard"]]></title><description><![CDATA[
<p>Doesn’t sound that crazy to me.<p>With many newborns there’s a lot of "hurry up and wait" if you’re able to be on parental leave (or just on the weekends). They sleep a ton. Honestly, I’ve never been more caught up on movies or TV than during those overnight newborn shifts with my first child.<p>The real hectic "I have no free time" era, at least for me, came after that: Toddlers require (and you should try to appreciate giving them) all your available time.</p>
]]></description><pubDate>Mon, 23 Feb 2026 14:20:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47122688</link><dc:creator>BoxFour</dc:creator><comments>https://news.ycombinator.com/item?id=47122688</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47122688</guid></item><item><title><![CDATA[New comment by BoxFour in "Uncovering insiders and alpha on Polymarket with AI"]]></title><description><![CDATA[
<p>> neither does the law<p>I don’t think that’s right. Prediction markets fall under CFTC oversight, and the CFTC absolutely has insider trading rules. We just haven’t seen any enforcement yet. Partly because the space is still new, and partly because enforcement priorities have been uneven lately (to put it mildly).<p>The CFTC has already signaled it’s starting to look more closely at insider trading in prediction markets. It's almost certainly just a matter of time. It's pretty likely a future administration will clamp down on this, if the current one doesn't.</p>
]]></description><pubDate>Sat, 21 Feb 2026 13:46:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47100810</link><dc:creator>BoxFour</dc:creator><comments>https://news.ycombinator.com/item?id=47100810</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47100810</guid></item><item><title><![CDATA[New comment by BoxFour in "Uncovering insiders and alpha on Polymarket with AI"]]></title><description><![CDATA[
<p>> No vigilant insider is making a series of "single market predictions with high accuracy" on the same account.<p>There seem to be quite a few non-vigilant insiders. That's the very premise of the post we're discussing.<p>This is unsurprising to anyone who's seen the various ways people get busted for insider trading in equities.</p>
]]></description><pubDate>Sat, 21 Feb 2026 13:38:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47100729</link><dc:creator>BoxFour</dc:creator><comments>https://news.ycombinator.com/item?id=47100729</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47100729</guid></item><item><title><![CDATA[New comment by BoxFour in "Every company building your AI assistant is now an ad company"]]></title><description><![CDATA[
<p>It’s definitely a strange pitch, because the target audience (the privacy-conscious crowd) is exactly the type who will immediately spot all the issues you just mentioned. It's difficult to think of any privacy-conscious individual who wouldn't want, at a bare minimum, a wake word (and more likely just wouldn't use anything like this, period).<p>The non-privacy-conscious will just use Google/etc.</p>
]]></description><pubDate>Fri, 20 Feb 2026 23:23:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47095478</link><dc:creator>BoxFour</dc:creator><comments>https://news.ycombinator.com/item?id=47095478</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47095478</guid></item><item><title><![CDATA[New comment by BoxFour in "Every company building your AI assistant is now an ad company"]]></title><description><![CDATA[
<p>> Passively listening ambient audio is being treated as something that doesn't need active consent<p>That’s not accurate. There are plenty of states that require everyone involved to consent to a recording of a private conversation. California, for example.<p>Voice assistants today skirt around that because of the wake word, but always-on recording obviously negates that defense.</p>
]]></description><pubDate>Fri, 20 Feb 2026 23:15:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47095399</link><dc:creator>BoxFour</dc:creator><comments>https://news.ycombinator.com/item?id=47095399</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47095399</guid></item><item><title><![CDATA[New comment by BoxFour in "Every company building your AI assistant is now an ad company"]]></title><description><![CDATA[
<p>This strikes me as a pretty weak rationalization for "safe" always-on assistants. Even if the model runs locally, there’s still a serious privacy issue: unwitting bystanders having everything they say recorded.<p>Friends at your house who value their privacy probably won’t feel great knowing you’ve potentially got a transcript of things they said just because they were in the room. Sure, it's still better than also sending everything up to OpenAI, but that doesn’t make it harmless or less creepy.<p>Unless you’ve got super-reliable speaker diarization and can truly ensure only opted-in voices are processed, it’s hard to see how any always-listening setup <i>ever</i> sits well with people who value their privacy.</p>
]]></description><pubDate>Fri, 20 Feb 2026 22:38:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47094977</link><dc:creator>BoxFour</dc:creator><comments>https://news.ycombinator.com/item?id=47094977</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47094977</guid></item><item><title><![CDATA[New comment by BoxFour in ""Token anxiety", a slot machine by any other name"]]></title><description><![CDATA[
<p>It would never show up as some explicit rule or document. It just sort of happens when a few things line up: execs start off-handedly praising 996, stack ranking is still a thing, and the job market is bad enough that getting fired feels genuinely dangerous.<p>It starts with people who feel they’ve got more to lose (like those supporting a family) working extra to avoid looking like a low performer, whether that fear is reasonable or not. People aren’t perfectly rational, and job-loss anxiety makes them push harder than they otherwise would. Especially now, when "pushing harder" might just mean sending chat messages to claude during your personal time.<p>Totally anecdotal (strike 1), and I'm at a FAANG which is definitely not the median tech job (strike 2), but it’s become pretty normal for me to come back Monday to a pile of messages sent by peers over the weekend. A couple years ago even that was extremely unusual; even if people were working on the weekend they at least kept up a facade that they weren't.</p>
]]></description><pubDate>Mon, 16 Feb 2026 22:06:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=47040977</link><dc:creator>BoxFour</dc:creator><comments>https://news.ycombinator.com/item?id=47040977</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47040977</guid></item><item><title><![CDATA[New comment by BoxFour in ""Token anxiety", a slot machine by any other name"]]></title><description><![CDATA[
<p>I wish the author had stuck to the salient point about work/life balance instead of drifting into the gambling tangent, because the core message is actually more unsettling. With the tech job market being rough and AI tools making it so frictionless to produce real output, the line between work time and personal time is basically disappearing.<p>To the bluesky poster's point: Pulling out a laptop at a party feels awkward for most; pulling out your phone to respond to claude barely registers. That’s what makes it dangerous: It's so easy to feel some sense of progress now. Even when you’re tired and burned out, you can still make progress by sending off a quick message. The quality will, of course, slip over time, but far less than it did previously.<p>Add in a weak labor market and people feel pressure to stay working all the time. Partly because everyone else is (and nobody wants to be at the bottom of the stack ranking), and partly because it’s easier than ever to avoid hitting a wall with just "one more message". Steve Yegge's point about AI vampires rings true to me: A lot of coworkers I’ve talked to feel burned out after just a few months of going hard with AI tools. Those same people are the ones working nights and weekends because "I can just have a back-and-forth with Claude while I'm watching a show now".<p>The likely result is the usual pattern for increases in labor productivity: People who can’t keep up get pushed out, people who can keep up stay stuck grinding, and companies get to claim the productivity gains while reducing expenses. Steve's suggestion of shorter workdays sounds nice in theory, but I would bet significant amounts of money the 40-hour work week remains the standard for a long time to come.</p>
]]></description><pubDate>Mon, 16 Feb 2026 20:30:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=47039921</link><dc:creator>BoxFour</dc:creator><comments>https://news.ycombinator.com/item?id=47039921</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47039921</guid></item></channel></rss>