<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: ADeerAppeared</title><link>https://news.ycombinator.com/user?id=ADeerAppeared</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 21 Apr 2026 17:06:16 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=ADeerAppeared" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by ADeerAppeared in "Introducing deep research"]]></title><description><![CDATA[
<p>> If you care then you'll fact check before publishing.<p>Doing a proper fact check is as much work as doing the entire research by hand, and therefore, this system is useless to anyone who cares about the result being correct.<p>> I don't see why this changes.<p>And because of the above this system should not exist.</p>
]]></description><pubDate>Mon, 03 Feb 2025 02:22:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=42914243</link><dc:creator>ADeerAppeared</dc:creator><comments>https://news.ycombinator.com/item?id=42914243</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42914243</guid></item><item><title><![CDATA[New comment by ADeerAppeared in "Introducing deep research"]]></title><description><![CDATA[
<p>> The proof is in the fact that it flies, not what it is constructed from.<p>And LLMs do not.<p>> "But it looks like reasoning to me"<p>My condolences. You should go see a doctor about your inability to count the number of 'R's in a word.</p>
]]></description><pubDate>Mon, 03 Feb 2025 02:20:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=42914232</link><dc:creator>ADeerAppeared</dc:creator><comments>https://news.ycombinator.com/item?id=42914232</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42914232</guid></item><item><title><![CDATA[New comment by ADeerAppeared in "Gradual Disempowerment: How Even Incremental AI Progress Poses Existential Risks"]]></title><description><![CDATA[
<p>It's called absurd not because it's not understood, or because there aren't technical counterarguments to be made.<p>It's called absurd because it does not deserve the effort of writing out those arguments.<p>> Here's some names in the field. 15/19 think the risk is significant<p>A list that is largely a pile of clowns and morons, many with direct financial interests in amplifying the "danger"/power of AI.<p>This is why the doomsday cult is not taken seriously.</p>
]]></description><pubDate>Mon, 03 Feb 2025 00:53:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=42913633</link><dc:creator>ADeerAppeared</dc:creator><comments>https://news.ycombinator.com/item?id=42913633</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42913633</guid></item><item><title><![CDATA[New comment by ADeerAppeared in "Ask HN: What is interviewing like now with everyone using AI?"]]></title><description><![CDATA[
<p>> The employer sets the terms of the interview. If you don’t like them, don’t apply.<p>What you're missing here is that this is an individual's answer to a systemic problem. You don't apply when it's _one_ obnoxious employer.<p>When it's standard practice across the entire industry, we have a problem.<p>> submitting a fraudulent resume because you disagree with the required qualifications.<p>This is already worryingly common practice because employers lie about the required qualifications.<p>Honesty gets your resume shredded before a human even looks at it. And employers' refusal to address that situation is just making everything worse and worse.</p>
]]></description><pubDate>Mon, 03 Feb 2025 00:50:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=42913610</link><dc:creator>ADeerAppeared</dc:creator><comments>https://news.ycombinator.com/item?id=42913610</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42913610</guid></item><item><title><![CDATA[New comment by ADeerAppeared in "Introducing deep research"]]></title><description><![CDATA[
<p>I'm sorry but what the fuck is this product pitch?<p>Anyone who's done any kind of substantial document research knows that it's a <i>NIGHTMARE</i> of chasing loose ends & citogenesis.<p>Trusting an LLM to critically evaluate every source and to be deeply suspicious of any unproven claim is a ridiculous thing to do. These are not hard reasoning systems; they are probabilistic language models.</p>
]]></description><pubDate>Mon, 03 Feb 2025 00:44:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=42913555</link><dc:creator>ADeerAppeared</dc:creator><comments>https://news.ycombinator.com/item?id=42913555</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42913555</guid></item><item><title><![CDATA[New comment by ADeerAppeared in "CDC: Unpublished manuscripts mentioning certain topics must be pulled or revised"]]></title><description><![CDATA[
<p>> It isn't banned<p>Yet. They have clearly voiced their desire for this.<p>> it just won't be state funded.<p>This isn't just "The government is not funding research into this", this is the government maintaining a list of thoughtcrime and banning researchers from using words.</p>
]]></description><pubDate>Sun, 02 Feb 2025 16:51:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=42909805</link><dc:creator>ADeerAppeared</dc:creator><comments>https://news.ycombinator.com/item?id=42909805</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42909805</guid></item><item><title><![CDATA[New comment by ADeerAppeared in "Gradual Disempowerment: How Even Incremental AI Progress Poses Existential Risks"]]></title><description><![CDATA[
<p>In what sense?<p>The "Mugging" going on is that "AI safety" folks proclaim that AI might have an "extinction risk" or infinite-negative outcome.<p>And they proclaim that therefore, we should be devoting considerable resources (i.e. on the scale of billions) to avoiding that even if the actual chance of this scenario is minimal to astronomically small. "ChatGPT won't kill us now, but in 1000 years it might" kinda shit. For some this ends with "and therefore you need to approve my research funding application"; for others (including Altman) it has mutated into "We must build AGI first because we're the only people who can do it without destroying the world".<p>The problem is that this is absurd. They're focussing on a niche scenario whilst ignoring horrific problems caused in the here and now.<p>"Skynet might happen in Y3K" is no excuse to flood the current internet with AI slop, create a sizeable economic bubble, seek to replace entire economic sectors with outsourced "Virtual" employees, and perhaps most ethically concerning of all: create horrific CSAM torment nexuses so bad that even near-destitute gig economy workers in Kenya walk off the job.<p>Yet "AI safety" folks would have you believe it is.</p>
]]></description><pubDate>Sat, 01 Feb 2025 20:53:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=42902158</link><dc:creator>ADeerAppeared</dc:creator><comments>https://news.ycombinator.com/item?id=42902158</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42902158</guid></item><item><title><![CDATA[New comment by ADeerAppeared in "Gradual Disempowerment: How Even Incremental AI Progress Poses Existential Risks"]]></title><description><![CDATA[
<p>> It feels like they always anthropomorphize AI as some sort of "God".<p>It's not <i>like</i> that. It <i>is</i> that. They're playing Pascal's Wager against an imaginary future god.<p>The most maddening part is that the obvious problem with that has been well identified by those circles, dubbed "Pascal's Mugging", but they're still rambling on about "extinction risk" whilst disregarding the very material ongoing issues AI causes.<p>They're all clowns whose opinions are to be immediately discarded.</p>
]]></description><pubDate>Sat, 01 Feb 2025 17:03:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=42899852</link><dc:creator>ADeerAppeared</dc:creator><comments>https://news.ycombinator.com/item?id=42899852</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42899852</guid></item><item><title><![CDATA[New comment by ADeerAppeared in "Add "fucking" to your Google searches to neutralize AI summaries"]]></title><description><![CDATA[
<p>> The same argument applies to essentially all technology, like a computer.<p>Why yes, it does.<p>Even setting aside most of the AI hype: yes, automation is in fact quite sinister if you do not go out of your way to deal with the downsides. Putting people out of a job is bad, actually.<p>Yes. The industrial revolution was a great boon to humanity that drastically improved quality of living and wealth. It also created horrific torment nexuses like mechanical looms into which we sent small children to get maimed.<p>And we absolutely could've had the former without the latter; child labour laws handily proved it was possible, and should have been implemented far sooner.</p>
]]></description><pubDate>Sat, 01 Feb 2025 01:50:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=42894831</link><dc:creator>ADeerAppeared</dc:creator><comments>https://news.ycombinator.com/item?id=42894831</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42894831</guid></item><item><title><![CDATA[New comment by ADeerAppeared in "Instagram and Facebook Blocked and Hid Abortion Pill Providers' Posts"]]></title><description><![CDATA[
<p>This is a hilarious claim given that none of the current action is going through the legislative path, and the tech billionaires freely bend the knee to Trump even before the inauguration.<p>What's even the material point here? That "the left" pierced the taboo on speech censorship? Trump's currently wiping his ass with the separation of powers enshrined in the constitution. He does not care about taboo.</p>
]]></description><pubDate>Fri, 31 Jan 2025 21:13:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=42892082</link><dc:creator>ADeerAppeared</dc:creator><comments>https://news.ycombinator.com/item?id=42892082</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42892082</guid></item><item><title><![CDATA[New comment by ADeerAppeared in "NSF starts vetting all grants to comply with executive orders"]]></title><description><![CDATA[
<p>> Your stereotypes belie a lack of familiarity with researchers<p>I was referencing what the current Trump administration deems "meritocratic" and seeks to "return" to; their policy changes are in direct response and opposition to the demographics you describe.</p>
]]></description><pubDate>Fri, 31 Jan 2025 14:02:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=42887747</link><dc:creator>ADeerAppeared</dc:creator><comments>https://news.ycombinator.com/item?id=42887747</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42887747</guid></item><item><title><![CDATA[New comment by ADeerAppeared in "Hedge fund warns White House is inflating crypto bubble that could wreak havoc"]]></title><description><![CDATA[
<p>Look at Albania for why this is bad:<p><a href="https://en.wikipedia.org/wiki/Pyramid_schemes_in_Albania" rel="nofollow">https://en.wikipedia.org/wiki/Pyramid_schemes_in_Albania</a><p>Crypto is different from things like housing (where the "bubble" is merely artificially restricted supply driving the price up, so it's a real price increase) or the stock market (where the fundamentals are real enough that a crash in e.g. AI stocks will hurt, but not be systemically destructive: Nvidia, Microsoft, etc. are all still going to exist as very profitable companies).</p>
]]></description><pubDate>Fri, 31 Jan 2025 12:51:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=42887157</link><dc:creator>ADeerAppeared</dc:creator><comments>https://news.ycombinator.com/item?id=42887157</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42887157</guid></item><item><title><![CDATA[New comment by ADeerAppeared in "Hedge fund warns White House is inflating crypto bubble that could wreak havoc"]]></title><description><![CDATA[
<p>> Sure you can say that crypto is only bubble, but that doesn't really add any information. The bubble can't be "inflating" since there's no ratio to a non-bubble version.<p>The core point is that there is (essentially^[1]) nothing keeping the price up besides speculator interest. That means the price can collapse catastrophically. Saying crypto is a bubble is useful because it has the defining traits and dangers of a bubble.<p>[1]: The big exception here is that a lot of the price is also kept afloat by the absolutely ridiculous amount of financial fraud in this ecosystem. Most of the "dollars" chasing Bitcoin are fake, and it's still unclear how insolvent the big stablecoins are.<p>The little exception is that there is a minimum floor; Cryptocurrencies have some utility as a payment system, which would give them some value. But the market rate for consumer payments is effectively if not literally zero, well below the amounts required to operate the mining/staking systems these currencies require.</p>
]]></description><pubDate>Fri, 31 Jan 2025 12:47:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=42887133</link><dc:creator>ADeerAppeared</dc:creator><comments>https://news.ycombinator.com/item?id=42887133</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42887133</guid></item><item><title><![CDATA[New comment by ADeerAppeared in "NSF starts vetting all grants to comply with executive orders"]]></title><description><![CDATA[
<p>>Why should somebody researching e.g. fusion for the Department of Energy also need to create a Promoting Inclusive and Equitable Research (PIER) plan, to even apply?<p>Why?<p>Because a homogeneous culture of researchers is less effective.<p>Because you are not just doing research on a topic, you are also training the next generation of scientists and field experts.<p>And the implication that the old boys' club of white dudes is intrinsically the best "meritocratic" outcome is ridiculous. The history of science is full of people who had to fight that norm and succeed despite it.<p>> This should greatly reduce the overall bureaucratic nonsense in science and help get back to science simply being science without imposing ideological conformity tests.<p>Sure, sure. Except for the part where they're also censoring any science topics deemed "woke", where all funding now has to meet the president's ideological conformity test on subject and staff as well.</p>
]]></description><pubDate>Fri, 31 Jan 2025 12:41:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=42887097</link><dc:creator>ADeerAppeared</dc:creator><comments>https://news.ycombinator.com/item?id=42887097</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42887097</guid></item><item><title><![CDATA[New comment by ADeerAppeared in "Launch HN: Karsa (YC W25) – Buy and save stablecoins internationally"]]></title><description><![CDATA[
<p>> we do KYC<p>And what, then, is stopping governments from simply demanding you hand over a list of your customers? They will seek to enforce those currency controls you are subverting.<p>Your entire sales pitch here is based on a lack of transparency to "evil oppressive governments", whereas the US government (at least, once it gets its shit together again in a few years) will just delete your company for helping the North Koreans evade sanctions if you don't have quite robust AML.</p>
]]></description><pubDate>Thu, 30 Jan 2025 20:02:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=42881582</link><dc:creator>ADeerAppeared</dc:creator><comments>https://news.ycombinator.com/item?id=42881582</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42881582</guid></item><item><title><![CDATA[New comment by ADeerAppeared in "Cali's AG Tells AI Companies Almost Everything They're Doing Might Be Illegal"]]></title><description><![CDATA[
<p>Firstly, do note that the original text is talking about <i>use</i> of AI, this one line doesn't seem to be about <i>banning AI</i> so much as <i>banning a use</i>.<p>Secondly:<p>The implied basis here is that AI isn't just a product, it's also a service. This isn't you buying a pencil, it's you commissioning the drawing. Most of these products are cloud based SaaS.<p>And there's also the matter that "it's just a tool" doesn't really apply to foreseeable problems. If a suspicious person shows up out of nowhere buying large quantities of fertilizer, you don't get to go "Well he could be using that fertilizer for anything, not my problem". (This is relevant to AI as pretty much all AI services already have heavy restrictions on their output, this isn't a bunch of researchers publishing a paper and having bad actors implement their own AI based on that. We have companies openly advertising deepfake services.)</p>
]]></description><pubDate>Wed, 29 Jan 2025 19:18:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=42869758</link><dc:creator>ADeerAppeared</dc:creator><comments>https://news.ycombinator.com/item?id=42869758</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42869758</guid></item><item><title><![CDATA[New comment by ADeerAppeared in "OpenAI says it has evidence DeepSeek used its model to train competitor"]]></title><description><![CDATA[
<p>Their way of squaring this circle has always been to whine about "AI safety". (the cultish doomsday shit, not actual harms from AI)<p>Sam Altman will proclaim that he alone is qualified to build AI and that everyone else should be tied down by regulation.<p>And it should always be said that this is, of course, utterly ridiculous. Sam Altman literally got fired over this, has an extensive reputation as a shitweasel, and OpenAI's constant flouting and breaking of rules and social norms indicates they CANNOT be trusted.</p>
]]></description><pubDate>Wed, 29 Jan 2025 15:16:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=42865865</link><dc:creator>ADeerAppeared</dc:creator><comments>https://news.ycombinator.com/item?id=42865865</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42865865</guid></item><item><title><![CDATA[New comment by ADeerAppeared in "Knowing less about AI makes people more open to using it"]]></title><description><![CDATA[
<p>> (Don’t answer that.)<p>It is nevertheless important to say out loud:<p>> Why is getting people to use AI seen as a good in itself?<p>Because user counts pump up the stock price. And that is all AI has.<p>Whether you believe the claims that inference is profitable or not (and there are good reasons to distrust them), AI does not live up to the financial hype.<p>AI cannot stand on its own merits. It's not acceptable to let history run its course and let the AI skeptics be shown wrong in due time, because that would dampen the hype, and perhaps those skeptics aren't so wrong. The public can't be educated into a healthy skepticism of AI, because then they wouldn't use it enough.<p>It's readily obvious that the emperor has no clothes. The actions of the companies and executives involved betray their statements about how great AI is.<p>AI is forced into products, at deeply subsidized prices. You wouldn't do that if the tech were that big a deal. Apple charged premium prices for the iPhone.<p>Benchmarks are aggressively cheated. OpenAI funding FrontierMath, and only giving a verbal agreement after having already broken so many of those, is a joke. If the systems actually worked as promised there would be no reason for this mess, and every reason in the world to gather accurate data on the generality of the intelligence.<p>And biggest of all: this entire mess has the implied framing of the Manhattan Project. That it's all a big race towards AGI, and whoever develops AGI will win capitalism forever. So important that they're getting support from the US government with their "Stargate" project. And until rather recently, everyone was making lots of noise about AI safety and the world-destroying dangers of letting someone else develop AGI.<p>In 1942 Georgii Flyorov figured out the Manhattan Project's existence from the sudden silence in nuclear fission research.<p>Today, despite stakes that are proclaimed to be even higher, all the big players will not shut up about their accomplishments. Everything is aggressively published and propagandized. Every single fart an AI model makes is spun into a research paper. You might as well mail the model weights directly to Beijing.<p>Those are not the actions of companies trying to win an R&D race. Those are the actions of companies pushing up their stock price by any means necessary.</p>
]]></description><pubDate>Mon, 27 Jan 2025 03:18:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=42837037</link><dc:creator>ADeerAppeared</dc:creator><comments>https://news.ycombinator.com/item?id=42837037</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42837037</guid></item><item><title><![CDATA[New comment by ADeerAppeared in "US Department of Labor to Cease and Desist All Investigation and Enforcement"]]></title><description><![CDATA[
<p>He, and most of the executives like him, are just being idiots about this particular thing.<p>Our field is notoriously hard to quantify in terms of productivity. The metrics (Lines of Code, Velocity, etc.) are all garbage. Rather than learning how to manage what they can't measure, they just latch onto hours of work.<p>They have no clue how to make things more efficient, so they just demand longer hours.<p>> I'm not sure there is any consideration for a work life balance.<p>There isn't. The entire idea of working long hours & long weeks immediately falls apart under the slightest interrogation.<p>Anyone who's done serious knowledge work knows it's physically tiring to actually use your brain, so much so that even doing it at full 100% throttle for merely 8 hours is farcical. Anyone can work for 12 hours a day if they're sitting on their ass barely doing anything.<p>This also readily shows in the stats. Japan ain't a productivity superpower. Despite the praise of Indian H1Bers, India sure ain't a productivity superpower either.<p>Long hours are just pointless virtue signalling. "Look how loyal to the company I am." Something the western world was supposed to be better than.<p>> For a young person starting out, that may be acceptable.<p>It's just desperation. With cost of living so high, many don't have a choice. And this field's allergy to unionization hasn't made it better.<p>The few who proudly proclaim their ability to "work 80 hour weeks" are delusional.</p>
]]></description><pubDate>Sat, 25 Jan 2025 22:35:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=42825604</link><dc:creator>ADeerAppeared</dc:creator><comments>https://news.ycombinator.com/item?id=42825604</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42825604</guid></item><item><title><![CDATA[New comment by ADeerAppeared in "US Department of Labor to Cease and Desist All Investigation and Enforcement"]]></title><description><![CDATA[
<p>> Some states (CA, NY, IL, WA, OR come to mind)<p>Yes. My concern is the opposite here; a lot of red states are gleefully following every whim of the Trump Administration, and we can expect their state-level equivalent rules (insofar as they exist, which a fair number of states already lack) to be revoked soon.<p>> The fact of the matter is, most employers aren’t naive and have always found legal proxy rationales to discriminate and prevent hiring or fire someone.<p>It's certainly a problem.<p>But secret and/or implied agreements to discriminate are less effective, and subject to obstruction by wilfully ignorant staff. When put into writing, you get things like Eric Schmidt creating evidence for the tech anti-poaching cartel of the 2000s.<p>Letting companies get away with explicit & stated policies is much worse.<p>> for example in this thread, by not calling their white workers “retarded” (not aware of this happening but it seems reasonable).<p>Info on that particular reference: <a href="https://www.independent.co.uk/politics/elon-musk-americans-visas-vivek-ramaswamy-b2670592.html" rel="nofollow">https://www.independent.co.uk/politics/elon-musk-americans-v...</a></p>
]]></description><pubDate>Sat, 25 Jan 2025 22:22:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=42825496</link><dc:creator>ADeerAppeared</dc:creator><comments>https://news.ycombinator.com/item?id=42825496</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42825496</guid></item></channel></rss>