<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: shaldengeki</title><link>https://news.ycombinator.com/user?id=shaldengeki</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 28 Apr 2026 21:18:04 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=shaldengeki" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by shaldengeki in "Migrating to Bazel symbolic macros"]]></title><description><![CDATA[
<p>> Is anyone really using bazel outside Google in any meaningful capacity?<p>Yes. For instance, Stripe uses Bazel internally for ~all of its builds. <a href="https://stripe.com/blog/fast-secure-builds-choose-two" rel="nofollow">https://stripe.com/blog/fast-secure-builds-choose-two</a><p>For other users, you might peruse the Bazelcon 2025 schedule; the conference took place earlier this month: <a href="https://bazelcon2025.sched.com/" rel="nofollow">https://bazelcon2025.sched.com/</a></p>
]]></description><pubDate>Tue, 25 Nov 2025 08:41:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=46043702</link><dc:creator>shaldengeki</dc:creator><comments>https://news.ycombinator.com/item?id=46043702</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46043702</guid></item><item><title><![CDATA[New comment by shaldengeki in "Are Blue Light Blocking Glasses a $3B Scam? [video]"]]></title><description><![CDATA[
<p>I appreciate your position, but the record is pretty clear: his actions meet Berkeley's own definition of research misconduct.</p>
]]></description><pubDate>Thu, 25 Sep 2025 05:54:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=45369679</link><dc:creator>shaldengeki</dc:creator><comments>https://news.ycombinator.com/item?id=45369679</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45369679</guid></item><item><title><![CDATA[New comment by shaldengeki in "Are Blue Light Blocking Glasses a $3B Scam? [video]"]]></title><description><![CDATA[
<p>You may be interested to read that the Why We Sleep guy committed research misconduct in the book. He's totally unrepentant about it.<p><a href="https://statmodeling.stat.columbia.edu/2020/03/24/why-we-sleep-a-tale-of-institutional-failure/" rel="nofollow">https://statmodeling.stat.columbia.edu/2020/03/24/why-we-sle...</a></p>
]]></description><pubDate>Wed, 24 Sep 2025 03:25:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=45355947</link><dc:creator>shaldengeki</dc:creator><comments>https://news.ycombinator.com/item?id=45355947</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45355947</guid></item><item><title><![CDATA[New comment by shaldengeki in "US Visa Applications Must Be Submitted from Country of Residence or Nationality"]]></title><description><![CDATA[
<p>HN's moderation team has been pretty clear that generated comments aren't welcome here. <a href="https://hn.algolia.com/?dateRange=all&page=0&prefix=true&query=author%3Adang%20generated%20comments&sort=byDate&type=comment" rel="nofollow">https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...</a></p>
]]></description><pubDate>Sun, 07 Sep 2025 17:43:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=45160421</link><dc:creator>shaldengeki</dc:creator><comments>https://news.ycombinator.com/item?id=45160421</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45160421</guid></item><item><title><![CDATA[New comment by shaldengeki in "Fake accounts drove the DeepSeek AI hype and distorted markets"]]></title><description><![CDATA[
<p>I don't see any network analysis on this page. What network analysis do you see?<p>I do see generic statements like "boosting each other", and I see vaguely-drawn lines in the primary diagram with no further explanation, but that hardly counts as network analysis, right?</p>
]]></description><pubDate>Fri, 29 Aug 2025 16:33:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=45066231</link><dc:creator>shaldengeki</dc:creator><comments>https://news.ycombinator.com/item?id=45066231</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45066231</guid></item><item><title><![CDATA[New comment by shaldengeki in "Claim: GPT-5-pro can prove new interesting mathematics"]]></title><description><![CDATA[
<p>Further in the thread, the guy notes that this isn't "new" mathematics - a better proof with tighter bounds was published in April:<p><a href="https://xcancel.com/SebastienBubeck/status/1958198667837329822#m" rel="nofollow">https://xcancel.com/SebastienBubeck/status/19581986678373298...</a></p>
]]></description><pubDate>Mon, 25 Aug 2025 04:11:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=45010120</link><dc:creator>shaldengeki</dc:creator><comments>https://news.ycombinator.com/item?id=45010120</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45010120</guid></item><item><title><![CDATA[New comment by shaldengeki in "AI Eroded Doctors' Ability to Spot Cancer Within Months in Study"]]></title><description><![CDATA[
<p>Should be in the first, not seventh paragraph: this was a study of 19 doctors, who performed ~1400 colonoscopies.</p>
]]></description><pubDate>Wed, 13 Aug 2025 01:22:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=44883675</link><dc:creator>shaldengeki</dc:creator><comments>https://news.ycombinator.com/item?id=44883675</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44883675</guid></item><item><title><![CDATA[New comment by shaldengeki in "More than 1 in 5 Show HN posts are now AI-related, get > half the votes/comments"]]></title><description><![CDATA[
<p>This shows a huge surge starting in 2023. I see you're counting all .ai-TLD domains; how much of the surge does that account for? I think .ai registrations took off starting in 2023, and I wonder whether prior to 2023 we're mostly missing real AI Show HN entries, while afterwards we're mostly catching them.</p>
]]></description><pubDate>Sun, 06 Jul 2025 19:58:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=44483552</link><dc:creator>shaldengeki</dc:creator><comments>https://news.ycombinator.com/item?id=44483552</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44483552</guid></item><item><title><![CDATA[Collections: The American Civil-Military Relationship]]></title><description><![CDATA[
<p>Article URL: <a href="https://acoup.blog/2025/07/04/collections-the-american-civil-military-relationship/">https://acoup.blog/2025/07/04/collections-the-american-civil-military-relationship/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44468707">https://news.ycombinator.com/item?id=44468707</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 04 Jul 2025 23:08:12 +0000</pubDate><link>https://acoup.blog/2025/07/04/collections-the-american-civil-military-relationship/</link><dc:creator>shaldengeki</dc:creator><comments>https://news.ycombinator.com/item?id=44468707</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44468707</guid></item><item><title><![CDATA[New comment by shaldengeki in "I built something that changed my friend group's social fabric"]]></title><description><![CDATA[
<p>I think it's important to clarify that it's just audio and video that are E2EE, not text messages themselves. (You may have meant this, but "channels" was a little ambiguous.)</p>
]]></description><pubDate>Wed, 02 Jul 2025 01:53:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=44439591</link><dc:creator>shaldengeki</dc:creator><comments>https://news.ycombinator.com/item?id=44439591</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44439591</guid></item><item><title><![CDATA[New comment by shaldengeki in "Analyzing a Critique of the AI 2027 Timeline Forecasts"]]></title><description><![CDATA[
<p>Yes, that's correct. The authors themselves are being extremely careful (and, I'd argue, misleading) in their wording. The right way to interpret those words is "this is literally a model that supports our predictions".<p>Here is the primary author of the timelines forecast:<p>> In our website frontpage, I think we were pretty careful not to overclaim. We say that the forecast is our "best guess", "informed by trend extrapolations, wargames, ..." Then in the "How did we write it?" box we basically just say it was written iteratively and informed by wargames and feedback. [...] I don't think we said anywhere that it was backed up by straightforward, strongly empirically validated extrapolations.<p>> In our initial tweet, Daniel said it was a "deeply researched" scenario forecast. This still seems accurate to me, we spent quite a lot of time on it (both the scenario and supplements) and I still think our supplementary research is mostly state of the art, though I can see how people could take it too strongly.<p><a href="https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models?commentId=n88sfbrqpqmJx6KBB" rel="nofollow">https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-cri...</a><p>Here is one staff member at Lightcone, the folks credited with the design work on the website:<p>> I think the actual epistemic process that happened here is something like:<p>> * The AI 2027 authors had some high-level arguments that AI might be a very big deal soon<p>> * They wrote down a bunch of concrete scenarios that seemed like they would follow from those arguments and checked if they sounded coherent and plausible and consistent with lots of other things they thought about the world<p>> * As part of that checking, one thing they checked was whether these scenarios would be some kind of huge break from existing trends, which I do think is a hard thing to do, but is an important thing to pay attention to<p>> The right way to 
interpret the "timeline forecast" sections is not as "here is a simple extrapolation methodology that generated our whole worldview" but instead as a "here is some methodology that sanity-checked that our worldview is not in obvious contradiction to reasonable assumptions about economic growth"<p><a href="https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models?commentId=Kwf3dg4D6ydoifajR" rel="nofollow">https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-cri...</a></p>
]]></description><pubDate>Wed, 25 Jun 2025 01:15:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=44372791</link><dc:creator>shaldengeki</dc:creator><comments>https://news.ycombinator.com/item?id=44372791</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44372791</guid></item><item><title><![CDATA[New comment by shaldengeki in "Analyzing a Critique of the AI 2027 Timeline Forecasts"]]></title><description><![CDATA[
<p>No, you're wrong. They wrote the story before coming up with the model!<p>In fact the model and technical work have basically nothing to do with the short story, aka the part that everyone read. This is pointed out in the critique, where titotal notes that a graph widely disseminated by the authors appears to be generated by a completely different and unpublished model.</p>
]]></description><pubDate>Wed, 25 Jun 2025 00:29:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=44372538</link><dc:creator>shaldengeki</dc:creator><comments>https://news.ycombinator.com/item?id=44372538</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44372538</guid></item><item><title><![CDATA[New comment by shaldengeki in "A deep critique of AI2027s bad timeline models"]]></title><description><![CDATA[
<p>> One of the AI 2027 authors joked to me in the comments on a recent article that “you may not like it but it's what peak AI forecasting performance looks like”. Well, I don’t like it, and if this truly is “peak forecasting”, then perhaps forecasting should not be taken very seriously. Maybe this is because I am a physicist, not a Rationalist. In my world, you generally want models to have strong conceptual justifications or empirical validation with existing data before you go making decisions based off their predictions: this fails at both.<p>This is really, really great. I hope it gets as many eyeballs on it as AI 2027 did.<p>The same post on the author's Substack is here:
<a href="https://titotal.substack.com/p/a-deep-critique-of-ai-2027s-bad-timeline" rel="nofollow">https://titotal.substack.com/p/a-deep-critique-of-ai-2027s-b...</a></p>
]]></description><pubDate>Sat, 21 Jun 2025 00:49:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=44333532</link><dc:creator>shaldengeki</dc:creator><comments>https://news.ycombinator.com/item?id=44333532</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44333532</guid></item><item><title><![CDATA[New comment by shaldengeki in "Former DOGE engineer on his experience working for the cost-cutting unit"]]></title><description><![CDATA[
<p>This strikes me as disingenuous:<p>> I like when my software gets used by a lot of people and people send me nice emails. In this case, people weren't sending me the nicest emails, unfortunately. But they also didn't really know what I was doing. They saw DOGE, weren't a fan of certain things that they were associated with. But I think at the end of the day, like, the role of the U.S. Digital Service is to improve the UX (user experience) of being an American, which is pretty exciting. And anyone who lets me do that, I will try to work for, even if my friends and family aren't huge fans.<p>People weren't mad at him because he was trying to improve UX. They were mad at him because DOGE was doing a bunch of other stuff like "harassing government employees who talked about climate resilience", which he actively contributed to but conveniently omits from the interviews he's been giving recently:<p><a href="https://github.com/slavingia/va/blob/main/eos/analyze_eos.py#L268-L298">https://github.com/slavingia/va/blob/main/eos/analyze_eos.py...</a></p>
]]></description><pubDate>Thu, 05 Jun 2025 02:58:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=44187939</link><dc:creator>shaldengeki</dc:creator><comments>https://news.ycombinator.com/item?id=44187939</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44187939</guid></item><item><title><![CDATA[My Model of Language Models]]></title><description><![CDATA[
<p>Article URL: <a href="https://tomlee.wtf/2025/04/03/a-model-of-language-models/">https://tomlee.wtf/2025/04/03/a-model-of-language-models/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43577610">https://news.ycombinator.com/item?id=43577610</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 04 Apr 2025 02:06:59 +0000</pubDate><link>https://tomlee.wtf/2025/04/03/a-model-of-language-models/</link><dc:creator>shaldengeki</dc:creator><comments>https://news.ycombinator.com/item?id=43577610</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43577610</guid></item><item><title><![CDATA[New comment by shaldengeki in "Move fast, break things: A review of Abundance by Ezra Klein and Derek Thompson"]]></title><description><![CDATA[
<p>This is an extremely minor point, but:<p>> significant parts of our federal government have abandoned key precepts of outcome-driven problem-solving. (Just try navigating the IRS’ Free File tax return tool. I gave up and paid for H&R Block instead.)<p>It seems deeply weird to me to pick this example, of all things, where things are getting better because of outcome-driven problem-solving. (You might also want to consider that "I gave up on Free File" might score you anti-points with your audience.)</p>
]]></description><pubDate>Wed, 02 Apr 2025 23:47:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=43563163</link><dc:creator>shaldengeki</dc:creator><comments>https://news.ycombinator.com/item?id=43563163</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43563163</guid></item><item><title><![CDATA[Advancing Secure and Convenient Government Communications: The Case for Element]]></title><description><![CDATA[
<p>Article URL: <a href="https://element.io/blog/advancing-secure-convenient-government-communications-the-case-for-element/">https://element.io/blog/advancing-secure-convenient-government-communications-the-case-for-element/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43552448">https://news.ycombinator.com/item?id=43552448</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 01 Apr 2025 23:48:37 +0000</pubDate><link>https://element.io/blog/advancing-secure-convenient-government-communications-the-case-for-element/</link><dc:creator>shaldengeki</dc:creator><comments>https://news.ycombinator.com/item?id=43552448</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43552448</guid></item><item><title><![CDATA[New comment by shaldengeki in "FrontierMath Was Funded by OpenAI"]]></title><description><![CDATA[
<p>The title here buries the lede, IMO. Quoting from Epoch AI in the thread:<p>> We were restricted from disclosing the partnership until around the time o3 launched [...] Our contract specifically prevented us from disclosing information about the funding source and the fact that OpenAI has data access to much but not all of the dataset.<p>> Regarding training usage: We acknowledge that OpenAI does have access to a large fraction of FrontierMath problems and solutions, with the exception of a unseen-by-OpenAI hold-out set that enables us to independently verify model capabilities. However, we have a verbal agreement that these materials will not be used in model training.<p>Here, you can read "large fraction" as meaning "everything but the holdout set" - and my understanding is, they haven't disclosed performance on the holdout set.</p>
]]></description><pubDate>Sun, 19 Jan 2025 09:14:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=42755353</link><dc:creator>shaldengeki</dc:creator><comments>https://news.ycombinator.com/item?id=42755353</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42755353</guid></item><item><title><![CDATA[New comment by shaldengeki in "Are Ghost Engineers Real?"]]></title><description><![CDATA[
<p>I don't think this article or the underlying claims are (yet) credible. I'm disappointed that the Washington Post wrote about this at all, and that people I respect a lot have amplified it on Twitter.<p>The claims come from an MBA student graduating from Stanford this year, who plans on starting a developer tooling company. [1] The faculty member (a professor of psychology) who supported this work doesn't comment at all on the claims made by the student; their single quote in the article instead emphasizes how difficult it is to measure productivity.<p>The claims are based on, by their own description, unpublished ongoing research. There's no description anywhere of what data they gathered, what their methods were, or how they assessed accuracy. (For starters: how did they identify who a software engineer was in each of the companies in their dataset? How did they exclude data scientists, or technical writers, or other engineering-adjacent folks who in some companies need to commit infrequently? How did they identify remote workers, and handle workers transitioning between roles or working arrangements?)<p>Without this extremely basic information, I think it makes sense to just ignore it. It's possible that it'll eventually pan out as real! But definitely not yet. Wild that people are taking it seriously.<p>[1] <a href="https://poetsandquants.com/2023/02/21/from-middle-school-dropout-to-stanford-mba-the-incredible-journey-of-yegor-denisov-blanch/" rel="nofollow">https://poetsandquants.com/2023/02/21/from-middle-school-dro...</a></p>
]]></description><pubDate>Mon, 09 Dec 2024 07:37:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=42363808</link><dc:creator>shaldengeki</dc:creator><comments>https://news.ycombinator.com/item?id=42363808</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42363808</guid></item><item><title><![CDATA[New comment by shaldengeki in "Torrent of Hate for Health Insurance Industry Follows CEO's Killing"]]></title><description><![CDATA[
<p>Here's a 14-year-old series showing that providers are the biggest chunk of the overspending in the system:<p><a href="https://web.archive.org/web/20210421025041/http://theincidentaleconomist.com/wordpress/what-makes-the-us-health-care-system-so-expensive-introduction/" rel="nofollow">https://web.archive.org/web/20210421025041/http://theinciden...</a></p>
]]></description><pubDate>Fri, 06 Dec 2024 09:09:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=42337872</link><dc:creator>shaldengeki</dc:creator><comments>https://news.ycombinator.com/item?id=42337872</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42337872</guid></item></channel></rss>