<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: math_dandy</title><link>https://news.ycombinator.com/user?id=math_dandy</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 16 Apr 2026 05:08:26 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=math_dandy" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by math_dandy in "Sheafification – The optimal path to mathematical mastery: The fast track (2022)"]]></title><description><![CDATA[
<p>Not sure about these books as a self-study curriculum — their unifying theme seems to be that they require a reasonable level of mathematical maturity going in. But they absolutely make up an excellent “greatest hits” list of math books in the most influential subdisciplines. You’re guaranteed to learn a tonne if you study any one of these books.</p>
]]></description><pubDate>Sun, 31 Aug 2025 17:17:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=45084926</link><dc:creator>math_dandy</dc:creator><comments>https://news.ycombinator.com/item?id=45084926</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45084926</guid></item><item><title><![CDATA[New comment by math_dandy in "The lottery ticket hypothesis: why neural networks work"]]></title><description><![CDATA[
<p>I don't buy the narrative that the article is promoting.<p>I think the machine learning community was largely over its overfitophobia by 2019, and people were routinely using overparametrized models capable of interpolating their training data while still generalizing well.<p>The Belkin et al. paper wasn't heresy. The authors were making a technical point: that certain theories of generalization are incompatible with this interpolation phenomenon.<p>The lottery ticket hypothesis paper's demonstration of the ubiquity of "winning tickets" (sparse parameter configurations that generalize) is striking, but these "winning tickets" aren't the solutions found by stochastic gradient descent (SGD) in practice. In the interpolating regime, the minima found by SGD are simple in a different sense, one perhaps more closely related to generalization. In the case of logistic regression, they are maximum-margin classifiers; see <a href="https://arxiv.org/pdf/1710.10345" rel="nofollow">https://arxiv.org/pdf/1710.10345</a>.<p>The article points out some cool papers, but the narrative of plucky researchers bucking orthodoxy in 2019 doesn't track for me.</p>
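<p>To make the implicit-bias point concrete, here's a minimal sketch (my own toy example, not from the linked paper): plain gradient descent on logistic loss over a tiny separable dataset. The weight norm keeps growing while the normalized classifier keeps separating the data, with the margin creeping up.</p>

```python
import math

# Tiny linearly separable dataset, labels +1 / -1.
X = [(2.0, 1.0), (1.5, 2.0), (-1.0, -1.5), (-2.0, -0.5)]
Y = [1, 1, -1, -1]

def grad_step(w, lr=0.1):
    """One gradient-descent step on the mean logistic loss."""
    g0 = g1 = 0.0
    for (x1, x2), yi in zip(X, Y):
        s = yi * (w[0] * x1 + w[1] * x2)
        coef = -yi / (1.0 + math.exp(s))  # derivative of log(1 + e^{-s})
        g0 += coef * x1 / len(X)
        g1 += coef * x2 / len(X)
    return (w[0] - lr * g0, w[1] - lr * g1)

def min_margin(w):
    """Smallest normalized margin y * <w, x> / ||w|| over the data."""
    nrm = math.hypot(w[0], w[1]) or 1.0
    return min(yi * (w[0] * x1 + w[1] * x2) / nrm
               for (x1, x2), yi in zip(X, Y))

w = (0.0, 0.0)
snapshots = {}
for t in range(1, 5001):
    w = grad_step(w)
    if t in (500, 5000):
        snapshots[t] = (math.hypot(w[0], w[1]), min_margin(w))

# ||w|| grows without bound (slowly, roughly like log t), so the loss never
# hits zero, yet the direction w/||w|| stabilizes toward the max-margin one.
```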
]]></description><pubDate>Mon, 18 Aug 2025 22:58:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=44946215</link><dc:creator>math_dandy</dc:creator><comments>https://news.ycombinator.com/item?id=44946215</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44946215</guid></item><item><title><![CDATA[New comment by math_dandy in "Imagen 4 is now generally available"]]></title><description><![CDATA[
<p>I was going to nitpick the missing apostrophe in the movie poster's caption ("STARFALLS REVENGE"), but it's missing from the prompt, too.</p>
]]></description><pubDate>Fri, 15 Aug 2025 19:09:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=44916318</link><dc:creator>math_dandy</dc:creator><comments>https://news.ycombinator.com/item?id=44916318</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44916318</guid></item><item><title><![CDATA[New comment by math_dandy in "Imagen 4 is now generally available"]]></title><description><![CDATA[
<p>To the left of the "detailed spaceship" I think I see a distortion pattern reminiscent of a cloaked Klingon bird of prey moving to the right. Or I'm just hallucinating patterns in nebular noise.</p>
]]></description><pubDate>Fri, 15 Aug 2025 19:03:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=44916267</link><dc:creator>math_dandy</dc:creator><comments>https://news.ycombinator.com/item?id=44916267</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44916267</guid></item><item><title><![CDATA[New comment by math_dandy in "GPT-5"]]></title><description><![CDATA[
<p>Two schools of thought here. One posits that models need to have a strict "symbolic" representation of the world explicitly built in by their designers before they will be able to approach human levels of ability, adaptability and reliability. The other thinks that models approaching human levels of ability, adaptability, and reliability will constitute evidence for the emergence of strict "symbolic" representations.</p>
]]></description><pubDate>Thu, 07 Aug 2025 20:06:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=44829692</link><dc:creator>math_dandy</dc:creator><comments>https://news.ycombinator.com/item?id=44829692</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44829692</guid></item><item><title><![CDATA[New comment by math_dandy in "ZjsComponent: A Pragmatic Approach to Reusable UI Fragments for Web Development"]]></title><description><![CDATA[
<p>TLDR: Browser vendors made Shadow DOM for themselves.<p>Browser implementors use Shadow DOM extensively under the hood for built-in HTML elements with internal structure: range inputs, audio and video controls, etc. These elements absolutely need to work everywhere and be consistent, so extreme encapsulation and a fixed API for styling them are an absolute must.<p>The Shadow DOM API is the browsers exposing, to developers, a foundational piece of functionality.<p>If you’re thinking about whether Shadow DOM is appropriate for your use case, consider how and why the vendors use it: when an element’s API needs to be totally locked down to guarantee it works in contexts they have no control over. Conversely, if your potential use case is scoped to a single project, the encapsulation imposed (necessarily!) by Shadow DOM is probably overkill.<p>Web components are a decent way to make reusable UI, but if they don’t have strong encapsulation needs, you might avoid Shadow DOM.</p>
]]></description><pubDate>Tue, 17 Jun 2025 00:44:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=44294684</link><dc:creator>math_dandy</dc:creator><comments>https://news.ycombinator.com/item?id=44294684</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44294684</guid></item><item><title><![CDATA[New comment by math_dandy in "Seven replies to the viral Apple reasoning paper and why they fall short"]]></title><description><![CDATA[
<p>I was hoping the accepted definition would not use humans as a baseline, rather that humans would be an (the) example of AGI.</p>
]]></description><pubDate>Sun, 15 Jun 2025 02:51:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=44280202</link><dc:creator>math_dandy</dc:creator><comments>https://news.ycombinator.com/item?id=44280202</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44280202</guid></item><item><title><![CDATA[New comment by math_dandy in "Waymo rides cost more than Uber or Lyft and people are paying anyway"]]></title><description><![CDATA[
<p>In-car product vending will come soon enough I’m sure.</p>
]]></description><pubDate>Sat, 14 Jun 2025 19:23:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=44278250</link><dc:creator>math_dandy</dc:creator><comments>https://news.ycombinator.com/item?id=44278250</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44278250</guid></item><item><title><![CDATA[New comment by math_dandy in "V-JEPA 2 world model and new benchmarks for physical reasoning"]]></title><description><![CDATA[
<p>Could you give more details about what precisely you mean by interpolation and generalization? The commonplace use of “generalization” in the machine learning textbooks I’ve been studying is model performance (whatever metric is deemed relevant) on new data from the training distribution. In particular, it’s meaningful when you’re modeling p(y|x) and not the generative distribution p(x,y).</p>
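<p>For concreteness, the textbook notion I have in mind fits in a few lines (a toy sketch of my own, with made-up numbers): fit on one sample, then evaluate on a fresh sample drawn from the same distribution, and compare.</p>

```python
import random

random.seed(0)

def draw(n):
    """Sample n points from a fixed distribution: y = 2x + Gaussian noise."""
    pts = []
    for _ in range(n):
        x = random.uniform(-1.0, 1.0)
        pts.append((x, 2.0 * x + random.gauss(0.0, 0.1)))
    return pts

train, test = draw(200), draw(200)

# Least-squares slope fit (no intercept) on the training sample.
slope = sum(x * y for x, y in train) / sum(x * x for x, y in train)

def mse(data):
    """Mean squared error of the fitted model on a dataset."""
    return sum((y - slope * x) ** 2 for x, y in data) / len(data)

# "Generalization" in the narrow sense: test error on new data from the
# training distribution; the gap to training error is what regularization
# theories try to bound.
gap = mse(test) - mse(train)
```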
]]></description><pubDate>Wed, 11 Jun 2025 21:45:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=44252154</link><dc:creator>math_dandy</dc:creator><comments>https://news.ycombinator.com/item?id=44252154</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44252154</guid></item><item><title><![CDATA[New comment by math_dandy in "Why quadratic funding is not optimal"]]></title><description><![CDATA[
<p>I’m reading a winking, ironic acknowledgement from the authors that the mathematical definition of individual utility may not map perfectly onto the psychology of a patron of the arts.</p>
]]></description><pubDate>Mon, 09 Jun 2025 18:18:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=44227447</link><dc:creator>math_dandy</dc:creator><comments>https://news.ycombinator.com/item?id=44227447</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44227447</guid></item><item><title><![CDATA[New comment by math_dandy in "Ask HN: How to learn CUDA to professional level"]]></title><description><![CDATA[
<p>Are there any GPU emulators you can use to run simple CUDA programs on a commodity laptop, just to get comfortable with the mechanics, the toolchain, etc.?</p>
]]></description><pubDate>Sun, 08 Jun 2025 15:49:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=44217692</link><dc:creator>math_dandy</dc:creator><comments>https://news.ycombinator.com/item?id=44217692</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44217692</guid></item><item><title><![CDATA[New comment by math_dandy in "AI makes the humanities more important, but also weirder"]]></title><description><![CDATA[
<p>As a university professor, I admit with some shame that accessibility issues for specific problem types are not on my radar. “Innovation” isn’t the main culprit here.<p>Fortunately, my university has a good accessibility center that takes care of accommodation issues (large print versions of tests, etc.). I just send them my tests and they take care of it. It’s a great service, and absolutely crucial because I simply don’t have the time to customize assessments. I assume they would get in touch if they were unable to retrofit accessibility onto an assessment, but that hasn’t happened in my fifteen years of employment.</p>
]]></description><pubDate>Tue, 03 Jun 2025 13:56:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=44170224</link><dc:creator>math_dandy</dc:creator><comments>https://news.ycombinator.com/item?id=44170224</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44170224</guid></item><item><title><![CDATA[New comment by math_dandy in "Younger generations less likely to have dementia, study suggests"]]></title><description><![CDATA[
<p>How was smoking identified as the cause of dementia in the individual you mention?</p>
]]></description><pubDate>Tue, 03 Jun 2025 01:42:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=44165396</link><dc:creator>math_dandy</dc:creator><comments>https://news.ycombinator.com/item?id=44165396</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44165396</guid></item><item><title><![CDATA[New comment by math_dandy in "RSC for Lisp Developers"]]></title><description><![CDATA[
<p>I think RSC is trying to answer the question, “How can we make server rendering and sprinkles of interactivity <i>composable</i>?” What if you want your sprinkles to have server rendered content inside of them, each of which may contain other interactive/dynamic elements?<p>I posit that any composable version of sprinkles or the “island architecture” will closely resemble RSC.<p>Only a small fraction of apps will ever use the full power of the RSC architecture. However, the React team doesn’t build apps, they build primitives for building apps. And good primitives are composable.</p>
]]></description><pubDate>Sun, 01 Jun 2025 18:07:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=44152701</link><dc:creator>math_dandy</dc:creator><comments>https://news.ycombinator.com/item?id=44152701</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44152701</guid></item><item><title><![CDATA[New comment by math_dandy in "The Two Ideals of Fields"]]></title><description><![CDATA[
<p>Historically, mathematicians have spent a huge amount of time and effort formulating optimal axioms and foundations so that theorems would follow naturally from structure. Theorems following “trivially” from a theoretical framework that took years to develop isn’t an indictment of the theorem, but an endorsement of the incredible effort expended to develop an optimal context for expressing and understanding the theorem.</p>
]]></description><pubDate>Sat, 31 May 2025 15:37:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=44144951</link><dc:creator>math_dandy</dc:creator><comments>https://news.ycombinator.com/item?id=44144951</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44144951</guid></item><item><title><![CDATA[New comment by math_dandy in "Running GPT-2 in WebGL: Rediscovering the Lost Art of GPU Shader Programming"]]></title><description><![CDATA[
<p>Cool, I had not heard about this. Adding this paper to my machine learning teaching bibliography.<p>Even though the start of the deep learning renaissance is typically dated to 2012 with AlexNet, things were in motion well before that. As you point out, GPU training was validated at least 8 years previously. Concurrently, some very prescient researchers like Li were working hard to generate large scale datasets like ImageNet (CVPR 2009, <a href="https://www.image-net.org/static_files/papers/imagenet_cvpr09.pdf" rel="nofollow">https://www.image-net.org/static_files/papers/imagenet_cvpr0...</a>). And in 2012 it all came together.</p>
]]></description><pubDate>Tue, 27 May 2025 21:41:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=44110882</link><dc:creator>math_dandy</dc:creator><comments>https://news.ycombinator.com/item?id=44110882</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44110882</guid></item><item><title><![CDATA[New comment by math_dandy in "Trying to teach in the age of the AI homework machine"]]></title><description><![CDATA[
<p>I think this is a good approach.</p>
]]></description><pubDate>Tue, 27 May 2025 01:22:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=44103125</link><dc:creator>math_dandy</dc:creator><comments>https://news.ycombinator.com/item?id=44103125</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44103125</guid></item><item><title><![CDATA[New comment by math_dandy in "Trying to teach in the age of the AI homework machine"]]></title><description><![CDATA[
<p>Proctoring services done well could be valuable, but it’s smaller rural and remote communities that would benefit most. Maybe these services could be offered by local schools, libraries, etc.</p>
]]></description><pubDate>Tue, 27 May 2025 00:13:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=44102823</link><dc:creator>math_dandy</dc:creator><comments>https://news.ycombinator.com/item?id=44102823</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44102823</guid></item><item><title><![CDATA[New comment by math_dandy in "Trying to teach in the age of the AI homework machine"]]></title><description><![CDATA[
<p>We have an Accessible Testing Center that will administer and proctor exams under very flexible conditions (more time, breaks, quiet/privacy, …) to help students with various forms of neurodivergence. They’re very good and offer a valuable service without placing any significant additional burden on the instructor. Seems to work well, but I don’t have firsthand knowledge about how these forms of accommodations are viewed by the neurodivergent student community. They certainly don’t address the problem of allowing “explorer” students to demonstrate their abilities.</p>
]]></description><pubDate>Mon, 26 May 2025 22:36:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=44102303</link><dc:creator>math_dandy</dc:creator><comments>https://news.ycombinator.com/item?id=44102303</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44102303</guid></item><item><title><![CDATA[New comment by math_dandy in "Trying to teach in the age of the AI homework machine"]]></title><description><![CDATA[
<p>Centralization and IT-ification have made flouting difficult. There’s one common course site on the institution’s learning management system for all sections, where assignments are distributed and collected via upload dropbox, and where grades are tabulated and communicated.<p>So far, it’s still possible to opt out of this coordinated model, and I have been. But I suspect the ability to opt out will soon come under attack (the pretext will be ‘uniformity == fairness’). I never used to be an academic freedom maximalist who viewed the notion in the widest sense, but I’m beginning to see my error.</p>
]]></description><pubDate>Mon, 26 May 2025 22:28:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=44102260</link><dc:creator>math_dandy</dc:creator><comments>https://news.ycombinator.com/item?id=44102260</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44102260</guid></item></channel></rss>