<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: _hark</title><link>https://news.ycombinator.com/user?id=_hark</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 06 May 2026 08:26:37 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=_hark" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by _hark in "New statue in London, attributed to Banksy, of a suited man, blinded by a flag"]]></title><description><![CDATA[
<p>Yeah. The safety of the message is underwritten by its state sanction.</p>
]]></description><pubDate>Sun, 03 May 2026 21:45:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=48001868</link><dc:creator>_hark</dc:creator><comments>https://news.ycombinator.com/item?id=48001868</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48001868</guid></item><item><title><![CDATA[New comment by _hark in "Billion-Parameter Theories"]]></title><description><![CDATA[
<p>You literally can do a kind of model PCA, using the Hessian (matrix of second derivatives of the loss function w/r/t the parameters, aka the local curvature of the loss landscape), and diagonalizing. These eigenvectors and eigenvalues (the spectrum of the Hessian) tend to be power-law distributed in just about every deep NN you can think of [1].<p>That is, there are a few "really important" (highly curved) dimensions in parameter space (the top eigenvectors) which control the model's performance (the loss function). Conversely, there are very many "unimportant"/low curvature dimensions in the model. There was a recent interesting paper that showed that "deleting" these low-curvature dimensions appeared to correspond to removing "memorized" information in LLMs, such that their reasoning performance was left unchanged while their ability to answer questions which require some memorized knowledge was reduced [2].<p>It appears that sometimes models undergo dramatic transitions from memorization to perfect generalization, which corresponds to the models becoming much more compressible [3].<p>I'm hopeful that we'll find a way to distill the models down to the most useful core cognitive/reasoning capabilities, and that that core will be far simpler than the current scale of LLMs. But they might need to look stuff up like we do without all that memorized world knowledge!<p>[1]: <a href="https://openreview.net/pdf?id=o62ZzfCEwZ" rel="nofollow">https://openreview.net/pdf?id=o62ZzfCEwZ</a><p>[2]: <a href="https://www.goodfire.ai/research/understanding-memorization-via-loss-curvature" rel="nofollow">https://www.goodfire.ai/research/understanding-memorization-...</a><p>[3]: <a href="https://arxiv.org/abs/2412.09810" rel="nofollow">https://arxiv.org/abs/2412.09810</a></p>
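<p>As a rough illustration of the diagonalization step only (not the setup used in [1] or [2]), here is a minimal numpy sketch: it forms the Hessian of a tiny toy network's loss by finite differences and looks at the eigenvalue spectrum. The network, data, and sizes are all made up for the example.</p>
<pre><code>
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a tiny one-hidden-layer net on random regression data,
# with all parameters flattened into one vector so we can form the full Hessian.
X, y = rng.normal(size=(64, 3)), rng.normal(size=64)
shapes = [(3, 4), (4,), (4, 1), (1,)]
sizes = [int(np.prod(s)) for s in shapes]

def unpack(theta):
    parts, i = [], 0
    for s, n in zip(shapes, sizes):
        parts.append(theta[i:i + n].reshape(s))
        i += n
    return parts

def loss(theta):
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)
    pred = (h @ W2 + b2).ravel()
    return np.mean((pred - y) ** 2)

theta = rng.normal(scale=0.5, size=sum(sizes))

def hessian(f, x, eps=1e-4):
    # H[i, j] = second derivative of f w/r/t x_i and x_j, by central differences.
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            xpp = x.copy(); xpp[i] += eps; xpp[j] += eps
            xpm = x.copy(); xpm[i] += eps; xpm[j] -= eps
            xmp = x.copy(); xmp[i] -= eps; xmp[j] += eps
            xmm = x.copy(); xmm[i] -= eps; xmm[j] -= eps
            H[i, j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4 * eps ** 2)
    return H

H = hessian(loss, theta)
eigvals = np.linalg.eigvalsh(H)[::-1]  # curvature spectrum, sharpest directions first
print(eigvals[:5])   # a few high-curvature ("important") directions
print(eigvals[-5:])  # a long tail of nearly flat ("unimportant") ones
</code></pre>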
]]></description><pubDate>Wed, 11 Mar 2026 02:06:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47331048</link><dc:creator>_hark</dc:creator><comments>https://news.ycombinator.com/item?id=47331048</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47331048</guid></item><item><title><![CDATA[New comment by _hark in "Tesla is heading into multi-billion-dollar iceberg of its own making"]]></title><description><![CDATA[
<p>I don't recall Andrej making "next year!" claims; it was always Elon. I found Andrej's talks from that time circumspect and precise in describing their ideas and approach, without engaging in timeline speculation.</p>
]]></description><pubDate>Tue, 21 Oct 2025 13:03:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=45655247</link><dc:creator>_hark</dc:creator><comments>https://news.ycombinator.com/item?id=45655247</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45655247</guid></item><item><title><![CDATA[New comment by _hark in "Tesla is heading into multi-billion-dollar iceberg of its own making"]]></title><description><![CDATA[
<p>They really should have just marketed the software "as-is" to whatever extent is allowed by law. I guess they didn't because deployed automobile software is probably not allowed to be considered experimental.<p>Still, comms framed like "This software purchase upgrades your car with state-of-the-art autonomy capabilities from our AI team as we approach full self-driving" would have been more honest, still exciting to consumers, and would have avoided over-promising.</p>
]]></description><pubDate>Tue, 21 Oct 2025 12:25:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=45654951</link><dc:creator>_hark</dc:creator><comments>https://news.ycombinator.com/item?id=45654951</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45654951</guid></item><item><title><![CDATA[New comment by _hark in "If the University of Chicago won't defend the humanities, who will?"]]></title><description><![CDATA[
<p><a href="https://archive.ph/GWBEl" rel="nofollow">https://archive.ph/GWBEl</a></p>
]]></description><pubDate>Sun, 05 Oct 2025 17:08:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=45483276</link><dc:creator>_hark</dc:creator><comments>https://news.ycombinator.com/item?id=45483276</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45483276</guid></item><item><title><![CDATA[New comment by _hark in "Oxford loses top 3 university ranking in the UK"]]></title><description><![CDATA[
<p>There aren't merit-based scholarships to any Ivy League schools; they all offer need-based financial aid packages.</p>
]]></description><pubDate>Sun, 21 Sep 2025 20:58:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=45326577</link><dc:creator>_hark</dc:creator><comments>https://news.ycombinator.com/item?id=45326577</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45326577</guid></item><item><title><![CDATA[New comment by _hark in "Oxford loses top 3 university ranking in the UK"]]></title><description><![CDATA[
<p>I'm a researcher at Oxford, and I've both taught and studied here and in the US.<p>The undergraduate teaching here is phenomenal. It's incredibly labor-intensive for the staff, but the depth and breadth students are exposed to in their subject is astonishing. It's difficult to imagine how it can be improved.<p>My favorite study of university rankings comes from faculty hiring markets, which compute implicit rankings by measuring which institutions tend to hire (PhD->faculty) from others. [1] It's not perfect, but at the very least it's a parameter-free way to get a sense of how different universities view each other. The parameters in most university rankings are rather arbitrary and gameable.<p>Some have pointed to things like contextual admissions [2], and more broadly to identity-politics capture of the administration, as reasons for declining standards. While this might be true, in my view Oxford is still far more meritocratic than US institutions on the whole. There are no legacy admissions, and many subjects have difficult tests which better distinguish between applicants who have all done extremely well on national standardised tests (British A Levels are far more difficult than the SAT/ACT/AP exams).<p>Lastly, admissions at Oxford are devolved to the individual colleges, of which there are ~40. The faculty at each college directly interview and select the applicants whom they will take as students. This devolved system and the friction it creates are surprisingly robust and make complete ideological capture more difficult.<p>The most pressing issue for Oxford's long-term viability as a leading institution, in my view, is the funding situation. For one, the British economy is in a long, slow decline. Secondly, even though Oxford has money, there is a lot of regulation/soft-power influence from the British govt to standardise pay across the country, which makes top institutions like Oxford less competitive on the international market for PhD students, postdocs, and faculty in terms of pay.<p>[1]: <a href="https://www.science.org/doi/10.1126/sciadv.1400005" rel="nofollow">https://www.science.org/doi/10.1126/sciadv.1400005</a><p>[2]: <a href="https://www.ox.ac.uk/admissions/undergraduate/applying-to-oxford/decisions/contextual-data" rel="nofollow">https://www.ox.ac.uk/admissions/undergraduate/applying-to-ox...</a></p>
]]></description><pubDate>Sun, 21 Sep 2025 17:54:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=45325058</link><dc:creator>_hark</dc:creator><comments>https://news.ycombinator.com/item?id=45325058</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45325058</guid></item><item><title><![CDATA[New comment by _hark in "All Souls exam questions and the limits of machine reasoning"]]></title><description><![CDATA[
<p>I sat the All Souls exam, taking the philosophy specialist papers, though I'm a math/physics/ML guy. It was a lot of fun; I really appreciate that there's somewhere in the world where these kinds of questions are asked in a formal setting. My questions/answers are written up in brief here [1].<p>[1] <a href="https://www.reddit.com/r/oxforduni/comments/q0giir/my_all_souls_exam_experience/" rel="nofollow">https://www.reddit.com/r/oxforduni/comments/q0giir/my_all_so...</a><p>* Oops, they link to my post at the bottom. Sorry for the redundancy.</p>
]]></description><pubDate>Thu, 14 Aug 2025 20:49:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=44905552</link><dc:creator>_hark</dc:creator><comments>https://news.ycombinator.com/item?id=44905552</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44905552</guid></item><item><title><![CDATA[New comment by _hark in "Genie 3: A new frontier for world models"]]></title><description><![CDATA[
<p>Very cool! I've done research on reinforcement/imitation learning in world models. A great intro to these ideas is here: <a href="https://worldmodels.github.io/" rel="nofollow">https://worldmodels.github.io/</a><p>I'm most excited for when these methods will make a meaningful difference in robotics. RL is still not quite there for long-horizon, sparse reward tasks in non-zero-sum environments, even with a perfect simulator; e.g. an assistant which books travel for you. Pay attention to when virtual agents start to really work well as a leading signal for this. Virtual agents are strictly easier than physical ones.<p>Compounding on that, mismatches between the simulated dynamics and real dynamics make the problem harder (sim2real problem). Although with domain randomization and online corrections (control loop, search) this is less of an issue these days.<p>Multi-scale effects are also tricky: the characteristic temporal length scale for many actions in robotics can be quite different from the temporal scale of the task (e.g. manipulating ingredients to cook a meal). Locomotion was solved first because it's periodic imo.<p>Check out PufferAI if you're scale-pilled for RL: just do RL bigger, better, get the basics right. Check out Physical Intelligence for the same in robotics, with a more imitation/offline RL feel.</p>
]]></description><pubDate>Tue, 05 Aug 2025 16:58:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=44800702</link><dc:creator>_hark</dc:creator><comments>https://news.ycombinator.com/item?id=44800702</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44800702</guid></item><item><title><![CDATA[Can Tinygrad Win?]]></title><description><![CDATA[
<p>Article URL: <a href="https://geohot.github.io//blog/jekyll/update/2025/07/06/can-tinygrad-win.html">https://geohot.github.io//blog/jekyll/update/2025/07/06/can-tinygrad-win.html</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44485374">https://news.ycombinator.com/item?id=44485374</a></p>
<p>Points: 16</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 07 Jul 2025 00:12:05 +0000</pubDate><link>https://geohot.github.io//blog/jekyll/update/2025/07/06/can-tinygrad-win.html</link><dc:creator>_hark</dc:creator><comments>https://news.ycombinator.com/item?id=44485374</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44485374</guid></item><item><title><![CDATA[Oxford Ionics acquired by IonQ for 1B]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.ft.com/content/dde7bac2-cacb-4deb-9223-5bcfe285db15">https://www.ft.com/content/dde7bac2-cacb-4deb-9223-5bcfe285db15</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44248180">https://news.ycombinator.com/item?id=44248180</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 11 Jun 2025 14:44:49 +0000</pubDate><link>https://www.ft.com/content/dde7bac2-cacb-4deb-9223-5bcfe285db15</link><dc:creator>_hark</dc:creator><comments>https://news.ycombinator.com/item?id=44248180</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44248180</guid></item><item><title><![CDATA[New comment by _hark in "Intel sells 51% stake in Altera to private equity firm on a $8.75B valuation"]]></title><description><![CDATA[
<p>If FPGAs are competitive on perf/watt, why aren't they more widespread (other than crap software tooling)?<p>Honestly I've asked different hardware researchers this question and they all seem to give different answers.</p>
]]></description><pubDate>Tue, 15 Apr 2025 15:36:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=43694339</link><dc:creator>_hark</dc:creator><comments>https://news.ycombinator.com/item?id=43694339</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43694339</guid></item><item><title><![CDATA[New comment by _hark in "Ironwood: The first Google TPU for the age of inference"]]></title><description><![CDATA[
<p>Can anyone comment on where efficiency gains come from these days at the arch level? I.e. not process-node improvements.<p>Are there a few big things, many small things...? I'm curious what low-hanging fruit is left for fast SIMD matrix multiplication.</p>
]]></description><pubDate>Wed, 09 Apr 2025 14:52:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=43632818</link><dc:creator>_hark</dc:creator><comments>https://news.ycombinator.com/item?id=43632818</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43632818</guid></item><item><title><![CDATA[New comment by _hark in "What do people see when they're tripping? Analyzing Erowid's trip reports"]]></title><description><![CDATA[
<p>Entropy is not absolute!<p>The entropy of some data is well-defined with respect to a model, but the model choice is free. I.e. different models will assign different entropy to the same data.<p>And how do we choose a model...? Well, formally, by minimizing the information needed to describe both the model and the data (the sum of model complexity and data entropy under the model) [1].<p>You might argue that's all too information-theoretic and in <i>physics</i> there simply is an objective count of the state-space, a maximum entropy, and so on. Alas, there is not even general consensus on whether there is a locally finite number of degrees of freedom.<p>[1]: <a href="https://en.wikipedia.org/wiki/Minimum_description_length" rel="nofollow">https://en.wikipedia.org/wiki/Minimum_description_length</a></p>
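<p>To make the model-relativity concrete, here's a tiny sketch (toy string and toy models, nothing from the linked article): the same data gets a different code length, i.e. a different entropy, under a uniform model vs. a model fit to its empirical frequencies, and an MDL-style total also charges for describing the model itself.</p>
<pre><code>
import math
from collections import Counter

# One string, coded under two different models.
data = "abababababababababababcab"

# Model A: uniform over a 26-letter alphabet.
bits_uniform = len(data) * math.log2(26)

# Model B: i.i.d. letters with this string's empirical frequencies.
counts = Counter(data)
probs = {c: n / len(data) for c, n in counts.items()}
bits_empirical = -sum(math.log2(probs[c]) for c in data)

# Crude MDL-style total for model B: data bits plus a made-up cost of
# 8 bits per stored frequency, just to show the model is not free.
model_cost = 8 * len(counts)

print(f"uniform model:   {bits_uniform:.1f} bits")
print(f"empirical model: {bits_empirical:.1f} data bits + {model_cost} model bits")
</code></pre>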
]]></description><pubDate>Sat, 01 Mar 2025 19:18:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=43222634</link><dc:creator>_hark</dc:creator><comments>https://news.ycombinator.com/item?id=43222634</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43222634</guid></item><item><title><![CDATA[New comment by _hark in "Fewer students are enrolling in doctoral degrees"]]></title><description><![CDATA[
<p>Maybe a correction is needed. Academia has become so gamified. It's supposed to be about ideas, truth, beauty. Too many are in it for the prestige, which has ironically made it less prestigious.<p>Very few true eccentrics left.</p>
]]></description><pubDate>Thu, 13 Feb 2025 15:41:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=43036978</link><dc:creator>_hark</dc:creator><comments>https://news.ycombinator.com/item?id=43036978</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43036978</guid></item><item><title><![CDATA[The Complexity Dynamics of Grokking]]></title><description><![CDATA[
<p>Article URL: <a href="https://brantondemoss.com/research/grokking/">https://brantondemoss.com/research/grokking/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42434095">https://news.ycombinator.com/item?id=42434095</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 16 Dec 2024 18:59:12 +0000</pubDate><link>https://brantondemoss.com/research/grokking/</link><dc:creator>_hark</dc:creator><comments>https://news.ycombinator.com/item?id=42434095</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42434095</guid></item><item><title><![CDATA[New comment by _hark in "World Labs: Generate 3D worlds from a single image"]]></title><description><![CDATA[
<p>Interesting. You need some local structure with global coherence. But you want it to be complex, not too regular. Like a Penrose Tiling.</p>
]]></description><pubDate>Tue, 03 Dec 2024 12:43:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=42305532</link><dc:creator>_hark</dc:creator><comments>https://news.ycombinator.com/item?id=42305532</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42305532</guid></item><item><title><![CDATA[New comment by _hark in "The $5000 Compression Challenge (2001)"]]></title><description><![CDATA[
<p>Hmm. At least it's still fine to define limits of the complexity for infinite strings. That should be unique, e.g.:<p>lim n->\infty K(X|n)/n<p>Possible solutions that come to mind:<p>1) UTMs are actually too powerful, and we should use a finitary abstraction to have a more sensible measure of complexity for finite strings.<p>2) We might need to define a kind of "relativity of complexity". This is my preferred approach and something I've thought about to some degree. That is, we want a way of describing the complexity of something <i>relative</i> to our computational resources.</p>
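<p>K itself is uncomputable, but as a loose, resource-relative proxy for lim n->\infty K(X|n)/n you can watch compressed bits per symbol over growing prefixes using an off-the-shelf compressor; that gives an upper bound relative to that particular compressor, which is roughly the "relativity" flavor in (2). A throwaway sketch, with strings and lengths made up for illustration:</p>
<pre><code>
import zlib
import random

# Crude, compressor-relative proxy for lim K(X|n)/n:
# compressed bits per symbol of the first n symbols.
def bits_per_symbol(s, n):
    return 8 * len(zlib.compress(s[:n].encode(), 9)) / n

periodic = "ab" * 50_000                                   # highly regular
rng = random.Random(0)
noisy = "".join(rng.choice("ab") for _ in range(100_000))  # pseudorandom

for n in (1_000, 10_000, 100_000):
    print(n, round(bits_per_symbol(periodic, n), 3),
          round(bits_per_symbol(noisy, n), 3))
</code></pre>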
]]></description><pubDate>Mon, 25 Nov 2024 12:31:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=42235704</link><dc:creator>_hark</dc:creator><comments>https://news.ycombinator.com/item?id=42235704</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42235704</guid></item><item><title><![CDATA[New comment by _hark in "The $5000 Compression Challenge (2001)"]]></title><description><![CDATA[
<p>Interesting. I guess then we would only be interested in the normalized complexity of infinite strings, e.g. lim n -> \infty K(X|n)/n, where X is an infinite sequence of numbers (e.g. the decimal expansion of some real number), and K(X|n) is the complexity of the first n of them. This quantity should still be unique w/o reference to the choice of UTM, no?</p>
]]></description><pubDate>Mon, 25 Nov 2024 12:13:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=42235581</link><dc:creator>_hark</dc:creator><comments>https://news.ycombinator.com/item?id=42235581</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42235581</guid></item><item><title><![CDATA[New comment by _hark in "Starlink Direct to Cell"]]></title><description><![CDATA[
<p>This could also be a hardware startup. If only there were some entrepreneur types around...<p>Presumably there's a market for this in other niches, e.g. weather monitoring, defense/border monitoring, etc... The question is whether the juice is worth the squeeze. Where's the really valuable data?</p>
]]></description><pubDate>Sun, 24 Nov 2024 23:26:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=42231535</link><dc:creator>_hark</dc:creator><comments>https://news.ycombinator.com/item?id=42231535</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42231535</guid></item></channel></rss>