<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: itkovian_</title><link>https://news.ycombinator.com/user?id=itkovian_</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 16 May 2026 08:25:42 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=itkovian_" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by itkovian_ in "The sigmoids won't save you"]]></title><description><![CDATA[
<p>The other thing people don’t understand is that exponential curves are self-similar: the start of an exponential looks like an exponential. People look at one and think ‘well, that’s it, it’s exponential now, I’ve missed it, it can’t be sustained.’ Nope.<p>A good example of this is the number of submissions to NeurIPS/ICML/ICLR. In 2017 that curve was already exponential.</p>
]]></description><pubDate>Fri, 15 May 2026 17:16:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=48151244</link><dc:creator>itkovian_</dc:creator><comments>https://news.ycombinator.com/item?id=48151244</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48151244</guid></item><item><title><![CDATA[New comment by itkovian_ in "Major AI conference flooded with peer reviews written by AI"]]></title><description><![CDATA[
<p>The argument is that there is no incentive to carefully review a paper (I agree); what used to happen, though, is that people would do the right thing without explicit incentives. That has totally disappeared.</p>
]]></description><pubDate>Sat, 29 Nov 2025 17:48:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=46089346</link><dc:creator>itkovian_</dc:creator><comments>https://news.ycombinator.com/item?id=46089346</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46089346</guid></item><item><title><![CDATA[New comment by itkovian_ in "Major AI conference flooded with peer reviews written by AI"]]></title><description><![CDATA[
<p>Whether it’s actually 20% or not doesn’t matter; everyone is aware the signal of the top conferences is in freefall.<p>There are also rings of reviewer fraud, where groups of people in these niche areas all get assigned each other’s papers and recommend acceptance, and in many cases the AC is part of this as well. I’m not saying this is common, but it is occurring.<p>It feels as if every layer of society is in maximum extraction mode, and this is just a single example. No one is spending the time to carefully and deeply review a paper because they care and feel, on principle, that it’s the right thing to do. People used to do this.</p>
]]></description><pubDate>Sat, 29 Nov 2025 17:43:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=46089316</link><dc:creator>itkovian_</dc:creator><comments>https://news.ycombinator.com/item?id=46089316</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46089316</guid></item><item><title><![CDATA[Node-0-7.5B: A collaborative multi-participant, model-parallel pretrain]]></title><description><![CDATA[
<p>Article URL: <a href="https://dashboard.pluralis.ai/">https://dashboard.pluralis.ai/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45270148">https://news.ycombinator.com/item?id=45270148</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 17 Sep 2025 00:39:50 +0000</pubDate><link>https://dashboard.pluralis.ai/</link><dc:creator>itkovian_</dc:creator><comments>https://news.ycombinator.com/item?id=45270148</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45270148</guid></item><item><title><![CDATA[New comment by itkovian_ in "Stanford to continue legacy admissions and withdraw from Cal Grants"]]></title><description><![CDATA[
<p>These are some of the richest entities - forget about universities - just entities full stop, in the entire country.</p>
]]></description><pubDate>Sat, 09 Aug 2025 22:28:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=44850917</link><dc:creator>itkovian_</dc:creator><comments>https://news.ycombinator.com/item?id=44850917</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44850917</guid></item><item><title><![CDATA[New comment by itkovian_ in "Does the Bitter Lesson Have Limits?"]]></title><description><![CDATA[
<p>I don’t think people understand the point Sutton was making: he’s saying that general, simple systems that get better with scale tend to outperform hand-engineered systems that don’t. It’s a subtle point that implicitly says hand engineering inhibits scale because it inhibits generality. He is not saying anything about the rate, and he doesn’t claim LLMs/gradient descent are the best system; in fact I’d guess he thinks there’s likely an even more general approach that would be better. It’s comparing two classes of approaches, not commenting on the merits of particular systems.</p>
]]></description><pubDate>Fri, 01 Aug 2025 22:30:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=44763166</link><dc:creator>itkovian_</dc:creator><comments>https://news.ycombinator.com/item?id=44763166</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44763166</guid></item><item><title><![CDATA[New comment by itkovian_ in "OpenAI raises $8.3B at $300B valuation"]]></title><description><![CDATA[
<p>I’m gonna go ahead and guess they didn’t raise 8.3b on SAFEs</p>
]]></description><pubDate>Fri, 01 Aug 2025 15:52:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=44758615</link><dc:creator>itkovian_</dc:creator><comments>https://news.ycombinator.com/item?id=44758615</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44758615</guid></item><item><title><![CDATA[New comment by itkovian_ in "Why are we pretending AI is going to take all the jobs?"]]></title><description><![CDATA[
<p>The article doesn’t say jobs aren’t about to be eviscerated; it says this is already happening, blames capitalism and a lack of consumer protections, and calls for more government regulation. This never made sense to me, because we don’t have to guess how that would go: the experiment is being run in Europe right now.<p>Also, the core of the argument is wrong; AI is clearly displacing jobs, and this is happening today.</p>
]]></description><pubDate>Thu, 24 Jul 2025 05:11:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=44667163</link><dc:creator>itkovian_</dc:creator><comments>https://news.ycombinator.com/item?id=44667163</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44667163</guid></item><item><title><![CDATA[New comment by itkovian_ in "LLM Inevitabilism"]]></title><description><![CDATA[
<p>The reason for this is that it’s horrifying to consider that things like the Ukrainian war didn’t have to happen. It provides a huge amount of psychological relief to view these events as inevitable. I actually don’t think we as humans are even able to conceptualise/internalise suffering on those scales as individuals. I can’t, at least.<p>And then, ultimately, if you believe we have democracies in the West, it means we are all individually culpable as well. It’s just a line of logic that becomes extremely distressing, and so there’s a huge, natural, and probably healthy bias away from thinking like that.</p>
]]></description><pubDate>Wed, 16 Jul 2025 02:47:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=44578192</link><dc:creator>itkovian_</dc:creator><comments>https://news.ycombinator.com/item?id=44578192</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44578192</guid></item><item><title><![CDATA[New comment by itkovian_ in "Judge rejects Meta's claim that torrenting is “irrelevant” in AI copyright case"]]></title><description><![CDATA[
<p>I think the better analogy is: if you had someone with a superhuman but not perfect memory read a bunch of stuff, and then you were allowed to talk to that person about the things they’d read, does that violate copyright? I’d say clearly no.<p>Then what if their memory is so good they repeat entire sections verbatim when asked. Does that violate it? I’d say it’s grey.<p>But that’s a very specific case; reproducing large chunks of owned work is something that can be quite easily detected and prevented, and I’m almost certain the frontier labs are already doing this.<p>So I think it’s just not very clear. The reality is that this is a novel situation, and the job of the courts is now to basically decide what’s allowed and what’s not. But the rationale shouldn’t be ‘this can’t be fair use, it’s just compression’, because it’s clearly something fundamentally different and existing laws just aren’t applicable, imo.</p>
]]></description><pubDate>Fri, 27 Jun 2025 09:55:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=44395415</link><dc:creator>itkovian_</dc:creator><comments>https://news.ycombinator.com/item?id=44395415</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44395415</guid></item><item><title><![CDATA[New comment by itkovian_ in "Q-learning is not yet scalable"]]></title><description><![CDATA[
<p>Completely agree, and I think it’s a great summary. To summarize very succinctly: you’re chasing a moving target, where the target changes based on how you move. There’s no ground truth to zero in on in value-based RL. You minimise a difference in which both sides of the equation contain your APPROXIMATION.<p>I don’t think it’s hopeless though; I actually think RL is very close to working, because what it lacked this whole time was a reliable world model/forward dynamics function (because then you don’t have to explore, you can plan). And now we’ve got that.</p>
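<p>The moving-target point above can be sketched minimally (tabular Q; the state/action sizes, reward, and learning rate here are arbitrary illustration values, not from the comment):</p>

```python
import numpy as np

# Sketch of the bootstrapped TD target in value-based RL. The squared error is
#   ( r + gamma * max_a' Q(s', a')  -  Q(s, a) )^2
# and BOTH terms contain the same approximation Q, so the "target"
# moves whenever Q itself changes.

rng = np.random.default_rng(0)
n_states, n_actions = 5, 3
Q = rng.normal(size=(n_states, n_actions))  # our approximation of Q*

def td_error(Q, s, a, r, s_next, gamma=0.99):
    target = r + gamma * Q[s_next].max()  # target is built from Q itself
    return target - Q[s, a]               # prediction is also from Q

delta = td_error(Q, s=0, a=1, r=1.0, s_next=2)
Q[0, 1] += 0.1 * delta  # this update also shifts every target that bootstraps through (0, 1)
```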
]]></description><pubDate>Sun, 15 Jun 2025 09:49:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=44281452</link><dc:creator>itkovian_</dc:creator><comments>https://news.ycombinator.com/item?id=44281452</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44281452</guid></item><item><title><![CDATA[New comment by itkovian_ in "AGI is not multimodal"]]></title><description><![CDATA[
<p>Saying we should tokenize different modalities the same way would be analogous to saying that in order to be really smart, a human has to listen with their eyes. At some point there has to be SOME modality-specific preprocessing. The thing is, in all current SOTA architectures this modality-specific preprocessing is very, very shallow, almost trivially shallow. I feel this is the piece of information that may be missing for people with this view. In the multimodal models everything is moving to a shared representation very rapidly; that’s clearly already happening.<p>On the ‘we need an RL loop rather than a generative model’ point, I’d say this is the consensus position today!</p>
]]></description><pubDate>Thu, 05 Jun 2025 13:50:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=44191702</link><dc:creator>itkovian_</dc:creator><comments>https://news.ycombinator.com/item?id=44191702</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44191702</guid></item><item><title><![CDATA[New comment by itkovian_ in "AGI is not multimodal"]]></title><description><![CDATA[
<p>I don’t want to bash the guy since he’s still in his PhD, but it’s written in such a confident tone for something that is so all over the place that I think it’s fair game.<p>Like a lot of the symbolic/embodied people, the issue is they don’t have a deep understanding of how the big models work or are trained, so they come to weird conclusions. Things that aren’t wrong but make you go ‘ok... but what are you trying to say’.<p>E.g. ‘Instead of pre-supposing structure in individual modalities, we should design a setting in which modality-specific processing emerges naturally.’ This seems to lack the understanding that a vision transformer is completely identical to a standard transformer except for the tokenization, which is just embedding a grid of patches and adding positional embeddings. Transformers are so general that what he’s asking us to do is exactly what everyone is already doing. Everything is early fusion now too.<p>“The overall promise of scale maximalism is that a Frankenstein AGI can be sewed together using general models of narrow domains.” No one is suggesting this; everyone wants to do it end to end, and also thinks that’s the most likely thing to work. Some proposals, like LeCun’s JEPAs, do suggest inducing some structure in the architecture, but even there the driving force is to allow gradients to flow everywhere.<p>For a lot of the other conclusions, the statements are almost literally equivalent to ‘to build AGI, we first need to understand how to build AGI’. Zero actionable information content.</p>
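<p>How shallow that modality-specific step is can be shown in a few lines (a minimal sketch with standard ViT-ish shapes; the image size, patch size, and random projections are illustrative placeholders):</p>

```python
import numpy as np

# ViT-style tokenization: cut the image into a grid of patches, linearly
# project each flattened patch, add positional embeddings. Everything
# downstream of `tokens` is an ordinary transformer over a token sequence.

rng = np.random.default_rng(0)
img = rng.normal(size=(224, 224, 3))  # H, W, C
P, D = 16, 768                        # patch size, model dim

# (224/16)^2 = 196 patches, each flattened to 16*16*3 = 768 values
patches = img.reshape(14, P, 14, P, 3).transpose(0, 2, 1, 3, 4).reshape(196, -1)

W_embed = rng.normal(size=(patches.shape[1], D)) * 0.02  # linear projection
pos = rng.normal(size=(196, D)) * 0.02                   # positional embeddings

tokens = patches @ W_embed + pos  # shape (196, 768): a plain token sequence
```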
]]></description><pubDate>Thu, 05 Jun 2025 12:18:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=44190909</link><dc:creator>itkovian_</dc:creator><comments>https://news.ycombinator.com/item?id=44190909</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44190909</guid></item><item><title><![CDATA[New comment by itkovian_ in "Ask HN: Any insider takes on Yann LeCun's push against current architectures?"]]></title><description><![CDATA[
<p>And in this categorization, autoregressive LLMs are contrastive, due to the cross-entropy loss.</p>
]]></description><pubDate>Sat, 15 Mar 2025 00:06:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=43368630</link><dc:creator>itkovian_</dc:creator><comments>https://news.ycombinator.com/item?id=43368630</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43368630</guid></item><item><title><![CDATA[New comment by itkovian_ in "Ask HN: Any insider takes on Yann LeCun's push against current architectures?"]]></title><description><![CDATA[
<p>The fundamental distinction is usually drawn against contrastive approaches (i.e., make the correct thing more likely and make everything else we just compared unlikely). EBMs are "only what is correct is more likely, and the default for everything else is unlikely".<p>This is obviously an extremely high-level simplification, but that's the core of it.</p>
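<p>A toy numpy illustration of the two loss shapes (the margin-based energy loss below is just one common EBM objective, chosen for concreteness; all numbers are arbitrary):</p>

```python
import numpy as np

# Contrastive (softmax cross-entropy): one loss term whose gradient pushes
# the correct score up AND every other compared score down.
def contrastive_loss(scores, correct):
    z = scores - scores.max()               # for numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[correct]

# Margin-based energy objective: ask for low energy on the correct
# configuration; other configurations are only penalized if their energy
# falls inside the margin, rather than being normalized against.
def energy_margin_loss(energy_correct, energies_other, margin=1.0):
    return energy_correct + np.maximum(0.0, margin - energies_other).sum()

scores = np.array([2.0, 0.5, -1.0])
c_loss = contrastive_loss(scores, correct=0)
e_loss = energy_margin_loss(energy_correct=0.2, energies_other=np.array([1.5, 0.4]))
```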
]]></description><pubDate>Sat, 15 Mar 2025 00:03:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=43368609</link><dc:creator>itkovian_</dc:creator><comments>https://news.ycombinator.com/item?id=43368609</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43368609</guid></item><item><title><![CDATA[New comment by itkovian_ in "Ask HN: Any insider takes on Yann LeCun's push against current architectures?"]]></title><description><![CDATA[
<p>>This is due to the fact that LLMs are basically just giant look up maps with interpolation.<p>This is obviously not true at this point, except under the loosest definition of interpolation.<p>>don't rely on things like differentiability.<p>I've never heard LeCun say we need to move away from gradient descent. The opposite, actually.</p>
]]></description><pubDate>Fri, 14 Mar 2025 23:58:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=43368585</link><dc:creator>itkovian_</dc:creator><comments>https://news.ycombinator.com/item?id=43368585</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43368585</guid></item><item><title><![CDATA[New comment by itkovian_ in "U.S. pauses intelligence sharing with Ukraine used for strikes on Russia"]]></title><description><![CDATA[
<p>I mean, there’s lots one could say here. Probably the most straightforward is Igor Danchenko, the primary source for the Steele dossier, stating that he never intended for the claims to be taken seriously.</p>
]]></description><pubDate>Wed, 05 Mar 2025 15:29:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=43267850</link><dc:creator>itkovian_</dc:creator><comments>https://news.ycombinator.com/item?id=43267850</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43267850</guid></item><item><title><![CDATA[New comment by itkovian_ in "U.S. pauses intelligence sharing with Ukraine used for strikes on Russia"]]></title><description><![CDATA[
<p>It's an extraordinary claim. I think the reason I dismiss it as unlikely is that, when I look back at the Steele dossier and the Mueller investigation: 1) if there was something, it's very likely they would have found it then; 2) in hindsight both investigations were completely discredited and shown to be largely an institutional response to the shock of 2016, and this current re-emergence of 'Trump is a Russian agent' is kind of surprising in that context; 3) I think the current behavior can be explained by a desire to end the conflict, while feeling no particular allegiance to Ukraine.</p>
]]></description><pubDate>Wed, 05 Mar 2025 15:21:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=43267725</link><dc:creator>itkovian_</dc:creator><comments>https://news.ycombinator.com/item?id=43267725</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43267725</guid></item><item><title><![CDATA[New comment by itkovian_ in "ChatGPT Saved My Life (no, seriously, I'm writing this from the ER)"]]></title><description><![CDATA[
<p>All competitive open models today share a common property: someone spent a large amount of money to train them and then released the model for free.<p>I don't understand why the argument continues to be that we will have a rich ecosystem of open source base models; unlike open source software, which is individuals donating time, open source AI requires someone to donate very large amounts of capital.</p>
]]></description><pubDate>Tue, 25 Feb 2025 16:47:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=43174245</link><dc:creator>itkovian_</dc:creator><comments>https://news.ycombinator.com/item?id=43174245</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43174245</guid></item><item><title><![CDATA[New comment by itkovian_ in "ChatGPT Saved My Life (no, seriously, I'm writing this from the ER)"]]></title><description><![CDATA[
<p>This story is a good example of why there's no "killer app" for LLMs. They are the killer app. There's no stack to this tech.<p>People don't want to admit this because of the massive concentration of power that becomes clear once you accept it.</p>
]]></description><pubDate>Tue, 25 Feb 2025 15:11:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=43172869</link><dc:creator>itkovian_</dc:creator><comments>https://news.ycombinator.com/item?id=43172869</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43172869</guid></item></channel></rss>