<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: matricks</title><link>https://news.ycombinator.com/user?id=matricks</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 10 Apr 2026 10:16:33 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=matricks" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by matricks in "The changing goalposts of AGI and timelines"]]></title><description><![CDATA[
<p>> can’t really move beyond their training data<p>I don’t even think humans can “move beyond” their sensory data. They generalize from it, which is amazing, but they are still limited by it.* So why is this a reasonable standard for non-biological intelligence?<p>We have compelling evidence that both can learn in unsupervised settings. (I grant that one has to wrap a transformer model in a training harness, but how can anyone sincerely consider that a disqualifier while admitting that an infant cannot raise itself from birth?)<p>I’m happy to discuss nuance like different architectures (carbon versus silicon, neurons versus ANNs, etc.), but the human tendency to move the goalposts is not something to be proud of. We really need to stop doing this.<p>* Jeff Hawkins describes the brain as relentlessly searching for invariants in its sensory data. It finds patterns and generalizes from them.</p>
]]></description><pubDate>Sun, 08 Mar 2026 18:09:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47299516</link><dc:creator>matricks</dc:creator><comments>https://news.ycombinator.com/item?id=47299516</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47299516</guid></item><item><title><![CDATA[New comment by matricks in "The changing goalposts of AGI and timelines"]]></title><description><![CDATA[
<p>It depends.<p>SoTA models are at least very close to AGI when it comes to textual and still-image inputs for most domains. In many domains, SoTA AI is already superhuman in speed. (Not with respect to energy efficiency.*)<p>SoTA AI for video is clearly not at AGI level.<p>Many people distinguish intelligence from memory. With this in mind, I think one can argue we’ve reached AGI in terms of “intelligence”; we just haven’t paired it with enough memory yet.<p>* Humans have a really compelling advantage in efficiency; brains need something like 20W. But AGI as a threshold has nothing directly to do with power efficiency, does it?</p>
]]></description><pubDate>Sun, 08 Mar 2026 17:57:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47299414</link><dc:creator>matricks</dc:creator><comments>https://news.ycombinator.com/item?id=47299414</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47299414</guid></item><item><title><![CDATA[New comment by matricks in "LLM Writing Tropes.md"]]></title><description><![CDATA[
<p>> There's precious little training material left that isn't generated by LLMs themselves.<p>Percentage-wise this is quite exaggerated.<p>> Consider this to be model collapse (i.e. we might be at the best SOTA possible with the approach we use today - any further training is going to degrade it).<p>You expect this one factor to lead to model collapse? One factor isn’t enough. I’m aware of the garbage-in, garbage-out concern, yes. Still, there are at least ~5 other key factors needed to make a halfway decent scaling prediction.<p>It is worth mentioning one outside view here: any human technology tends to advance as long as there are incentives and/or enthusiasts pushing it. I don’t usually bet against motivated humans eventually getting somewhere, provided they aren’t trying to exceed the actual laws of physics. The bets I find interesting are about future scenarios, rates of change, technological interactions, and new discoveries.<p>Here are two predictions I hold with high uncertainty. First, the transformer as an architectural construct will NOT be tossed out within the next five years in favor of something better at the same level. Second, SoTA AI performance will probably advance via better fine-tuning methods, hybrid architectures, and agent workflows.</p>
]]></description><pubDate>Sun, 08 Mar 2026 17:37:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47299217</link><dc:creator>matricks</dc:creator><comments>https://news.ycombinator.com/item?id=47299217</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47299217</guid></item><item><title><![CDATA[New comment by matricks in "Penpot, Open Source Figma alternative, raises $8M in funding"]]></title><description><![CDATA[
<p>Calling such a concept a 'layer' is quite confusing, given my experience with Photoshop, OmniGraffle, Pixelmator, and most other drawing tools I've seen. Why not just call it an "element" or "shape" or "item"?</p>
]]></description><pubDate>Thu, 29 Sep 2022 00:11:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=33014802</link><dc:creator>matricks</dc:creator><comments>https://news.ycombinator.com/item?id=33014802</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33014802</guid></item><item><title><![CDATA[New comment by matricks in "Ask HN: Explaining many short tenures due to poor mental health"]]></title><description><![CDATA[
<p>First, you are not alone. Seek out support of all kinds. Don't blame "yourself" ... remember that the very idea of "self" is always changing and only a small part of it is under short-term conscious control. Mental conditions can be very hard in certain environments, but more manageable in others.<p>Never forget that such conditions also bring advantages, not the least of which is empathy!<p>I also have ADHD and anxiety, plus a history of not effectively managing my disappointment when things at work seem batshit crazy. It has taken me a long time to recognize that a significant level of organizational dysfunction is very common.<p>To answer your question: I don't have any ironclad answers. You can gather ideas like you are doing and try experimenting.<p>Try organizing (grouping) your resume in different ways -- by topic or skill, for example.<p>You can also list just the year for each job instead of month ranges. Perhaps even leave out the dates entirely.</p>
]]></description><pubDate>Wed, 28 Sep 2022 11:48:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=33006834</link><dc:creator>matricks</dc:creator><comments>https://news.ycombinator.com/item?id=33006834</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33006834</guid></item></channel></rss>