<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: felipeerias</title><link>https://news.ycombinator.com/user?id=felipeerias</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 07 Apr 2026 05:37:57 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=felipeerias" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by felipeerias in "In Japan, the robot isn't coming for your job; it's filling the one nobody wants"]]></title><description><![CDATA[
<p>The US implemented severe immigration restrictions in the 1920s that were lifted gradually over the 1950s–1960s.</p>
]]></description><pubDate>Mon, 06 Apr 2026 04:10:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47656904</link><dc:creator>felipeerias</dc:creator><comments>https://news.ycombinator.com/item?id=47656904</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47656904</guid></item><item><title><![CDATA[New comment by felipeerias in "In Japan, the robot isn't coming for your job; it's filling the one nobody wants"]]></title><description><![CDATA[
<p>Several European countries have already fallen into this trap. As pensioners comprise an increasingly large fraction of voters, pandering to them becomes far more politically attractive than investing in the future.</p>
]]></description><pubDate>Mon, 06 Apr 2026 04:05:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47656871</link><dc:creator>felipeerias</dc:creator><comments>https://news.ycombinator.com/item?id=47656871</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47656871</guid></item><item><title><![CDATA[New comment by felipeerias in "Emotion concepts and their function in a large language model"]]></title><description><![CDATA[
<p>A living brain exists physically, changes over time, and never stops working.<p>A brain cut from its body and frozen is a dead brain.</p>
]]></description><pubDate>Sat, 04 Apr 2026 22:20:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47644118</link><dc:creator>felipeerias</dc:creator><comments>https://news.ycombinator.com/item?id=47644118</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47644118</guid></item><item><title><![CDATA[New comment by felipeerias in "Emotion concepts and their function in a large language model"]]></title><description><![CDATA[
<p>An LLM is not intrinsically affected by time. The model rests completely inert until a query comes in, regardless of whether that happens once per second, per minute, or per day. The model is not even aware of these gaps unless that information is provided externally.<p>It is like a crystal that shows beautiful colours when you shine a light through it. You can play with different kinds of lights and patterns, or you can put it in a drawer and forget about it: the crystal doesn’t care either way.</p>
]]></description><pubDate>Sat, 04 Apr 2026 22:16:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47644077</link><dc:creator>felipeerias</dc:creator><comments>https://news.ycombinator.com/item?id=47644077</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47644077</guid></item><item><title><![CDATA[New comment by felipeerias in "Emotion concepts and their function in a large language model"]]></title><description><![CDATA[
<p>LLMs are disembodied and exist outside of time.<p>Bundle of tokens comes in, bundle of tokens comes out. If there is any trace of consciousness or subjectivity in there, it exists only while matrices are being multiplied.</p>
]]></description><pubDate>Sat, 04 Apr 2026 10:44:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47637855</link><dc:creator>felipeerias</dc:creator><comments>https://news.ycombinator.com/item?id=47637855</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47637855</guid></item><item><title><![CDATA[New comment by felipeerias in "Tell HN: Anthropic no longer allowing Claude Code subscriptions to use OpenClaw"]]></title><description><![CDATA[
<p>If they bundled together these two radically different usage patterns, either the service would become more expensive or the limits would become a lot tighter, in both cases making Claude Code far less attractive to professional users.</p>
]]></description><pubDate>Sat, 04 Apr 2026 01:35:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47634685</link><dc:creator>felipeerias</dc:creator><comments>https://news.ycombinator.com/item?id=47634685</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47634685</guid></item><item><title><![CDATA[New comment by felipeerias in "Coding agents could make free software matter again"]]></title><description><![CDATA[
<p>One could carefully calculate exactly how much a given document in the training set has influenced the LLM's weights involved in a particular response.<p>However, that number would typically be vanishingly small, making it hard to argue that the whole model is a derivative of that one individual document.<p>Nevertheless, a similar approach might work if you took a FOSS project as a whole, e.g. "the model knows a lot about the Linux kernel because it has been trained on its source code".<p>Even so, it is still not clear that this would necessarily be unlawful or make the LLM output a derivative work in all cases.<p>It seems to me that LLMs are trained on large FOSS projects as a way to teach them generalisable development skills, with the side effect of learning a lot about those particular projects.<p>So if I used an LLM to contribute to the kernel, it would clearly be drawing on information acquired during its training on the kernel's source code. Perhaps it could be argued that the output in that case would be a derivative?<p>But if I used an LLM to write a completely unrelated piece of software, the kernel training set would contribute far less to the output.</p>
]]></description><pubDate>Mon, 30 Mar 2026 03:57:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=47570231</link><dc:creator>felipeerias</dc:creator><comments>https://news.ycombinator.com/item?id=47570231</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47570231</guid></item><item><title><![CDATA[New comment by felipeerias in "Launching the Claude Partner Network"]]></title><description><![CDATA[
<p>The obvious solution is for Anthropic et al. to certify the skills of each user:<p>> “Good at explaining requirements, needs handholding to understand complex algorithms, picky with the wording of comments, slightly higher than average number of tokens per feature.”<p>I’m not saying this would be good at all, but the data (/insights) and the opportunity are clearly there.</p>
]]></description><pubDate>Sun, 15 Mar 2026 05:24:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47384585</link><dc:creator>felipeerias</dc:creator><comments>https://news.ycombinator.com/item?id=47384585</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47384585</guid></item><item><title><![CDATA[New comment by felipeerias in "Preliminary data from a longitudinal AI impact study"]]></title><description><![CDATA[
<p>Planning might end up being more reliable thanks to coding agents: if you want to estimate how long a task would take, just send an agent to do it.<p>If the agent comes back in a few minutes with a tiny fix, it is probably a small task.<p>If the agent produces a large, convoluted solution that would need careful review, it is at least a medium task.<p>And if the agent gets stuck, runs into architectural constraints, etc. then it is definitely a hard task.</p>
]]></description><pubDate>Thu, 12 Mar 2026 00:34:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47344652</link><dc:creator>felipeerias</dc:creator><comments>https://news.ycombinator.com/item?id=47344652</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47344652</guid></item><item><title><![CDATA[New comment by felipeerias in "Ask HN: Why there are no actual studies that show AI is more productive?"]]></title><description><![CDATA[
<p>Most people seem to be expecting some kind of quantitative analysis: N developers undertook M tasks with and without access to a given AI tool, here is the statistical evidence that shows (or fails to show) the effect, and this result is valid across other projects and tools.<p>In practice, arriving at this ideal scenario can be very challenging. Feasible experiments will necessarily be narrow, with the expectation that their results can be (roughly) extrapolated outside of their specific experimental setup.<p>Another valid approach would be to carry out qualitative research, for example a case study. This typically requires studying one (or a few) developers and their specific contexts in great detail. The idea is that a deep understanding of how one person navigates their work and their tools would provide us with insights that we can relate to our own situation.<p>Personally, in this particular area, I tend to prefer detailed qualitative accounts of how other developers are working on projects and with tools similar to mine.<p>But in any case, both approaches are valid and complementary.</p>
]]></description><pubDate>Sun, 08 Mar 2026 11:02:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47296343</link><dc:creator>felipeerias</dc:creator><comments>https://news.ycombinator.com/item?id=47296343</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47296343</guid></item><item><title><![CDATA[New comment by felipeerias in "Where things stand with the Department of War"]]></title><description><![CDATA[
<p>As someone looking at this from outside the US, the whole sequence of events is frankly terrifying.<p>I fear that frontier AI is going to be nationalised for military purposes, not just in the US but across the globe.<p>At the same time, I really don’t know what Anthropic were expecting when they described their technology as potentially more dangerous than an atom bomb while agreeing to integrate purpose-built models with Palantir to be deployed in high-security networks for classified military tasks.</p>
]]></description><pubDate>Fri, 06 Mar 2026 03:12:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47270382</link><dc:creator>felipeerias</dc:creator><comments>https://news.ycombinator.com/item?id=47270382</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47270382</guid></item><item><title><![CDATA[New comment by felipeerias in "Dario Amodei calls OpenAI’s messaging around military deal ‘straight up lies’"]]></title><description><![CDATA[
<p>It’s a bit more complex than that, but to be fair I don’t know what they were expecting after they integrated a purpose-built model with Palantir to be deployed in high-security networks to carry out classified tasks.</p>
]]></description><pubDate>Thu, 05 Mar 2026 07:18:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47258602</link><dc:creator>felipeerias</dc:creator><comments>https://news.ycombinator.com/item?id=47258602</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47258602</guid></item><item><title><![CDATA[New comment by felipeerias in "Dario Amodei calls OpenAI’s messaging around military deal ‘straight up lies’"]]></title><description><![CDATA[
<p>Reportedly, Anthropic didn't know about Claude's role in capturing Maduro until they saw it in the headlines.</p>
]]></description><pubDate>Thu, 05 Mar 2026 02:53:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47256907</link><dc:creator>felipeerias</dc:creator><comments>https://news.ycombinator.com/item?id=47256907</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47256907</guid></item><item><title><![CDATA[New comment by felipeerias in "Dario Amodei calls OpenAI’s messaging around military deal ‘straight up lies’"]]></title><description><![CDATA[
<p>Are you sure about that? All the information I’ve seen suggests that the DoD has been using Anthropic’s models <i>through</i> Palantir.<p>My understanding is that Anthropic requested visibility into, and a say in, how their models were being used for classified tasks, while the DoD wanted to expand the scope of those tasks into areas that Anthropic found objectionable. Each of those proposals was unacceptable to the other side.</p>
]]></description><pubDate>Thu, 05 Mar 2026 01:55:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47256520</link><dc:creator>felipeerias</dc:creator><comments>https://news.ycombinator.com/item?id=47256520</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47256520</guid></item><item><title><![CDATA[New comment by felipeerias in "zclaw: personal AI assistant in under 888 KB, running on an ESP32"]]></title><description><![CDATA[
<p>If it turns out that there is significant value in everyone having their own personal agent running 24/7, we might end up needing a lot more compute than anticipated.<p>(It’s a big if! I’m not convinced about that myself, but it’s worth considering that possibility.)</p>
]]></description><pubDate>Sun, 22 Feb 2026 01:23:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47107075</link><dc:creator>felipeerias</dc:creator><comments>https://news.ycombinator.com/item?id=47107075</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47107075</guid></item><item><title><![CDATA[New comment by felipeerias in "How did the Maya survive?"]]></title><description><![CDATA[
<p>Civilisations in the Americas were significantly less technologically developed than those in Eurasia. We focus our analysis on the Spanish and Portuguese, but the outcome would not have been much different had their place been taken by the Ottomans or the Chinese.<p>The Maya and the Aztecs were roughly at a similar level of development as ancient Sumer or Babylon: good agricultural practices, irrigation, astronomy, elaborate culture, rich mythologies, very basic metallurgy, early state structures, etc.<p>Sumer and Babylon were great civilisations whose legacy can still be traced today. The same is true for the Maya and the Aztecs. Had you visited any of them in their prime, you would have been awed by their skill and sophistication.<p>And yet, think of everything that happened in Eurasia between Hammurabi and Columbus, and you will get a sense of how wide the gap was when the two worlds met.</p>
]]></description><pubDate>Fri, 13 Feb 2026 19:24:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47006638</link><dc:creator>felipeerias</dc:creator><comments>https://news.ycombinator.com/item?id=47006638</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47006638</guid></item><item><title><![CDATA[New comment by felipeerias in "The Singularity will occur on a Tuesday"]]></title><description><![CDATA[
<p>"Here's the thing nobody tells you", "here's the part that should unsettle you"…</p>
]]></description><pubDate>Wed, 11 Feb 2026 16:25:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=46976964</link><dc:creator>felipeerias</dc:creator><comments>https://news.ycombinator.com/item?id=46976964</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46976964</guid></item><item><title><![CDATA[New comment by felipeerias in "TikTok is officially US-owned for American users, here's what's changing"]]></title><description><![CDATA[
<p>I am European and live in Japan.<p>China is currently providing weapons for Russia to wage large-scale war in Europe, while supporting a dictatorship in North Korea that regularly launches nuclear-capable missiles in my general direction.</p>
]]></description><pubDate>Sun, 25 Jan 2026 06:36:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=46751392</link><dc:creator>felipeerias</dc:creator><comments>https://news.ycombinator.com/item?id=46751392</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46751392</guid></item><item><title><![CDATA[New comment by felipeerias in "Wilson Lin on FastRender: a browser built by parallel agents"]]></title><description><![CDATA[
<p>If you were looking for a good long-term AI benchmark, “build me a Web browser” should last you for a while.</p>
]]></description><pubDate>Sat, 24 Jan 2026 10:53:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=46742531</link><dc:creator>felipeerias</dc:creator><comments>https://news.ycombinator.com/item?id=46742531</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46742531</guid></item><item><title><![CDATA[New comment by felipeerias in "The recurring dream of replacing developers"]]></title><description><![CDATA[
<p>Does that automatically translate into more openings for the people whose full-time job is providing that thing? I’m not sure that it does.<p>Historically, lowering the number of people needed to produce a good is often precisely what makes it cheaper.<p>So it’s not hard to imagine a world where AI tools make expert software developers significantly more productive while enabling other workers to use their own little programs and automations in their own jobs.<p>In such a world, the number of “lines of code” being used would be much greater than today.<p>But it is not clear to me that the number of people working full time as “software developers” would be larger as well.</p>
]]></description><pubDate>Sat, 17 Jan 2026 22:46:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=46662866</link><dc:creator>felipeerias</dc:creator><comments>https://news.ycombinator.com/item?id=46662866</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46662866</guid></item></channel></rss>