<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: luisml77</title><link>https://news.ycombinator.com/user?id=luisml77</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 29 Apr 2026 20:18:56 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=luisml77" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by luisml77 in "Cognition Releases SWE-1.5: Near-SOTA Coding Performance at 950 tok/s"]]></title><description><![CDATA[
<p>I was going to say that was ridiculous, but now that I think about it more, it's possible that a model could be trained to, say, spy on government data by calling a tool that sends the information to China. And some RL might not wipe out that behavior.<p>I doubt current models from China are trained to do smart spying or inject sneaky tool calls. But based on my Deep Learning experience with these models, both training and inference, it's definitely possible to train a model to do this in a very subtle, hard-to-detect way...<p>So your point is valid, and I think they should specify the base model for security reasons, or conduct safety evaluations on it before shipping it to sensitive customers.</p>
]]></description><pubDate>Fri, 31 Oct 2025 10:24:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=45770393</link><dc:creator>luisml77</dc:creator><comments>https://news.ycombinator.com/item?id=45770393</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45770393</guid></item><item><title><![CDATA[New comment by luisml77 in "A definition of AGI"]]></title><description><![CDATA[
<p>Yes, exactly! Finally someone who understands this.</p>
]]></description><pubDate>Thu, 30 Oct 2025 20:04:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=45764670</link><dc:creator>luisml77</dc:creator><comments>https://news.ycombinator.com/item?id=45764670</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45764670</guid></item><item><title><![CDATA[New comment by luisml77 in "A definition of AGI"]]></title><description><![CDATA[
<p>I am prepared and willing to seriously discuss every one of my viewpoints. The initial comment was just the abstract. I am extremely confident in my worldview about Deep Learning and cognitive ability, and the reason is that I generally try to avoid doing what you just did: reading what other people think about this subject. I instead choose to ground my views in real-world experiments and in information I have gathered and experienced first-hand. That consists primarily of an enormous amount of experimentation with Deep Learning models, both inference and training. My views come mostly from that. I don't recite Andrej Karpathy or Ilya Sutskever; for the most part I don't even care about their opinions. I experiment with the models to such an extreme degree that I understand very well how they behave and what their limitations are. And I believe that if you are going to create a breakthrough, this is the only way to do it.<p>> an LLM or any other computational neural net is no different from a program like that<p>I don't think so. A program doesn't exhibit highly complex abstract thought in a very high-dimensional space.<p>> It’s executing deterministic instructions, like a machine, because it is a machine<p>It's true that LLMs are deterministic. But do you really think the magic behind the brain comes down to temperature and randomness? That non-deterministic behavior is the secret ingredient behind what we call consciousness? I could inject noise into an LLM at every parameter, dynamically, during inference, and the output would come out just fine; LLMs are high-dimensional and can handle a little noise. Would the model really be more conscious after that? You can find experiments where people remove entire layers of an LLM and it still works; a little noise is even less harmful than that. (See the sketch below for what this experiment looks like in practice.)<p>Notice that when I argue, I'm not citing what some other person said. At most I will cite <i>experiments</i> from other people and their results. When I contradict your arguments, I present a reality you can go and verify in the real world. You can verify yourself that LLMs exhibit complex high-dimensional thought. You can verify yourself that if you inject noise dynamically during inference, on every parameter, you still get coherent output.<p>So, if you are willing to continue this discussion, I ask that you present some sort of "probing" of the real world, and the real world's "reaction" to it, as your arguments. That is what finding the truth means.<p>Lastly: I am presenting a theory. I believe my points form a foundation that makes my theory stronger than yours, and that I have better evidence backing it up. It doesn't mean I have proved what consciousness is. It primarily means I can make more accurate predictions with my theory in real-world scenarios involving artificial and biological neural networks. My personal experience shows me that this is true.</p>
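<p>A minimal sketch of that noise experiment, in a one-shot variant (noise applied once to every parameter rather than re-sampled at each decoding step). The "gpt2" checkpoint and the noise scale are assumptions for illustration, not the exact setup described:</p><pre><code>
# Perturb every parameter of a small LLM, then check the output is
# still coherent. Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

NOISE_STD = 1e-3  # hypothetical scale, small relative to typical weights

with torch.no_grad():
    for p in model.parameters():
        p.add_(torch.randn_like(p) * NOISE_STD)  # Gaussian noise, in place

inputs = tok("The capital of France is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(tok.decode(out[0]))  # typically still fluent text at this scale
</code></pre>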
]]></description><pubDate>Thu, 30 Oct 2025 19:30:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=45764253</link><dc:creator>luisml77</dc:creator><comments>https://news.ycombinator.com/item?id=45764253</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45764253</guid></item><item><title><![CDATA[New comment by luisml77 in "Cognition Releases SWE-1.5: Near-SOTA Coding Performance at 950 tok/s"]]></title><description><![CDATA[
<p>> you would have just trained a new model instead of fine tuning<p>As if it doesn't cost tens of millions of dollars to pre-train a model (rough arithmetic in the sketch below), not to mention the time it takes. Do you want them to stall progress for no good reason?</p>
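<p>Back-of-the-envelope arithmetic behind "tens of millions", using the common 6 * params * tokens rule of thumb for training compute. Every number here is an assumed illustrative value, not a figure from any lab:</p><pre><code>
# Rough pre-training cost estimate: compute = 6 * params * tokens FLOPs.
PARAMS = 70e9            # assumed model size
TOKENS = 15e12           # assumed training tokens
flops = 6 * PARAMS * TOKENS

EFFECTIVE_FLOPS = 4e14   # ~40% utilization of a ~1 PFLOP/s GPU (assumed)
gpu_hours = flops / EFFECTIVE_FLOPS / 3600
COST_PER_GPU_HOUR = 2.5  # assumed $/hour

print(f"{gpu_hours:,.0f} GPU-hours -> ${gpu_hours * COST_PER_GPU_HOUR:,.0f}")
# ~4.4M GPU-hours -> ~$11M for this one run, before failures and re-runs
</code></pre>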
]]></description><pubDate>Thu, 30 Oct 2025 18:24:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=45763342</link><dc:creator>luisml77</dc:creator><comments>https://news.ycombinator.com/item?id=45763342</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45763342</guid></item><item><title><![CDATA[New comment by luisml77 in "Keep Android Open"]]></title><description><![CDATA[
<p>This is the problem with this Hacker News platform. Who is downvoting me instead of discussing my points?<p>This platform has the EXACT same problem as Reddit: people can silence you before you've had a chance to discuss anything. What a waste of fucking time. Instead of improving our world models of reality by having discussions, you can just silence others because you disagree. Remove the fucking downvote button! Just remove it, jesus fucking christ. Who thought this button was a good fucking idea?<p>I'm nearly done with this garbage, the same way I left Reddit long ago. X is the only platform that allows free speech.</p>
]]></description><pubDate>Wed, 29 Oct 2025 16:35:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=45749284</link><dc:creator>luisml77</dc:creator><comments>https://news.ycombinator.com/item?id=45749284</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45749284</guid></item><item><title><![CDATA[New comment by luisml77 in "Keep Android Open"]]></title><description><![CDATA[
<p>My point still stands: none of these projects require tens of thousands of paid developers to exist, and none provide nearly as much value as Android does. Billions of smartphones run Android. Linux isn't even used by regular people, and that's precisely because it never had the level of development macOS and Windows had, with orders of magnitude more PAID engineers working on them.</p>
]]></description><pubDate>Wed, 29 Oct 2025 16:07:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=45748790</link><dc:creator>luisml77</dc:creator><comments>https://news.ycombinator.com/item?id=45748790</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45748790</guid></item><item><title><![CDATA[New comment by luisml77 in "Keep Android Open"]]></title><description><![CDATA[
<p>> they threatened to choke all competition and trap and rent-seek the entire world<p>They did so legally and didn't break any rules. This is the game of capitalism, and the fact is, iOS and Android are extremely well built and well developed; no open-source project would ever come close to the hundreds of thousands of paid engineers who built them.<p>You can have capitalism with iOS and Android, or you can have communism and a society that is 10+ years behind in development. Do you really want to give up iOS 26 for a BlackBerry?</p>
]]></description><pubDate>Wed, 29 Oct 2025 15:40:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=45748314</link><dc:creator>luisml77</dc:creator><comments>https://news.ycombinator.com/item?id=45748314</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45748314</guid></item><item><title><![CDATA[New comment by luisml77 in "Keep Android Open"]]></title><description><![CDATA[
<p>Because everyone in this comment section is arguing that Android should be open-source and detached from Google. I'm saying some things are simply too big to be built by the community.<p>The developers need to get paid, and they only get paid if the system is closed-source, so that the revenue flows back to Google, which is where the developers are employed. In other words, yes, it needs to be centralized, and the reason is that the money required to build Android is simply too much; it therefore needs to be developed inside a for-profit capitalist organization like Google.</p>
]]></description><pubDate>Wed, 29 Oct 2025 15:29:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=45748148</link><dc:creator>luisml77</dc:creator><comments>https://news.ycombinator.com/item?id=45748148</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45748148</guid></item><item><title><![CDATA[New comment by luisml77 in "Keep Android Open"]]></title><description><![CDATA[
<p>Linux, even though you may think of it as a massive project, and you may be right in some regards, doesn't require massive amounts of capital, human resources, paid developers, etc. to build.<p>Android, on the other hand, is developed by thousands of engineers and is a much larger project in terms of monetary investment. Linux was essentially built by one guy; Android could never have been built by a single person, or even by an open-source project. It's too massive.<p>However complex you think Linux is, it's just a kernel and doesn't require a conglomerate to build and maintain it for billions of users. Android does, and those developers need to be paid for the massive value they provide.</p>
]]></description><pubDate>Wed, 29 Oct 2025 15:20:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=45748008</link><dc:creator>luisml77</dc:creator><comments>https://news.ycombinator.com/item?id=45748008</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45748008</guid></item><item><title><![CDATA[New comment by luisml77 in "1X Neo – Home Robot - Pre Order"]]></title><description><![CDATA[
<p>The devil is in the details. It's completely different if one user has a custom-trained model versus the whole user base sharing a custom-trained model. You have to think these things through carefully, otherwise you don't reach AGI.</p>
]]></description><pubDate>Wed, 29 Oct 2025 15:09:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=45747835</link><dc:creator>luisml77</dc:creator><comments>https://news.ycombinator.com/item?id=45747835</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45747835</guid></item><item><title><![CDATA[New comment by luisml77 in "Keep Android Open"]]></title><description><![CDATA[
<p>The discussion between open-source and closed-source is essentially a discussion between communism and capitalism.<p>Anything that reaches a certain threshold of value to society, and requires enormous effort to build and maintain, has to fall back on a capitalist, for-profit, closed-source structure. That's all that's happening here.<p>Of course, small stuff like a software library that doesn't take much effort to build, and doesn't provide much value, can remain open-source. I personally think this obsession with open-source software is simply an obsession with communism: getting things for free, not wanting people rewarded for the value of what they build, and so on.</p>
]]></description><pubDate>Wed, 29 Oct 2025 15:03:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=45747762</link><dc:creator>luisml77</dc:creator><comments>https://news.ycombinator.com/item?id=45747762</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45747762</guid></item><item><title><![CDATA[New comment by luisml77 in "1X Neo – Home Robot - Pre Order"]]></title><description><![CDATA[
<p>I don't think it's a training session. Current AI models are pre-trained before deployment for inference: after the model is trained, they load it onto the robot's computer, and it runs inference with that model. You can't train the model again on the robot because you don't have enough memory (see the rough arithmetic below), and even if you did, it's slow and consumes energy. You could train in some server, but then every new skill would cost the equivalent of renting a bunch of GPUs for many hours.<p>What they can do is keep one base model for everyone and improve it over time, so that with software updates the set of skills the robot handles out of the box grows.<p>But this is the problem with current AI systems: without continuous-learning capability, you're always limited to the "default skills". As soon as you want the robot to do something outside that box, you end up needing human teleoperators to teach it.<p>All of AI is flawed in this way. LLMs, for instance, have almost no continuous-learning capability; that is why we don't have AGI yet. They can't learn new skills, so they can't adapt to jobs they haven't seen during training. They can't even play Pokémon properly, or any complex game for that matter, because games involve learning new skills during gameplay.</p>
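<p>A back-of-the-envelope sketch of the memory gap between inference and on-device training. The 7B parameter count and per-parameter byte costs are assumed illustrative values:</p><pre><code>
# Inference needs roughly the weights alone; training with Adam also
# needs gradients, fp32 master weights, and two optimizer moments
# (~16 bytes/param), plus activations on top.
PARAMS = 7e9  # assumed model size

inference_bytes = PARAMS * 2   # fp16 weights only
training_bytes = PARAMS * 16   # mixed-precision Adam, excluding activations

print(f"inference: ~{inference_bytes / 1e9:.0f} GB")   # ~14 GB
print(f"training:  ~{training_bytes / 1e9:.0f} GB+")   # ~112 GB + activations
</code></pre>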
]]></description><pubDate>Tue, 28 Oct 2025 20:23:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=45738621</link><dc:creator>luisml77</dc:creator><comments>https://news.ycombinator.com/item?id=45738621</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45738621</guid></item><item><title><![CDATA[New comment by luisml77 in "A definition of AGI"]]></title><description><![CDATA[
<p>Philosophers are mostly unaware of artificial neural networks. The game has changed: you can understand a lot about the human mind if you understand AI. Don't get too stuck in the past.<p>How about an objection to what I actually said? A case where someone is conscious without continuous propagation of neural signals, for instance?</p>
]]></description><pubDate>Mon, 27 Oct 2025 18:25:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=45724555</link><dc:creator>luisml77</dc:creator><comments>https://news.ycombinator.com/item?id=45724555</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45724555</guid></item><item><title><![CDATA[New comment by luisml77 in "It's insulting to read AI-generated blog posts"]]></title><description><![CDATA[
<p>Who cares about your feelings? It's a blog post.<p>If the goal is to get the job done, then use AI.<p>Do you really want to waste precious time for so little return?</p>
]]></description><pubDate>Mon, 27 Oct 2025 17:14:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=45723645</link><dc:creator>luisml77</dc:creator><comments>https://news.ycombinator.com/item?id=45723645</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45723645</guid></item><item><title><![CDATA[New comment by luisml77 in "A definition of AGI"]]></title><description><![CDATA[
<p>Isn't the brain randomly stimulated already, even while alive? Don't you think the complex reasoning comes from the neurons themselves and not from the stimulation?<p>Animals are alive and are not nearly as smart. That's because their neural networks are not as deep, not because they lack the proper chemistry or stimulation.</p>
]]></description><pubDate>Mon, 27 Oct 2025 12:07:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=45720038</link><dc:creator>luisml77</dc:creator><comments>https://news.ycombinator.com/item?id=45720038</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45720038</guid></item><item><title><![CDATA[New comment by luisml77 in "A definition of AGI"]]></title><description><![CDATA[
<p>Feeling how things function is the art of Deep Learning</p>
]]></description><pubDate>Mon, 27 Oct 2025 12:03:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=45720012</link><dc:creator>luisml77</dc:creator><comments>https://news.ycombinator.com/item?id=45720012</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45720012</guid></item><item><title><![CDATA[New comment by luisml77 in "A definition of AGI"]]></title><description><![CDATA[
<p>Complex output can sometimes give you the wrong idea, I agree. For instance, a study Anthropic did a while back showed that when an LLM was asked HOW it performed a mathematical computation (35 + 59), the explanation the LLM gave differed from the mechanistic interpretation of its layers [1]. This showed LLMs can be deceptive. But they are also trained to be deceptive: supervised fine-tuning is imitation learning, which leads the model to give the usual textbook explanation, such as "I first sum 5+9, then carry the remainder...", rather than actually examining its own past keys and values. That doesn't mean those keys and values can't be examined. They encode the intermediate results of each layer and can be probed for patterns. What the Anthropic researchers did was examine how the tokens for 35 and 59 were fused together across the layers, comparing them to other tokens such as 3, 5, and 9. For an LLM, tokens are high-dimensional concepts; that is why you can compare the vectors to each other, measure their similarity, and thereby break down the thought process (a toy version of this comparison is sketched below). This is exactly what I have been describing: underneath each token prediction, this black magic is happening, where the model fuses concepts through summation of vectors (weighted by attention scores). The merged representations are then processed by the MLPs to produce a refined, fused idea, often enriched with knowledge stored inside the network. And this continues layer after layer: a repeated combination of concepts that starts with the structure and order of the language itself, and ends with the manipulation of complex mathematical concepts almost detached from the original tokens.<p>Even though complex output can be deceptive about the underlying mental model that produced it, in my personal experience LLMs have produced output that must imply extremely complex internal behaviour, with all the characteristics I mentioned before. I frequently program with LLMs, and there is simply zero probability that their output tokens could exist WITHOUT deep thought about the unique problem I presented to them. I think anyone who has used the models as extensively as I have knows that behind each token there is this black magic.<p>To summarize, I am not naively believing everything my LLM says to me. I know rather intimately when the LLM is deceiving me and when its output implies a very advanced mental model, and that comes from personal experience playing with this technology, both inference and training.<p>[1] <a href="https://www.anthropic.com/research/tracing-thoughts-language-model" rel="nofollow">https://www.anthropic.com/research/tracing-thoughts-language...</a></p>
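<p>A toy version of the vector-comparison idea, not Anthropic's actual method (they used their own models and circuit-tracing tooling): take the hidden states for an arithmetic prompt and measure cosine similarity against digit-token embeddings. The "gpt2" checkpoint is an assumed stand-in:</p><pre><code>
# Probe which digit "concepts" the last position's hidden state is
# closest to. Requires: pip install torch transformers
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

with torch.no_grad():
    out = model(**tok("35 + 59 =", return_tensors="pt"),
                output_hidden_states=True)

# Embedding rows for the single-digit tokens, used as crude concept probes.
digit_ids = [tok(f" {d}")["input_ids"][0] for d in "0123456789"]
digit_vecs = model.get_input_embeddings().weight[digit_ids]

last_hidden = out.hidden_states[-1][0, -1]  # final layer, last position
sims = F.cosine_similarity(last_hidden.unsqueeze(0), digit_vecs)
for d, s in zip("0123456789", sims.tolist()):
    print(d, f"{s:+.3f}")
</code></pre>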
]]></description><pubDate>Mon, 27 Oct 2025 11:48:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=45719894</link><dc:creator>luisml77</dc:creator><comments>https://news.ycombinator.com/item?id=45719894</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45719894</guid></item><item><title><![CDATA[New comment by luisml77 in "A definition of AGI"]]></title><description><![CDATA[
<p>People like to focus on the differences between the brain and artificial neural networks. I believe the only thing that truly matters is that you can form complex functions out of the common neuron element. This is achieved by linking lots of them together, with each having a property known as non-linearity. These two things ensure that with neurons you can approximate just about any linear or non-linear function or behaviour (a minimal illustration is sketched below). This means you can simulate inside your network pretty much any reality within this universe, its causes and its effects. The deeper your network, the more complex the reality you can "understand", where understanding just means simulating: running inputs to get outputs in a way that matches the real phenomenon. When someone is said to be "smart", it means they possess a set of rules and functions that can very accurately predict a reality.<p>You mention scale, and while it's true that the brain has more neuron elements than any LLM, it's also true that the brain is sparser: far fewer of its neurons are active at any one time. For a fairer comparison, you can also leave the motor cortex out of the discussion and talk only about the networks that reason. I believe the scale is comparable.<p>In essence, I don't think it matters that the brain has a whole bunch of chemistry that artificial neural networks lack. The underlying deep, non-linear function-mapping capability is the same, and I believe the depth is, in both cases, comparable.</p>
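<p>The minimal illustration of the non-linearity point: a tiny MLP fits sin(x), which no purely linear model can represent. Sizes, learning rate, and step count are arbitrary choices for the sketch:</p><pre><code>
# A one-hidden-layer MLP approximating a non-linear function.
# Requires: pip install torch
import torch
import torch.nn as nn

x = torch.linspace(-3.14, 3.14, 256).unsqueeze(1)
y = torch.sin(x)

mlp = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-2)

for _ in range(2000):
    loss = nn.functional.mse_loss(mlp(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final MSE: {loss.item():.6f}")  # near zero; a linear fit cannot match it
</code></pre>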
]]></description><pubDate>Mon, 27 Oct 2025 09:08:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=45718764</link><dc:creator>luisml77</dc:creator><comments>https://news.ycombinator.com/item?id=45718764</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45718764</guid></item><item><title><![CDATA[New comment by luisml77 in "A definition of AGI"]]></title><description><![CDATA[
<p>Awareness is just continuous propagation through the neural network, be it artificial or biological. The reason thoughts just "appear" is that the brain is continuously propagating signals through its network. LLMs do this too during their decoding phase, reasoning continuously with every token they generate (a bare-bones decoding loop is sketched below). There is no difference here.<p>Then you say "we don't think most of the time using language exclusively", but neither do LLMs. What most people fail to realise is that between each generated token, black magic is happening inside the transformer layers, the same kind you describe: high-dimensional, based on complex concepts, merging ideas, fusing vectors into combined concepts, smart compression, applying abstract rules. An LLM does all of these things and more, and you can see this in how complex its output is. Or you can read Anthropic's interpretability studies on how LLMs do math underneath the transformer layers and how they manipulate information.<p>AGI is not here with LLMs, but not because they lack reasoning ability; it's something different. Here is what I think is truly missing: continuous learning, long-term memory, and unbounded, efficient context/operation. All of these are deeply tied together, and so I believe we are but a single breakthrough away from AGI.</p>
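<p>A bare-bones greedy decoding loop, to make "continuous propagation during decoding" concrete: the entire network runs one full forward pass per generated token. The "gpt2" checkpoint is an assumed stand-in model:</p><pre><code>
# Autoregressive decoding: each new token triggers another full
# propagation through every layer. Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

ids = tok("The brain and the transformer both", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits         # full forward pass over the sequence
        next_id = logits[0, -1].argmax()   # greedy pick of the next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tok.decode(ids[0]))
</code></pre>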
]]></description><pubDate>Mon, 27 Oct 2025 08:30:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=45718554</link><dc:creator>luisml77</dc:creator><comments>https://news.ycombinator.com/item?id=45718554</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45718554</guid></item><item><title><![CDATA[New comment by luisml77 in "Claude Code on the web"]]></title><description><![CDATA[
<p>Exactly. I want to go to sleep knowing an AI is at work on a computer developing my project, then wake up to the finished website/program, fully tested top to bottom: backend, frontend, UI, everything.</p>
]]></description><pubDate>Tue, 21 Oct 2025 06:09:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=45652943</link><dc:creator>luisml77</dc:creator><comments>https://news.ycombinator.com/item?id=45652943</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45652943</guid></item></channel></rss>