<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: Byamarro</title><link>https://news.ycombinator.com/user?id=Byamarro</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 21 Apr 2026 04:52:14 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=Byamarro" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by Byamarro in "Cursor 3"]]></title><description><![CDATA[
<p>How does Zed compare to, let's say, VS Code?</p>
]]></description><pubDate>Fri, 03 Apr 2026 09:09:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=47624609</link><dc:creator>Byamarro</dc:creator><comments>https://news.ycombinator.com/item?id=47624609</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47624609</guid></item><item><title><![CDATA[New comment by Byamarro in "Mathematicians disagree on the essential structure of the complex numbers (2024)"]]></title><description><![CDATA[
<p>All of logic and math is a convenience tool. There are no circles or quantities; reality just is. We created these tools because they're a convenient way to cope with the complexity of reality. There are no "objects", in the sense that a chair is just atoms arranged chair-like, and atoms are just smaller particles arranged atom-like, and yet physics operates on these objects, treating them as something that exists.<p>So we have created these mental tools called mathematics, which are heavily constrained. Then we create models that map approximately 1:1 to some patterns that exist in reality (i.e. patterns that are roughly local, so that we can call them objects). Because our mental tools have heavy constraints, and because we iteratively adjust these models to fit reality at focal points, we can approximately predict reality: we have already mapped the constraints into the model. But we shouldn't mistake the model for reality. The map is not the territory.</p>
]]></description><pubDate>Wed, 11 Feb 2026 11:21:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=46973604</link><dc:creator>Byamarro</dc:creator><comments>https://news.ycombinator.com/item?id=46973604</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46973604</guid></item><item><title><![CDATA[New comment by Byamarro in "AI will make formal verification go mainstream"]]></title><description><![CDATA[
<p>I blame syntax. It's too unorthodox nowadays. Historical reasons don't matter all that much; everything mainstream is a C-family member.</p>
]]></description><pubDate>Wed, 17 Dec 2025 09:34:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=46299923</link><dc:creator>Byamarro</dc:creator><comments>https://news.ycombinator.com/item?id=46299923</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46299923</guid></item><item><title><![CDATA[New comment by Byamarro in "AI will make formal verification go mainstream"]]></title><description><![CDATA[
<p>In fact, automated regression tests done by AI with visual capabilities may have a bigger impact than formal verification has. You can have an army of testers now, painstakingly going through every corner of your software.</p>
]]></description><pubDate>Wed, 17 Dec 2025 09:30:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=46299900</link><dc:creator>Byamarro</dc:creator><comments>https://news.ycombinator.com/item?id=46299900</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46299900</guid></item><item><title><![CDATA[New comment by Byamarro in "ChatGPT's Atlas: The Browser That's Anti-Web"]]></title><description><![CDATA[
<p>A hybrid will likely emerge. I work on a chat application, and it's pretty normal that the LLM can print custom UI as part of the chat. Things like sliders, dials, selects, and calendars are just better as a GUI in certain situations.<p>I once saw a demo of an AI photo-editing app that displays sliders next to the light sources in a photo, letting you dim or brighten each light source's intensity individually. That feels to me like the next level of user interface.</p>
]]></description><pubDate>Wed, 29 Oct 2025 13:13:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=45746412</link><dc:creator>Byamarro</dc:creator><comments>https://news.ycombinator.com/item?id=45746412</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45746412</guid></item><item><title><![CDATA[New comment by Byamarro in "EuroLLM: LLM made in Europe built to support all 24 official EU languages"]]></title><description><![CDATA[
<p>There's actually research showing that LLMs are more accurate when questions are asked in Polish: <a href="https://arxiv.org/pdf/2503.01996" rel="nofollow">https://arxiv.org/pdf/2503.01996</a></p>
]]></description><pubDate>Tue, 28 Oct 2025 19:18:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=45737591</link><dc:creator>Byamarro</dc:creator><comments>https://news.ycombinator.com/item?id=45737591</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45737591</guid></item><item><title><![CDATA[New comment by Byamarro in "Intercellular communication in the brain through a dendritic nanotubular network"]]></title><description><![CDATA[
<p>I think we can do better than this level of argumentation, regardless of whether the preceding comment had merit to it or not.</p>
]]></description><pubDate>Fri, 17 Oct 2025 19:37:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=45621071</link><dc:creator>Byamarro</dc:creator><comments>https://news.ycombinator.com/item?id=45621071</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45621071</guid></item><item><title><![CDATA[New comment by Byamarro in "Intercellular communication in the brain through a dendritic nanotubular network"]]></title><description><![CDATA[
<p>What he refers to is, more specifically, called phenomenal consciousness, AFAIK (I only skimmed it, though).</p>
]]></description><pubDate>Fri, 17 Oct 2025 19:35:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=45621055</link><dc:creator>Byamarro</dc:creator><comments>https://news.ycombinator.com/item?id=45621055</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45621055</guid></item><item><title><![CDATA[New comment by Byamarro in "The Unknotting Number Is Not Additive"]]></title><description><![CDATA[
<p>Math is about creating mental models.<p>Sometimes we want to model something in real life and try to use math for that - this is physics.<p>But even then, the model is not real; it's a model (and not even a 1:1 one, on top of that). It usually tries to capture some cherry-picked traits of reality, e.g. where a planet will be in 60 days, ignoring all its "atoms"[1]. That's because we want some predictive power, and we can't simulate the whole of reality. Wolfram calls these selective traits that can be calculated without calculating everything else "pockets of reducibility". Do they exist? IMHO no: planets don't fundamentally exist, they're mental constructs we've created for a group of particles so that our brains won't explode. If planets don't exist, neither do their positions, etc.<p>The thing about models is that they're usually simplifications of the thing they model, keeping only the parts of it that interest us.<p>Modeling is so natural for us that we often fail to realize that we're projecting. We're projecting the content of our minds onto reality, and then, out of confusion, we start to ask questions such as "does my mind's concept exist?". Your mind's concept is a neural pattern in your brain; that's it.<p>[1] Atoms are mental concepts as well, of course.</p>
]]></description><pubDate>Thu, 09 Oct 2025 08:49:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=45525142</link><dc:creator>Byamarro</dc:creator><comments>https://news.ycombinator.com/item?id=45525142</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45525142</guid></item><item><title><![CDATA[New comment by Byamarro in "I built ChatGPT with Minecraft redstone [video]"]]></title><description><![CDATA[
<p>It's a bit clickbaity, since they used command blocks, not just redstone. But it's still impressive.</p>
]]></description><pubDate>Thu, 02 Oct 2025 08:55:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=45447556</link><dc:creator>Byamarro</dc:creator><comments>https://news.ycombinator.com/item?id=45447556</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45447556</guid></item><item><title><![CDATA[New comment by Byamarro in "Next.js is infuriating"]]></title><description><![CDATA[
<p>What I've found is that Nuxt is miles ahead of Next.js in DX. In Next.js it feels like the architecture stands in your way, while in Nuxt everything just works.</p>
]]></description><pubDate>Tue, 02 Sep 2025 09:50:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=45100910</link><dc:creator>Byamarro</dc:creator><comments>https://news.ycombinator.com/item?id=45100910</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45100910</guid></item><item><title><![CDATA[New comment by Byamarro in "GPT-5"]]></title><description><![CDATA[
<p>It should be almost obligatory to state which definition of consciousness one is using whenever one talks about consciousness, because I, for example, don't see what language has to do with our ability to experience qualia.<p>Is it self-awareness? There are animals that can recognize themselves in a mirror, and I don't think all of them have a form of proto-language.</p>
]]></description><pubDate>Fri, 08 Aug 2025 14:26:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=44837377</link><dc:creator>Byamarro</dc:creator><comments>https://news.ycombinator.com/item?id=44837377</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44837377</guid></item><item><title><![CDATA[New comment by Byamarro in "GPT-5 for Developers"]]></title><description><![CDATA[
<p>The real question is its tendency toward context rot, rather than the size of its context :)
LLMs are supposedly able to load three Bibles into their context, but they forget what they were about to do after loading 600 LoC of locales.</p>
]]></description><pubDate>Thu, 07 Aug 2025 18:39:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=44828640</link><dc:creator>Byamarro</dc:creator><comments>https://news.ycombinator.com/item?id=44828640</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44828640</guid></item><item><title><![CDATA[New comment by Byamarro in "Everything You Need to Know About Grok 4"]]></title><description><![CDATA[
<p>That's LangChain terminology. LLMs are usually exposed to a set of tools, and it's usually pretty obvious which one to call, since there's only one tool that's even remotely associated with the task at hand.</p>
]]></description><pubDate>Fri, 18 Jul 2025 19:23:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=44608769</link><dc:creator>Byamarro</dc:creator><comments>https://news.ycombinator.com/item?id=44608769</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44608769</guid></item><item><title><![CDATA[New comment by Byamarro in "Andrej Karpathy: Software in the era of AI [video]"]]></title><description><![CDATA[
<p>I work in web dev, so people sometimes hook code formatting into a git commit hook, or sometimes even run it on file save.
The tests are problematic, though. If you work on a huge project, it's a no-go at all. If you work on a medium one, the tests are long enough to block you, but short enough that you can't focus on anything else in the meantime.</p>
]]></description><pubDate>Thu, 19 Jun 2025 09:51:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=44317032</link><dc:creator>Byamarro</dc:creator><comments>https://news.ycombinator.com/item?id=44317032</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44317032</guid></item><item><title><![CDATA[New comment by Byamarro in "The Illusion of Thinking: Strengths and limitations of reasoning models [pdf]"]]></title><description><![CDATA[
<p>And we have a good example of a dimwitted, brute-force process creating intelligent designs - evolution.</p>
]]></description><pubDate>Sat, 07 Jun 2025 07:12:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=44208011</link><dc:creator>Byamarro</dc:creator><comments>https://news.ycombinator.com/item?id=44208011</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44208011</guid></item><item><title><![CDATA[New comment by Byamarro in "Hyper Typing"]]></title><description><![CDATA[
<p>Fully agree. TS shouldn't throw internal types in the user's face. The error UX is abysmal.
The fact that the most popular editor - VS Code - doesn't preserve line breaks makes reading TS errors even worse. I highly recommend the Pretty TypeScript Errors extension; otherwise you're only hurting yourself working with TS.</p>
]]></description><pubDate>Mon, 19 May 2025 12:03:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=44028900</link><dc:creator>Byamarro</dc:creator><comments>https://news.ycombinator.com/item?id=44028900</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44028900</guid></item><item><title><![CDATA[New comment by Byamarro in "Hyper Typing"]]></title><description><![CDATA[
<p>TS doesn't support higher-order types. You can't return a generic and apply its parameters later; there's basically only a single level of parametrization. For example, this is not expressible:<p>```
type MyGeneric<TParam> = ...;
// Illegal: a conditional type can't return MyGeneric unapplied...
type HigherOrderGeneric<TParam> = TParam extends string ? MyGeneric : never;
// ...nor can you apply the result in a second step:
type Hey = HigherOrderGeneric<string><number>;
```<p>There are libraries that try to achieve this through some hacks, though their ergonomics are really bad.</p>
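<p>For the curious, the usual hack those libraries rely on can be sketched as a defunctionalized "lightweight higher-kinded types" encoding (fp-ts uses something in this spirit). All the names below are illustrative assumptions, not any library's real API:

```typescript
// Registry of known type constructors, keyed by a string tag.
// "Applying" a deferred constructor is a lookup into this map.
interface URItoKind<A> {
  Array: A[];
  Promise: Promise<A>;
}
type URIS = keyof URItoKind<unknown>;

// Kind<F, A> plays the role of the illegal `F<A>` for an unapplied F.
type Kind<F extends URIS, A> = URItoKind<A>[F];

// A generic can now take a type constructor as a parameter and apply
// it later -- the thing plain TS generics can't express directly.
interface Functor<F extends URIS> {
  map<A, B>(fa: Kind<F, A>, f: (a: A) => B): Kind<F, B>;
}

// Instance for the "Array" tag: Kind<"Array", A> resolves to A[].
const arrayFunctor: Functor<"Array"> = {
  map: (fa, f) => fa.map(f),
};

console.log(arrayFunctor.map([1, 2, 3], (n) => n * 2)); // [2, 4, 6]
```

The ergonomics complaint is visible even in this sketch: every constructor must be manually registered in the `URItoKind` map and addressed by a string tag instead of being passed directly.</p>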
]]></description><pubDate>Mon, 19 May 2025 11:56:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=44028847</link><dc:creator>Byamarro</dc:creator><comments>https://news.ycombinator.com/item?id=44028847</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44028847</guid></item><item><title><![CDATA[New comment by Byamarro in "Meta AI App built with Llama 4"]]></title><description><![CDATA[
<p>There are probably many people like me - I use Messenger but not Facebook. But they REALLY want people like me to use Facebook.</p>
]]></description><pubDate>Tue, 29 Apr 2025 15:29:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=43834000</link><dc:creator>Byamarro</dc:creator><comments>https://news.ycombinator.com/item?id=43834000</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43834000</guid></item><item><title><![CDATA[New comment by Byamarro in "A 10x Faster TypeScript"]]></title><description><![CDATA[
<p>I think that early in development you should be able to spam a lot of hypotheses, quickly test them, and check how people interact with your software. Whether your software makes sense is more important than whether it's fast.<p>People are also highly unpredictable, so it's usually a matter of trial and error; very often their feedback can completely erase wide sets of assumptions you were building your product around.<p>It's borderline impossible to do this with a mature product - but rewriting a mature product in something faster is not borderline impossible, it's just very hard.<p>Note that none of this applies if you're just programming something in accordance with an RFC where everything is predefined.</p>
]]></description><pubDate>Wed, 12 Mar 2025 14:34:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=43343706</link><dc:creator>Byamarro</dc:creator><comments>https://news.ycombinator.com/item?id=43343706</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43343706</guid></item></channel></rss>