<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: nis251413</title><link>https://news.ycombinator.com/user?id=nis251413</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 06 Apr 2026 08:59:38 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=nis251413" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by nis251413 in "Why does the U.S. always run a trade deficit?"]]></title><description><![CDATA[
<p>> What does this mean really? That is their means.<p>The "means" is the ability to print trillions of dollars whose value pops into existence merely because other nations, having invested in USD, do not want it to lose value.<p>The US did not care about trade deficits because it could print dollars without the dollar itself devaluing, which is what happens when any other nation prints currency without a way to back its value. Because the US's status as global hegemon is being challenged, these trade deficits may become an issue. But they were not an issue until now (and they still are not, as long as the US can keep printing dollars without triggering hyperinflation).</p>
]]></description><pubDate>Wed, 21 May 2025 11:01:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=44050182</link><dc:creator>nis251413</dc:creator><comments>https://news.ycombinator.com/item?id=44050182</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44050182</guid></item><item><title><![CDATA[New comment by nis251413 in "Don't guess my language"]]></title><description><![CDATA[
<p>Don’t special characters always go after the Latin alphabet? I think this is pretty common, and fairly expected behaviour. Of course nothing is perfect but I feel like the way Wikipedia handles it is consistent.</p>
]]></description><pubDate>Mon, 19 May 2025 13:22:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=44029603</link><dc:creator>nis251413</dc:creator><comments>https://news.ycombinator.com/item?id=44029603</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44029603</guid></item><item><title><![CDATA[New comment by nis251413 in "Show HN: Goboscript, text-based programming language, compiles to Scratch"]]></title><description><![CDATA[
<p>Next step: create a visual programming language that compiles to goboscript.</p>
]]></description><pubDate>Mon, 19 May 2025 08:53:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=44027763</link><dc:creator>nis251413</dc:creator><comments>https://news.ycombinator.com/item?id=44027763</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44027763</guid></item><item><title><![CDATA[New comment by nis251413 in "Ditching Obsidian and building my own"]]></title><description><![CDATA[
<p>Yeah, syncing text files across devices is a problem that has little to do with Obsidian or whichever editor/renderer one uses. As long as one keeps things relatively simple with plugin-specific syntax flavours, editors are interchangeable.</p>
]]></description><pubDate>Sun, 18 May 2025 23:16:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=44025024</link><dc:creator>nis251413</dc:creator><comments>https://news.ycombinator.com/item?id=44025024</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44025024</guid></item><item><title><![CDATA[New comment by nis251413 in "O(n) vs. O(n^2) Startups"]]></title><description><![CDATA[
<p>Big-O notation is about asymptotics. You have to approach something, and typically some infinity is involved; if it is not, you can just compute things instead of giving asymptotic approximations. Otherwise you get costs like 10^10*n versus 0.001*n^2-10^20*n, where the big-O asymptotics at infinity are useless for smaller numbers. I understand what OP is trying to say, but that is not really a good framing, for several reasons. If you want to talk about a finite period of time, use a regression model, not asymptotics. That is partly a personal preference about using mathematics colloquially, but here it is also a poor metaphor that does not correspond to what the mathematical theory refers to. And I am not at all sure it is well understood "what part of the graph we're talking about", in the sense that the modern organization of the economy is far from acknowledging that resources on earth are actually finite. Talking about O(n), O(n^2) or O(exp(n)) as if growth can be indefinite comes with a specific kind of mindset, and the framing reflects that mindset.</p>
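To make the constants point concrete, here is a quick Python sketch (the coefficients are made up for illustration, echoing the hypothetical 10^10*n vs 0.001*n^2 costs mentioned above): the O(n) function dominates the O(n^2) one over any realistic range, even though the asymptotics say the opposite.

```python
# Hypothetical cost functions: asymptotically g dominates f,
# but the constants decide which is bigger over any finite range.

def f(n):
    # O(n) with a huge constant factor
    return 1e10 * n

def g(n):
    # O(n^2) with a tiny constant factor
    return 0.001 * n ** 2

# f stays above g until roughly n = 1e13 (where 1e10*n = 0.001*n^2).
for n in (10 ** 6, 10 ** 12, 10 ** 14):
    print(n, "f > g:", f(n) > g(n))
```

Regression on measured costs over the range you actually care about would capture this; the big-O labels alone would not.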
]]></description><pubDate>Sun, 18 May 2025 20:35:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=44024142</link><dc:creator>nis251413</dc:creator><comments>https://news.ycombinator.com/item?id=44024142</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44024142</guid></item><item><title><![CDATA[New comment by nis251413 in "O(n) vs. O(n^2) Startups"]]></title><description><![CDATA[
<p>The sigmoid (or rather the logistic function specifically) <i>is</i> exponential until you get close to the "turning point" (more precisely, its growth is bounded below by an exponential there). It is as you approach that point that it becomes linear, and after it the growth decays.<p>However, you are sort of right that "churn" does not necessarily have much to do with the curve being sigmoid, because it will be sigmoid anyway. Churn may bring the turning point earlier if the churn rate surpasses user growth, but that is probably not important here.</p>
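A small sketch of the claim, using the standard logistic function (nothing specific to the article): far below the midpoint, each unit step multiplies the value by roughly e, i.e. near-exponential growth, while past the midpoint the step ratio collapses toward 1.

```python
import math

def logistic(t):
    # standard logistic function, midpoint at t = 0
    return 1.0 / (1.0 + math.exp(-t))

# Ratio between consecutive unit steps: close to e (2.718...) far left
# of the midpoint (exponential regime), close to 1 far right (saturation).
early_ratio = logistic(-9) / logistic(-10)
late_ratio = logistic(10) / logistic(9)
print(early_ratio, late_ratio)
```

So whether a growth curve "looks exponential" is just a question of where on the logistic you are sampling it.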
]]></description><pubDate>Sun, 18 May 2025 20:13:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=44024004</link><dc:creator>nis251413</dc:creator><comments>https://news.ycombinator.com/item?id=44024004</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44024004</guid></item><item><title><![CDATA[New comment by nis251413 in "O(n) vs. O(n^2) Startups"]]></title><description><![CDATA[
<p>Well, you may want to increase complexity in some contexts, eg in cryptography.</p>
]]></description><pubDate>Sun, 18 May 2025 19:20:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=44023710</link><dc:creator>nis251413</dc:creator><comments>https://news.ycombinator.com/item?id=44023710</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44023710</guid></item><item><title><![CDATA[New comment by nis251413 in "Coding without a laptop: Two weeks with AR glasses and Linux on Android"]]></title><description><![CDATA[
<p>Yeah, I also have amblyopia and am curious about this. How does it work with the two lenses/screens? I assume that if someone with normal binocular eyesight closes one eye, they can still see the screen on the open-eye side normally? In some sense I would imagine this is just one less problem to solve (the vergence-accommodation conflict), if one does not care about stereoscopy.<p>I tried the Apple Vision Pro once and it seemed fine, amblyopia-wise at least. It was too brief, though, to know for sure what using it for longer would be like.</p>
]]></description><pubDate>Sun, 18 May 2025 08:43:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=44019921</link><dc:creator>nis251413</dc:creator><comments>https://news.ycombinator.com/item?id=44019921</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44019921</guid></item><item><title><![CDATA[New comment by nis251413 in "I don't like NumPy"]]></title><description><![CDATA[
<p>I mean, the article and discussion are about numpy's syntax for vectorised code and the problems people have with it. Many comments make comparisons with matlab, and I am pointing out that a language that lets you use arrays and write vectorised code is not the same thing as an array language. Writing vectorised code in a language where everything is based on arrays is in general more natural. Variable definitions are simpler (you do not specify what is an array and what is not, because everything is an array), and operations tend to work more consistently. Eg in matlab, operations are done column-wise by default, because that is how the language is designed and works internally. Functions acting on 2+D arrays act column-wise by default; it does not depend on other context, and they are designed that way to be faster, not merely consistent to the user. Consistency comes from how arrays are represented in memory and the need for fast code, not just from an arbitrary design choice at the highest level.<p>Most developers do not touch array languages, but I guess most developers in general do not (need to) vectorise code this way and avoid loops (because they work in other problem domains, use lower-level languages, etc). If anything, not all problems can be vectorised anyway (at least not elegantly). But if one writes vectorised code, doing it in an array language makes more sense.</p>
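For anyone without matlab at hand, here is the numpy side of the contrast sketched in Python (matlab's sum(A) reduces column-wise by default; numpy's np.sum flattens the whole array unless you spell out an axis per call):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])

print(np.sum(A))          # flattens everything: 10
print(np.sum(A, axis=0))  # column-wise, like matlab's default: [4 6]
print(np.sum(A, axis=1))  # row-wise: [3 7]
```

In an array language the reduction direction is fixed by the language's own array model; in numpy it is a per-call keyword the user has to keep track of.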
]]></description><pubDate>Fri, 16 May 2025 23:18:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=44010654</link><dc:creator>nis251413</dc:creator><comments>https://news.ycombinator.com/item?id=44010654</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44010654</guid></item><item><title><![CDATA[New comment by nis251413 in "I don't like NumPy"]]></title><description><![CDATA[
<p>That's nice, but as I understand it, it only works if you stick to numpy-only functions. You should avoid operators that also have a base-Python meaning, eg +, *, etc, because those get interpreted differently. Eg `A + x` gives<p><pre><code>    [[1, 2], [3, 4], [5], [6]] 
</code></pre>
instead of<p><pre><code>    array([[ 6,  7],[ 9, 10]])
</code></pre>
You have to keep track of the context to know what you can and cannot do, which is not ideal.</p>
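A self-contained sketch of the two behaviours above, picking example values for A and x that are consistent with the outputs shown (the originals are from the parent comment, so these are assumed):

```python
import numpy as np

# Plain Python lists: + means concatenation.
A_list = [[1, 2], [3, 4]]
x_list = [[5], [6]]
print(A_list + x_list)  # [[1, 2], [3, 4], [5], [6]]

# NumPy arrays: + broadcasts the (2, 1) column x against (2, 2) A,
# adding 5 to the first row and 6 to the second.
A = np.array(A_list)
x = np.array(x_list)
print(A + x)  # [[ 6  7]
              #  [ 9 10]]
```

Same expression, two unrelated results, depending entirely on the types flowing into it.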
]]></description><pubDate>Fri, 16 May 2025 22:10:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=44010217</link><dc:creator>nis251413</dc:creator><comments>https://news.ycombinator.com/item?id=44010217</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44010217</guid></item><item><title><![CDATA[New comment by nis251413 in "I don't like NumPy"]]></title><description><![CDATA[
<p>Not that I know of, and I did not claim it does.</p>
]]></description><pubDate>Fri, 16 May 2025 18:05:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=44008253</link><dc:creator>nis251413</dc:creator><comments>https://news.ycombinator.com/item?id=44008253</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44008253</guid></item><item><title><![CDATA[New comment by nis251413 in "After months of coding with LLMs, I'm going back to using my brain"]]></title><description><![CDATA[
<p>Even a single person may do different things that change whether using an LLM helps or not.<p>Much of my time is spent writing code, thinking not about the general overview but about the code I am about to write itself. If I actually care about the code (eg I am not going to throw it away by the end of the day), that means thinking about how to make it as concise and understandable to others (including future me) as possible, which cases to handle, and which choices to make so that my code remains maintainable after a few days. It may also mean refactoring previous code, with all the decisions that go with that. LLM-generated code, imo, is too bloated; whether they put in things like asserts is always hit or miss relative to what actually matters. Their comments tend to be completely trivial instead of stating the intention of the code, and though I have put some effort into getting them to use a coding style similar to mine, they often fail there too. In such cases I only use them when the code can be isolated well enough, eg a straightforward auxiliary function here and there that will be called in a few places, where what happens inside matters less. There are just too many decisions at each step that LLMs are not great at resolving, ime.<p>I depend more on LLMs when I care less about the maintainability of the code itself and more about getting it done as fast as possible, or when I am just exploring and do not care about the code at all. For example, I may be in a rush to get something done and deal with the rest later (granted they can actually do the task, else I am losing time). But when I tried this for my main work, it soon became a mess that took more time to fix, even if they seemed to speed me up initially. Granted, if my field were different and the languages I use were more popular and better represented in training data, I might have found more uses for them, but I still think that past some point it becomes unsustainable to leave decisions to them.</p>
]]></description><pubDate>Fri, 16 May 2025 16:07:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=44007099</link><dc:creator>nis251413</dc:creator><comments>https://news.ycombinator.com/item?id=44007099</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44007099</guid></item><item><title><![CDATA[New comment by nis251413 in "I don't like NumPy"]]></title><description><![CDATA[
<p>The syntax of actual array languages can be beautifully concise and expressive. You can express mathematical formulas in ways that make sense when you read a single line, and once you get used to the notation and some initially unintuitive quirks (from a programming-background perspective), you can write in very few lines what would otherwise take several lines of rather ugly, less readable code.<p>In my view, python+numpy is not actually an array language. Numpy is a library that adds vectorised operations to python in order to help with speed. That is different: it does not (intend to) bring the advantages that array-language syntax has, even if it were a bit more consistent.</p>
]]></description><pubDate>Thu, 15 May 2025 20:15:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=43998865</link><dc:creator>nis251413</dc:creator><comments>https://news.ycombinator.com/item?id=43998865</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43998865</guid></item><item><title><![CDATA[New comment by nis251413 in "I've acquired a new superpower"]]></title><description><![CDATA[
<p>Yeah that's me. I lack stereoscopic vision so such tricks or 3d glasses etc do not work.</p>
]]></description><pubDate>Fri, 10 Jan 2025 18:57:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=42658779</link><dc:creator>nis251413</dc:creator><comments>https://news.ycombinator.com/item?id=42658779</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42658779</guid></item><item><title><![CDATA[New comment by nis251413 in "A messy experiment that changed how I think about AI code analysis"]]></title><description><![CDATA[
<p>It depends also on what you want to get from the article. I usually focus on the methods section to really understand what the paper did (I mostly read experimental papers in cognitive science/neuroscience). I may read parts of the results, but hopefully there are figures that summarize them so I do not have to read much. I rarely read the conclusion section, and in general I do not care much about how the authors interpret their results, because people can make up anything, and a reader who skips the methods can be really misled by the authors' biases.</p>
]]></description><pubDate>Sun, 05 Jan 2025 18:44:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=42603935</link><dc:creator>nis251413</dc:creator><comments>https://news.ycombinator.com/item?id=42603935</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42603935</guid></item></channel></rss>