<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: accounting2026</title><link>https://news.ycombinator.com/user?id=accounting2026</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 18 Apr 2026 14:37:19 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=accounting2026" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by accounting2026 in "Tinnitus Is Connected to Sleep"]]></title><description><![CDATA[
<p>Just wondering: do you think you got tinnitus then, or was it already there and you suddenly started noticing it? I got mine around 20 years ago, and I'm honestly unsure which it was, because it became worse and worse the more I focused on it. Eventually it subsided. I can still hear it if I listen for it (as I just did now, and I can hear a distinct 'bruising' kind of sound), but literally months go by between times I even think of it or notice it. There have been studies showing that lots of 'normal' people notice tinnitus when they enter a sound-proof room.
What helped me was just taking long showers. I literally couldn't hear a thing during the shower and for some time after, and it seemed the 'drowned out' period would last longer each time. Just knowing something could stop it somehow made me ease into it more, and maybe reduced the fear that had been programmed into my brain. I also took omega-3 and ginkgo biloba (just low doses) and felt like they had some effect.
Was there any trigger and how 'loud' do you perceive it?</p>
]]></description><pubDate>Sat, 07 Mar 2026 15:19:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47288399</link><dc:creator>accounting2026</dc:creator><comments>https://news.ycombinator.com/item?id=47288399</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47288399</guid></item><item><title><![CDATA[New comment by accounting2026 in "Tell HN: I'm 60 years old. Claude Code has re-ignited a passion"]]></title><description><![CDATA[
<p>While I can see your point, I also think it is not directly relevant to OP. Firstly, I don't think OP meant that people are idiots for using LLMs; it was just a way of saying that skill is no longer required, so even idiots can do it, whereas it used to be something that required high skill.<p>As for the comparisons: some are partly comparable to the current situation, but there are some differences as well. Sure, books and online content enabled others to join, thereby reducing the "moat" for those who built careers on esoteric knowledge. But it didn't make things _that_ easy; it still required years of invested time to become a good developer. Also, it happened very gradually, while the developer pie was growing and the range of tech was growing, so developers who kept on top of technology (like OP did) could still be valuable. Of course, no one knows fully how it will play out this time around; maybe the pie will get even bigger, maybe there's still room for lots of developers and the only difference is that the tedious work is done. Sure, then it is comparable. But let's be honest, this has a very real chance of being different (humans inventing AI surely is something special!) and could result in skill-sets collapsing in value in record time. And perhaps worse, without opening new doors. Sure, new types of jobs may appear, but they may be so different that they are essentially completely different careers. It is not like in the past, where you just needed to learn a new programming language.</p>
]]></description><pubDate>Sat, 07 Mar 2026 14:53:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=47288170</link><dc:creator>accounting2026</dc:creator><comments>https://news.ycombinator.com/item?id=47288170</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47288170</guid></item><item><title><![CDATA[New comment by accounting2026 in "Tell HN: I'm 60 years old. Claude Code has re-ignited a passion"]]></title><description><![CDATA[
<p>I started at 16, 44M now, but I also remember all that COM stuff, writing shell extensions for Windows 95 and the like, and reading about it in the press (MSDN Magazine?). It was the new AI then ;)<p>I think you really hit the jackpot because you got a full career out of it, saw an amazing evolution, etc. So you can hopefully enjoy the ride now more as a spectator, without the fear of being personally affected by job displacement. Enjoy the retirement!</p>
]]></description><pubDate>Sat, 07 Mar 2026 14:35:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47288016</link><dc:creator>accounting2026</dc:creator><comments>https://news.ycombinator.com/item?id=47288016</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47288016</guid></item><item><title><![CDATA[New comment by accounting2026 in "Tell HN: I'm 60 years old. Claude Code has re-ignited a passion"]]></title><description><![CDATA[
<p>Same boat (though 44M). I don't think it has become less fun; on the contrary, it can help with the stuff that was trivial but could still take time to get right. Now it can crank out that stuff, often correctly on the first try. Of course I have the same fears about job security as everyone else, and it is sad to see something you were good at being taken over by machines, but it is not because I enjoy the work itself less, quite the contrary.</p>
]]></description><pubDate>Sat, 07 Mar 2026 14:32:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47287989</link><dc:creator>accounting2026</dc:creator><comments>https://news.ycombinator.com/item?id=47287989</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47287989</guid></item><item><title><![CDATA[New comment by accounting2026 in "Altman on AI energy: it also takes 20 years of eating food to train a human"]]></title><description><![CDATA[
<p>Your numbers also include how much is used for transport etc. Sam's numbers were about what the human body itself uses for training, hence why I used the caloric consumption.</p>
]]></description><pubDate>Tue, 24 Feb 2026 12:50:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47136452</link><dc:creator>accounting2026</dc:creator><comments>https://news.ycombinator.com/item?id=47136452</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47136452</guid></item><item><title><![CDATA[New comment by accounting2026 in "Altman on AI energy: it also takes 20 years of eating food to train a human"]]></title><description><![CDATA[
<p>I didn't read/hear it as reducing human life to 'training energy', but I don't like the comparison at the technical level.<p>Firstly, the math isn't even close. A human being consumes maybe 15 MWh of food energy from years 0 to 20. Modern frontier models take on the order of 100,000 MWh to train. That's a difference of several thousand times. Furthermore, the human is actively doing 'inference' (living, acting, producing) during those 20 years of training and is also doing lots of non-brain stuff.
Besides the energy math, it's an apples-to-oranges comparison. A human brain doesn't start out as a blank slate; it has billions of years of evolutionary priors for language and spatial reasoning that LLMs have to teach themselves from scratch, which could explain why a human can do some things more cheaply. Also, the learning material available to a human is inherently created to be easily ingested by a human brain, whereas a blank LLM needs to build the capacity to process that data.
Altman seems to hint at a comparison to the whole of human evolution, but that seems unfair in the other direction, because humans and human evolution had to make discoveries from scratch, by trial and error, whereas LLMs get to ingest the final "good stuff". But either way you slice it, it's just not a good comparison, though not an 'inhuman' or immoral one.</p>
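For the curious, the back-of-envelope math behind those figures can be sketched like this (all inputs are ballpark assumptions, not sourced numbers: ~2,000 kcal/day of food and ~100,000 MWh for a frontier training run):

```python
# Rough comparison of human 'training' energy vs. an assumed LLM training run.
# Both inputs are ballpark guesses for illustration only.
KCAL_TO_KWH = 1.163e-3  # 1 kcal = 0.001163 kWh

human_kwh = 2000 * 365 * 20 * KCAL_TO_KWH  # 20 years of eating 2,000 kcal/day
human_mwh = human_kwh / 1000               # roughly 17 MWh

llm_mwh = 100_000                          # assumed frontier training run

print(f"human: {human_mwh:.0f} MWh, ratio: {llm_mwh / human_mwh:,.0f}x")
```

With these inputs the ratio lands in the several-thousand-times range, consistent with the comparison above.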
]]></description><pubDate>Sun, 22 Feb 2026 18:30:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47113413</link><dc:creator>accounting2026</dc:creator><comments>https://news.ycombinator.com/item?id=47113413</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47113413</guid></item><item><title><![CDATA[New comment by accounting2026 in "The Singularity will occur on a Tuesday"]]></title><description><![CDATA[
<p>No one ever made the claim it was magic, not even remotely. Regarding the rest of your commentary: a) The original claim was that LLMs were not understood and are a black box. b) Then someone claims that this is not true, that they know well how LLMs work, and that it is simply due to questions & answers being in close textual proximity in the training data. c) I then claim this is a shallow explanation, because you then additionally need to invoke a huge abstraction network, which is itself a black box. d) You seem to agree with this while at the same time saying I misrepresented "b", which I don't think I did. They really claimed they understood it and only offered this textual-proximity thing.<p>In general, every attempted explanation of LLMs that appeals to "[just] predicting the next token" is thought-terminating and automatically invalid as an explanation. Why? Because it confuses the objective function with the result. It adds exactly zero over saying "I know how a chess engine works, it just predicts the next move and has been trained to predict the next move" or "A talking human just predicts the next word, as it was trained to do". It says zero about how this is done internally in the model. You could have a physical black box predicting the next token, and inside you could have simple frequentist tables, or a human brain, or an LLM. In all cases you could say the box is predicting the next token, and if any training was involved you could say it was trained to predict the next token.</p>
]]></description><pubDate>Wed, 11 Feb 2026 07:31:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=46971987</link><dc:creator>accounting2026</dc:creator><comments>https://news.ycombinator.com/item?id=46971987</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46971987</guid></item><item><title><![CDATA[New comment by accounting2026 in "The Singularity will occur on a Tuesday"]]></title><description><![CDATA[
<p>If such a simplistic explanation were true, LLMs would only be able to answer things that had been asked before, where at least a 'fuzzy' textual question/answer match was available. This is clearly not the case. In practice you can prompt the LLM with such a large number of constraints that the combinatorial explosion ensures no one has asked that before, and you will still get a relevant answer combining all of them. Think of combinations of features in a software request, including making some module that fits into your existing system (for which you have provided source) along with a list of requested features. Or questions you form based on a number of life experiences and interests that, combined, are unique to you. You can switch programming language, human language, writing style, or level as you wish, and discuss it in super esoteric languages or Morse code. So are we to believe these answers appear just because there happened to be similar questions in the training data where a suitable answer followed? Even if for the sake of argument we accept this explanation by "proximity of question/answer", it is immediately clear that it would have to rely on extreme levels of abstraction and mixing and matching going on inside the LLM. And it is then this process that we need to explain, whereas the textual proximity you invoke relies on it rather than explaining it.</p>
]]></description><pubDate>Tue, 10 Feb 2026 21:06:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=46966927</link><dc:creator>accounting2026</dc:creator><comments>https://news.ycombinator.com/item?id=46966927</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46966927</guid></item><item><title><![CDATA[New comment by accounting2026 in "Ode to the AA Battery"]]></title><description><![CDATA[
<p>I have a weather station that takes two 1.2 V cells. The LCD screen is a bit dim compared to when it's used with fresh 1.5 V alkalines. Other than that, most things handle the 1.2 V well. But they'd better, because alkalines drop to 1.2 V with >50% of their capacity left.</p>
]]></description><pubDate>Fri, 30 Jan 2026 21:43:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=46830338</link><dc:creator>accounting2026</dc:creator><comments>https://news.ycombinator.com/item?id=46830338</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46830338</guid></item><item><title><![CDATA[New comment by accounting2026 in "Ode to the AA Battery"]]></title><description><![CDATA[
<p>At our high school we each had to buy a TI-83 calculator kit, and it came with one of those Rayovac alkaline chargers.<p>I also had a Seitek Eco charger that could charge "normal" alkalines. But you had to be careful not to discharge them too deeply. It seemed kind of pointless compared to rechargeables, though the capacity of NiCd/NiMH was way lower back then (I remember when NiMH AA batteries at 700 mAh were considered really high!). And perhaps for some devices it was great that they held 1.5 V.</p>
]]></description><pubDate>Fri, 30 Jan 2026 21:40:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=46830297</link><dc:creator>accounting2026</dc:creator><comments>https://news.ycombinator.com/item?id=46830297</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46830297</guid></item><item><title><![CDATA[New comment by accounting2026 in "Ode to the AA Battery"]]></title><description><![CDATA[
<p>About 15 years ago I was writing software for an embedded device made by another company, and they sent us a unit for testing. It had a small rectangular rechargeable lithium battery that was charged via a DC jack.<p>At one point I hadn’t kept it charged, the battery went completely flat, and after that it would no longer charge at all. When I called the company, they said the battery was now too deeply discharged and required an “intelligent” charger to revive it. They sent a charger with a slot for the bare battery; some LEDs blinked in various patterns for a while, and eventually normal charging resumed.<p>I’ve always wondered what that charger actually did, that the built-in charger was not capable of. Was it performing some kind of analysis to decide whether the battery was safe to recover (e.g. after deep discharge), or was it simply applying some initial charge ignoring the battery’s protection circuitry (and at what risk)?</p>
]]></description><pubDate>Fri, 30 Jan 2026 21:37:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=46830262</link><dc:creator>accounting2026</dc:creator><comments>https://news.ycombinator.com/item?id=46830262</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46830262</guid></item><item><title><![CDATA[New comment by accounting2026 in "Television is 100 years old today"]]></title><description><![CDATA[
<p>Yes, I knew what you meant, and fully agree. It is fascinating that TV is even possible out of all these rather simple and bulky analog components. Even the first color TVs were built with vacuum tubes and no transistors.<p>As I recall, there are all kinds of hacks in the design to keep them cheap. For instance, letting the flyback transformer that produces the required high voltages operate at the same frequency as the horizontal scan rate (~15 kHz), so that mechanism essentially serves double duty. The same was even seen in microcomputers, where the crystal needed for TV output was also used for the microprocessor, meaning that e.g. a "European" Commodore 64 with PAL was actually a few percent slower than an American C64 with NTSC. And other crazy things like that.</p>
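The PAL/NTSC speed difference falls straight out of the color-subcarrier math. A sketch (crystal values and divisors are the commonly cited C64 figures, quoted from memory, so treat them as assumptions):

```python
# C64 CPU clocks derived from the video crystal (commonly cited values;
# the exact crystal frequencies and divisors are assumptions here).
ntsc_crystal = 14.31818e6   # 4x the NTSC colorburst (3.579545 MHz)
pal_crystal  = 17.734475e6  # 4x the PAL subcarrier (4.43361875 MHz)

ntsc_cpu = ntsc_crystal / 14   # roughly 1.023 MHz
pal_cpu  = pal_crystal / 18    # roughly 0.985 MHz

slowdown = 1 - pal_cpu / ntsc_cpu
print(f"PAL C64 is about {slowdown:.1%} slower")  # a few percent
```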
]]></description><pubDate>Tue, 27 Jan 2026 20:08:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=46785737</link><dc:creator>accounting2026</dc:creator><comments>https://news.ycombinator.com/item?id=46785737</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46785737</guid></item><item><title><![CDATA[New comment by accounting2026 in "Television is 100 years old today"]]></title><description><![CDATA[
<p>Actually, the voltages had to be raised because of the shadow mask, and this rise in voltage meant you were now in x-ray territory, which wasn't the case before. The infamous problems with TVs emitting x-rays, and the associated recalls, involved the early color TVs. And it wasn't so much the picture tube, but the shunt regulators etc. in the power supply, which were themselves vacuum tubes. If you removed the protective cans around those, you would be exposed to strong radiation. Most of that went away when TVs were transistorized, so the high-voltage circuits no longer involved vacuum tubes.</p>
]]></description><pubDate>Tue, 27 Jan 2026 18:13:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=46783927</link><dc:creator>accounting2026</dc:creator><comments>https://news.ycombinator.com/item?id=46783927</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46783927</guid></item><item><title><![CDATA[New comment by accounting2026 in "Television is 100 years old today"]]></title><description><![CDATA[
<p>Yes, agreed! And while it is not quantized as such, there is an element of a semi-digital protocol to it. The concept of a "scanline" is quantized, and there are "protocols" for indicating when a line ends, when a picture ends, etc. that the receiver and sender need to agree on... plus "colorburst" packets per line, delay lines, and all kinds of clever techniques, so it is extremely complicated. Many things were necessary to overcome distortion and also to ensure backwards compatibility: first, how do you fit in the color so a monochrome TV can still show it? Later, how do you make it 16:9 so it can still show on a 4:3 TV (which it could!)?</p>
]]></description><pubDate>Tue, 27 Jan 2026 16:35:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=46782286</link><dc:creator>accounting2026</dc:creator><comments>https://news.ycombinator.com/item?id=46782286</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46782286</guid></item><item><title><![CDATA[New comment by accounting2026 in "Television is 100 years old today"]]></title><description><![CDATA[
<p>Yes, that is called "PAL-S". But the system was designed to use the delay-line method, and it was employed from the inception (first broadcast 1967).</p>
]]></description><pubDate>Tue, 27 Jan 2026 16:31:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=46782233</link><dc:creator>accounting2026</dc:creator><comments>https://news.ycombinator.com/item?id=46782233</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46782233</guid></item><item><title><![CDATA[New comment by accounting2026 in "Television is 100 years old today"]]></title><description><![CDATA[
<p>This stores a whole scanline <a href="https://www.youtube.com/watch?v=bsk4WWtRx6M" rel="nofollow">https://www.youtube.com/watch?v=bsk4WWtRx6M</a>. This or something similar was in almost any decent color TV except for the oldest.</p>
]]></description><pubDate>Tue, 27 Jan 2026 14:53:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=46780756</link><dc:creator>accounting2026</dc:creator><comments>https://news.ycombinator.com/item?id=46780756</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46780756</guid></item><item><title><![CDATA[New comment by accounting2026 in "Television is 100 years old today"]]></title><description><![CDATA[
<p>Yes, and x-rays too! Partly from the main picture tube itself (though often shielded), but historically the main problem was actually the vacuum rectifiers used to generate the required high voltages. Those vacuum tubes essentially became x-ray bulbs and had to be shielded. This problem arose as the first color TVs appeared in the late 60s: color required higher voltages for the same brightness, due to the introduction of a mask that absorbed a lot of the energy. As a famous example, certain GE TVs would emit a strong beam of x-rays, but it was directed downwards, so it would mostly expose someone beneath the TV. Reportedly a few models could emit 50,000 mR/hr at 9 inches distance <a href="https://www.nytimes.com/1967/07/22/archives/owners-of-9000-color-tv-sets-warned-of-rays-us-asserts-unlocated-ge.html" rel="nofollow">https://www.nytimes.com/1967/07/22/archives/owners-of-9000-c...</a> which is actually quite a lot (enough for radiation sickness after a few hours). All were recalled, of course!</p>
]]></description><pubDate>Tue, 27 Jan 2026 12:10:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=46778938</link><dc:creator>accounting2026</dc:creator><comments>https://news.ycombinator.com/item?id=46778938</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46778938</guid></item><item><title><![CDATA[New comment by accounting2026 in "Television is 100 years old today"]]></title><description><![CDATA[
<p>> The image is not stored at any point.<p>Just wanted to add one thing, not as a correction but because I learned it recently and find it fascinating. PAL televisions (the color TV standard in Europe) actually do store one full horizontal scanline at a time, before any of it is drawn on the screen. This is due to a clever encoding in this format where the TV needs to average two successive scanlines (phase-shifted relative to each other) to draw them. Supposedly this cancels out some forms of distortion. It is quite fascinating that this was even possible with analogue technology. The line is stored in a delay line for 64 microseconds.
See e.g.: <a href="https://www.youtube.com/watch?v=bsk4WWtRx6M" rel="nofollow">https://www.youtube.com/watch?v=bsk4WWtRx6M</a></p>
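The averaging trick can be sketched numerically. This is a toy model: real PAL averages the modulated chroma signal via the 64 µs delay line, not bare phase numbers, but the cancellation idea is the same.

```python
# Toy model of PAL's delay-line trick: the chroma phase alternates sign on
# successive lines, so a constant phase error that adds on one line
# subtracts on the next, and averaging the two lines cancels it.
true_phase = 30.0   # degrees: the hue we want to recover
error = 7.5         # constant phase distortion in the transmission path

line_n      = true_phase + error    # error adds on a normal line
line_n_plus = true_phase - error    # error subtracts on the phase-alternated line

recovered = (line_n + line_n_plus) / 2
print(recovered)  # 30.0: the hue error cancels out
```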
]]></description><pubDate>Mon, 26 Jan 2026 23:59:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=46773535</link><dc:creator>accounting2026</dc:creator><comments>https://news.ycombinator.com/item?id=46773535</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46773535</guid></item><item><title><![CDATA[New comment by accounting2026 in "10 years of personal finances in plain text files"]]></title><description><![CDATA[
<p>For me, I wanted to know how much I'm spending in different categories: not just for the phone bill, where you can just see what it is every month, but for more irregular expenses: travel, repairs, appliances etc.<p>So why is this interesting to know? It gives you a better picture of where your money goes, what your burn rate is, how much of your spend is on mandatory stuff (food, rent/mortgage etc.) and how much is on things you could potentially cut if need be. It makes it possible to know how far you are from various levels (/lifestyles) of financial independence. I have a pretty good idea that I could make it to retirement even if I permanently lost my job, and I know roughly what that lifestyle would be like.<p>This also makes it possible to do proper budgeting, by setting aside realistic amounts even for irregular expenses, so you're essentially 'saving up' for them. Done right, your "expenses" (money set aside in the budget) are exactly the same month after month, even though your actual money outflows in some months may be much higher. This is true both for irregular expenses like travel and for semi-regular expenses that may come due only annually or every three months (mortgage etc.).<p>For me personally, it made me feel much more comfortable about spending larger amounts on travel etc., because I've already saved up the money via budgeting, so I never see it as savings disappearing. Savings are by definition the difference between monthly income and the money set aside in the budget. I make sure to be a bit conservative, so I actually also build a buffer in the savings account, usable for truly extraordinary expenses.<p>But your current situation sounds healthy, and if you have ample cushion, you're confident it will continue that way, and you have no desire to change it yourself, the above is of course less relevant.</p>
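The smoothing idea in a nutshell (category names and amounts are invented for illustration):

```python
# Turning irregular yearly expenses into a constant monthly 'set aside'
# amount. All figures here are made up.
annual_irregular = {"travel": 2400, "repairs": 600, "insurance": 1200}

monthly_set_aside = {name: amount / 12 for name, amount in annual_irregular.items()}
total_monthly = sum(monthly_set_aside.values())

# The budgeted 'expense' is the same every month, even though the actual
# outflows cluster in a few months of the year.
print(total_monthly)  # 350.0
```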
]]></description><pubDate>Sat, 03 Jan 2026 23:41:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=46483040</link><dc:creator>accounting2026</dc:creator><comments>https://news.ycombinator.com/item?id=46483040</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46483040</guid></item><item><title><![CDATA[New comment by accounting2026 in "10 years of personal finances in plain text files"]]></title><description><![CDATA[
<p>I was confused about it for a long time, though. I recall the confusion centered around terminology. I'm from Denmark, and there we say "aktiver" (assets) must equal "passiver" (liabilities), which seemed very unintuitive. Why would they be equal, and exactly equal (down to the last cent)?
I mean, what if I buy a piece of candy and then eat it? Even if we accept that buying it turns it into an asset, wouldn't eating it cause a reduction in assets and thereby make the two sides diverge? Surely when eaten, something is lost, so again how can they be equal?
I saw the 'light' when I started tracking my own finances in a spreadsheet. I started tracking the value of all accounts in a separate sheet, summing them to see how much money I had. Then, I started tracking all expenses to see in detail how much I spent each month.
Eventually, it dawned on me to connect these subsheets to check that I hadn't forgotten an expense or typed a wrong number. It worked like this: every month I summed up the accounts and made a note. At the end of the next month, I would update the account values with fresh balances from the bank, and then verify that the change in total balance exactly matched that month's income minus spending.
Then I realized: hey, maybe this is what double entry accounting is all about. It is mostly a terminology question; the "assets equal liabilities" phrasing is what trips people up, and the candy mystery is of course explained by just having various entries accounting for this.
While I do know graph theory, I don't see the relation to double entry accounting. I can't imagine how introducing graph theory simplifies it, though maybe I don't truly understand it after all ;)</p>
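That monthly consistency check is essentially this (account names and numbers invented):

```python
# Monthly sanity check: the change in total account balances must equal
# recorded income minus recorded expenses. All numbers are invented.
balances_start = {"checking": 5000.0, "savings": 12000.0}
balances_end   = {"checking": 5400.0, "savings": 12100.0}

income   = 3000.0
expenses = 2500.0   # sum of all tracked spending for the month

delta = sum(balances_end.values()) - sum(balances_start.values())
assert delta == income - expenses, "an expense is missing or mistyped"
print("books balance, delta =", delta)  # 500.0
```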
]]></description><pubDate>Sat, 03 Jan 2026 23:28:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=46482913</link><dc:creator>accounting2026</dc:creator><comments>https://news.ycombinator.com/item?id=46482913</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46482913</guid></item></channel></rss>