<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: abstractcontrol</title><link>https://news.ycombinator.com/user?id=abstractcontrol</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 13 Apr 2026 13:42:20 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=abstractcontrol" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by abstractcontrol in "Ask HN: What Are You Working On? (March 2026)"]]></title><description><![CDATA[
<p>A stock market daytrading system, which I am live coding on my YouTube channel: <a href="https://www.youtube.com/playlist?list=PL04PGV4cTuIXoK6yBAFzhgBYq0uMCfeNo" rel="nofollow">https://www.youtube.com/playlist?list=PL04PGV4cTuIXoK6yBAFzh...</a><p>Opus has been amazingly useful at answering the various statistics questions I had for it, and my current idea is a model inspired by nested auction market theory. My biggest discovery is that replacing time with volume on the x axis (on a chart) and putting the bar duration on the bottom panel instead of volume normalizes the price movements and makes some of the profitable setups I've seen described in tape reading/price ladder trading courses actually visible on naked charts. A great insight I've gleaned is that variance should be proportional to volume instead of time or trade count. When plotted, this has the effect of expanding high volume areas and compressing low volume ones, which exposes trending price action much more readily. It's honestly amazing, and it's making me think that I could actually win at the trading game.</p>
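The volume-on-the-x-axis idea above can be sketched in a few lines. This is a hypothetical illustration, not the author's actual system; the function name, trade format, and bar threshold are all assumptions.

```python
# Hypothetical sketch: resample a stream of (price, size) trades into
# "volume bars", where each bar closes after a fixed amount of traded
# volume rather than a fixed amount of time. The threshold is arbitrary.

def volume_bars(trades, bar_volume):
    """Group (price, size) trades into bars of roughly equal traded volume."""
    bars, prices, accumulated = [], [], 0.0
    for price, size in trades:
        prices.append(price)
        accumulated += size
        if accumulated >= bar_volume:
            bars.append({
                "open": prices[0],
                "high": max(prices),
                "low": min(prices),
                "close": prices[-1],
                "volume": accumulated,  # bar duration would go on the bottom panel
            })
            prices, accumulated = [], 0.0
    return bars
```

Plotted side by side with ordinary time bars, high-activity periods spread out and quiet periods collapse, which is the normalizing effect described above.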
]]></description><pubDate>Mon, 09 Mar 2026 18:40:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47313435</link><dc:creator>abstractcontrol</dc:creator><comments>https://news.ycombinator.com/item?id=47313435</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47313435</guid></item><item><title><![CDATA[New comment by abstractcontrol in "Opus 4.5 is not the normal AI agent experience that I have had thus far"]]></title><description><![CDATA[
<p>> My impression is that LLM users are the kind of people that HATED that their questions on StackOverflow got closed because it was duplicated.<p>Lol, who doesn't hate that?</p>
]]></description><pubDate>Wed, 07 Jan 2026 12:52:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=46525869</link><dc:creator>abstractcontrol</dc:creator><comments>https://news.ycombinator.com/item?id=46525869</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46525869</guid></item><item><title><![CDATA[New comment by abstractcontrol in "Reminiscences of a Stock Operator (1923)"]]></title><description><![CDATA[
<p>One takeaway from the book is that trend following strategies are really difficult to stick with. Jesse Livermore had a three-year losing streak from 1911 to 1914 despite following his rules. After the events of the book, he went short in 1929 and was reportedly worth over $100 million at the time, a huge amount. Then he lost it all in the strongly mean reverting markets of the 1930s, where his trend following strategy didn't work.<p>He was a problem gambler, but I think if we looked at today's top poker players, they'd all have some love of the gamble in them. Jesse had godly tape reading skills that let him beat the bucket shops at the start of his career.<p>After being kicked out of the bucket shops, he should have just become a floor trader; in all likelihood he'd have had lower highs but would have fared a lot better overall. A lot of trading cliches, like cutting losses quickly, letting profits run, and averaging up rather than down, originate from this book. There is a reason people still talk about it 100 years after its publication. It's a good contender for the best trading book of all time.</p>
]]></description><pubDate>Thu, 01 Jan 2026 11:01:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=46453116</link><dc:creator>abstractcontrol</dc:creator><comments>https://news.ycombinator.com/item?id=46453116</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46453116</guid></item><item><title><![CDATA[New comment by abstractcontrol in "Bitter Lesson is about AI agents"]]></title><description><![CDATA[
<p>&gt; Investment Strategy: Organizations should invest more in computing infrastructure than in complex algorithmic development.<p>&gt; Competitive Advantage: The winners in AI won’t be those with the cleverest algorithms, but those who can effectively harness the most compute power.<p>&gt; Career Focus: As AI engineers, our value lies not in crafting perfect algorithms but in building systems that can effectively leverage massive computational resources. That is a fundamental shift in mental models of how to build software.<p>I think the author has a fundamental misconception about what making the best use of computational resources requires. It's algorithms. His recommendation boils down to not doing the one thing that would allow us to make the best use of computational resources.<p>His assumptions would only be correct if all the best algorithms were already known, which is clearly not the case at present.<p>Rich Sutton said something similar, but when he said it, he was thinking of old engineering-intensive approaches, so it made sense in the context in which he said it and for the audience he directed it at. It was hardly groundbreaking either; the people he wrote the article for all thought the same thing already.<p>People like the author of this article don't understand that context and take his words as gospel. There is no reason to think that different machine learning methods won't supplant the current ones, and it's certain they won't be found by people who are convinced that algorithmic development is useless.</p>
]]></description><pubDate>Sun, 23 Mar 2025 20:34:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=43455678</link><dc:creator>abstractcontrol</dc:creator><comments>https://news.ycombinator.com/item?id=43455678</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43455678</guid></item><item><title><![CDATA[New comment by abstractcontrol in "Perplexity Deep Research"]]></title><description><![CDATA[
<p>Can't find it either.</p>
]]></description><pubDate>Sun, 16 Feb 2025 08:40:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=43066439</link><dc:creator>abstractcontrol</dc:creator><comments>https://news.ycombinator.com/item?id=43066439</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43066439</guid></item><item><title><![CDATA[New comment by abstractcontrol in "Gary Marcus discusses AI's technical problems"]]></title><description><![CDATA[
<p>The strengths and weaknesses of the algorithmic niche that artificial NNs occupy haven't changed a bit since a decade ago. They are still bad at the things you'd imagine actual AI would be good at, the things I'd actually want to use them for. The only thing that has changed is people's perception. LLMs found a market fit, but notice that compared to last decade, when DeepMind and OpenAI were competing at actual AI in games like Go and Starcraft, they've pretty much given up on that in favor of hyping text predictors. For anybody in the field, it should be an obvious bubble.<p>Underneath it all, there is some hope that an innovation might come about to keep the wave going, and indeed, a newly discovered branch of ML could revolutionize AI and actually be worthy of the hype that LLMs have now, but that has nothing to do with the LLM craze.<p>It's cool that we have them, and I also appreciate what Stable Diffusion has brought to the world, but in terms of how much LLMs have influenced me, they've only shortened the time it takes for me to read the documentation.<p>I don't think that machines cannot be more intelligent than humans. I don't think that the fact that they use linear algebra and mathematical functions makes computers inferior to humans. I just think that the current algorithms suck. I want better algos so we can have actual AI instead of this trash.</p>
]]></description><pubDate>Sat, 15 Feb 2025 17:43:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=43060431</link><dc:creator>abstractcontrol</dc:creator><comments>https://news.ycombinator.com/item?id=43060431</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43060431</guid></item><item><title><![CDATA[New comment by abstractcontrol in "Gary Marcus discusses AI's technical problems"]]></title><description><![CDATA[
<p>To me, the current LLMs aren't qualitatively different from the char RNNs that Karpathy showcased all the way back in 2015. They've gotten a lot more useful, but that is about it. Current LLMs will have as much to do with GAI as computer games have to do with NNs. Which is to say, games were necessary to develop GPUs which were then used to train NNs, and current LLMs are necessary to incentivize even more powerful hardware to come into existence, but there isn't much gratitude involved in that process.</p>
]]></description><pubDate>Sat, 15 Feb 2025 08:32:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=43056926</link><dc:creator>abstractcontrol</dc:creator><comments>https://news.ycombinator.com/item?id=43056926</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43056926</guid></item><item><title><![CDATA[New comment by abstractcontrol in "Adding row polymorphism to Damas-Hindley-Milner"]]></title><description><![CDATA[
<p>I've thought about adding record row polymorphism to Spiral, but I am not familiar with it and couldn't figure out how to make it work well in the presence of generics.</p>
]]></description><pubDate>Wed, 23 Oct 2024 13:42:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=41924987</link><dc:creator>abstractcontrol</dc:creator><comments>https://news.ycombinator.com/item?id=41924987</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41924987</guid></item><item><title><![CDATA[Show HN: Spiral mini-tutorial for ML library authors]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/mrakgr/Tutorials/blob/master/spiral_mini_tutorial_for_ml_library_authors.md">https://github.com/mrakgr/Tutorials/blob/master/spiral_mini_tutorial_for_ml_library_authors.md</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=41545667">https://news.ycombinator.com/item?id=41545667</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 15 Sep 2024 06:36:22 +0000</pubDate><link>https://github.com/mrakgr/Tutorials/blob/master/spiral_mini_tutorial_for_ml_library_authors.md</link><dc:creator>abstractcontrol</dc:creator><comments>https://news.ycombinator.com/item?id=41545667</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41545667</guid></item><item><title><![CDATA[New comment by abstractcontrol in "Ask HN: Resources for GPU Compilers?"]]></title><description><![CDATA[
<p>Staged FP in Spiral: <a href="https://www.youtube.com/playlist?list=PL04PGV4cTuIVP50-B_1scXUUMn8qEBbSs" rel="nofollow">https://www.youtube.com/playlist?list=PL04PGV4cTuIVP50-B_1sc...</a><p>Some of the stuff in this playlist might be relevant to you, though it is mostly about programming GPUs in a functional language that compiles to Cuda. The author (me) sometimes works on the language during the video, either fixing bugs or adding new features.</p>
]]></description><pubDate>Fri, 06 Sep 2024 11:22:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=41465086</link><dc:creator>abstractcontrol</dc:creator><comments>https://news.ycombinator.com/item?id=41465086</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41465086</guid></item><item><title><![CDATA[New comment by abstractcontrol in "Lessons learned from losing tens of thousands of dollars freelancing"]]></title><description><![CDATA[
<p>What's a Net/60 basis? I am having trouble understanding how often you were paid. Every month or so?<p>Edit: Nvm, I saw you worked for 90 days without pay. Ack.</p>
]]></description><pubDate>Tue, 20 Aug 2024 15:41:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=41301063</link><dc:creator>abstractcontrol</dc:creator><comments>https://news.ycombinator.com/item?id=41301063</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41301063</guid></item><item><title><![CDATA[New comment by abstractcontrol in "Ask HN: Share Your YoutTube Channel"]]></title><description><![CDATA[
<p><a href="https://www.youtube.com/playlist?list=PL04PGV4cTuIVP50-B_1scXUUMn8qEBbSs" rel="nofollow">https://www.youtube.com/playlist?list=PL04PGV4cTuIVP50-B_1sc...</a><p>Staged Functional Programming In Spiral<p>I am building a fully fused ML GPU library, along with a poker game to run it on, in my own programming language that I've worked on for many years. Right at this very moment, I am trying to optimize compilation times along with register usage by doing more on the heap, so I am creating a reference counting Cuda backend for Spiral.<p>Both the ML library and the poker game are designed to run completely on the GPU for the sake of getting large speedups.<p>Once I am done with this and have trained the agent, I'll test it out on play money sites, and if it doesn't get eaten by the rake, with real money.<p>I am doing fairly sophisticated functional programming in these videos, the kind you could only do in the Spiral language. Many parts of the series involve me working on and improving the language itself in F#.</p>
]]></description><pubDate>Sat, 08 Jun 2024 17:34:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=40619074</link><dc:creator>abstractcontrol</dc:creator><comments>https://news.ycombinator.com/item?id=40619074</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40619074</guid></item><item><title><![CDATA[New comment by abstractcontrol in "GPUs Go Brrr"]]></title><description><![CDATA[
<p>Yes.</p>
]]></description><pubDate>Tue, 14 May 2024 07:34:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=40352568</link><dc:creator>abstractcontrol</dc:creator><comments>https://news.ycombinator.com/item?id=40352568</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40352568</guid></item><item><title><![CDATA[New comment by abstractcontrol in "GPUs Go Brrr"]]></title><description><![CDATA[
<p>NNs, for example, are (mostly) a sequence of matrix multiplication operations, and GPUs are very good at those, much better than CPUs. AI is hot at the moment, and Nvidia is producing the kind of hardware that can run large models efficiently, which is why it's a 2 trillion-dollar company right now.<p>However, in the Spiral series, I aim to go beyond just making an ML library for running NN models and break new ground.<p>Newer GPUs actually support dynamic memory allocation and recursion, and the GPU threads have their own stacks, so you could in fact treat them as sequential devices and write games and simulators directly on them. I think once I finish the NL Holdem game, I'll be able to get over 100-fold improvements by running the whole program on the GPU versus the old approach of writing the sequential part on a CPU and only using the GPU to accelerate a NN model powering the computer agents.<p>I am not sure if this is a good answer, but this is how GPU programming would be helpful to me. It all comes down to performance.<p>The problem with programming GPUs is that the program you are trying to speed up needs to be specially structured so that it utilizes the full capacity of the device.</p>
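The "sequence of matrix multiplications" view of a NN can be illustrated with a tiny forward pass. This is a generic NumPy sketch with made-up shapes, not code from the Spiral library:

```python
import numpy as np

# Illustrative sketch: a two-layer MLP forward pass is essentially two
# matrix multiplications plus an elementwise nonlinearity, which is why
# GPUs (built for dense linear algebra) run such models so much faster
# than CPUs. All shapes here are arbitrary assumptions.

rng = np.random.default_rng(0)
x  = rng.standard_normal((32, 128))   # batch of 32 input vectors
w1 = rng.standard_normal((128, 256))  # first layer weights
w2 = rng.standard_normal((256, 10))   # second layer weights

h = np.maximum(x @ w1, 0.0)  # matmul + ReLU
y = h @ w2                   # matmul -> (32, 10) outputs
```

On a GPU, both `@` operations map onto the same dense-matmul hardware path, so almost the entire model runs on the units the chip was built around.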
]]></description><pubDate>Mon, 13 May 2024 16:47:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=40345294</link><dc:creator>abstractcontrol</dc:creator><comments>https://news.ycombinator.com/item?id=40345294</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40345294</guid></item><item><title><![CDATA[New comment by abstractcontrol in "GPUs Go Brrr"]]></title><description><![CDATA[
<p>Never tried those, so I couldn't say. I guess it would.<p>Even so, creating all the abstractions needed to implement even regular matrix multiplication in Spiral in a generic fashion took me two months, so I'd consider that good enough exercise.<p>You could do it a lot faster by specializing for specific matrix sizes, like in the Cuda examples repo by Nvidia, but then you'd miss the opportunity to do the tensor magic that I did in the playlist.</p>
]]></description><pubDate>Mon, 13 May 2024 16:36:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=40345160</link><dc:creator>abstractcontrol</dc:creator><comments>https://news.ycombinator.com/item?id=40345160</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40345160</guid></item><item><title><![CDATA[New comment by abstractcontrol in "GPUs Go Brrr"]]></title><description><![CDATA[
<p>For a deep dive, maybe take a look at the Spiral matrix multiplication playlist: <a href="https://www.youtube.com/playlist?list=PL04PGV4cTuIWT_NXvvZsnlqYEUy6OmIc3" rel="nofollow">https://www.youtube.com/playlist?list=PL04PGV4cTuIWT_NXvvZsn...</a><p>I spent 2 months implementing a matmult kernel in Spiral and optimizing it.</p>
]]></description><pubDate>Mon, 13 May 2024 11:11:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=40341854</link><dc:creator>abstractcontrol</dc:creator><comments>https://news.ycombinator.com/item?id=40341854</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40341854</guid></item><item><title><![CDATA[New comment by abstractcontrol in "Jim Keller criticizes Nvidia's CUDA, x86"]]></title><description><![CDATA[
<p>> Groq sells their dev kit for $20k even though a single LPU is useless.<p>I find this a very questionable business decision.</p>
]]></description><pubDate>Sat, 24 Feb 2024 09:33:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=39490333</link><dc:creator>abstractcontrol</dc:creator><comments>https://news.ycombinator.com/item?id=39490333</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39490333</guid></item><item><title><![CDATA[New comment by abstractcontrol in "Jim Keller criticizes Nvidia's CUDA, x86"]]></title><description><![CDATA[
<p>&gt; I have quite a lot of concurrency so I think my ideal hardware is a whole lot of little CPU cores with decent cache and matmul intrinsics<p>Back in 2015 I thought this would be the dominant model in 2022. I thought the AI startups challenging Nvidia would be about that. Instead, they all targeted inference instead of programmability. I thought Tenstorrent's hardware would be about what you are describing: lots of tiny cores, local memory, message passing between them, AI/matmul intrinsics.<p>I've been hyped about Tenstorrent for a long time, but now that it is finally coming out with something, I can see that the Grayskulls are very overpriced. And if you look at the docs for their low-level kernel programming, you will see that Tensix cores can only have four registers, have no register spilling, and also don't support function calls. What would one be able to program with that?<p>It would have been interesting had the Grayskull cards been released in 2018, but in 2024 I have no idea what the company wants to do with them. It's over five years behind what I was expecting.<p>My expectations for how the AI hardware wave would unfold were fit for another world entirely. If this is the best the challengers can do, the most we can hope for is that they depress Nvidia's margins somewhat so we can buy its cards cheaper in the future. As we head toward the Singularity, I've gone from expecting revolutionary new hardware from AI startups to hoping Nvidia can keep making GPUs faster and more programmable.<p>Ironically, that latter trend is one I missed: going from the Maxwell cards to the latest generation, GPUs have gained a lot in terms of how general purpose they are. The range of domains they can be used for is definitely going up as time goes on. I thought AI chips would be necessary for this and that GPUs would remain toys, but it has been the other way around.</p>
]]></description><pubDate>Fri, 23 Feb 2024 18:25:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=39484276</link><dc:creator>abstractcontrol</dc:creator><comments>https://news.ycombinator.com/item?id=39484276</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39484276</guid></item><item><title><![CDATA[New comment by abstractcontrol in "Upstart retrofits an Nvidia GH200 server into a workstation"]]></title><description><![CDATA[
<p>Considering the system only has a single H100, why would it be that performant?</p>
]]></description><pubDate>Fri, 16 Feb 2024 07:33:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=39394247</link><dc:creator>abstractcontrol</dc:creator><comments>https://news.ycombinator.com/item?id=39394247</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39394247</guid></item><item><title><![CDATA[New comment by abstractcontrol in "AMD's Next GPU Is a 3D-Integrated Superchip"]]></title><description><![CDATA[
<p>BoM?</p>
]]></description><pubDate>Fri, 15 Dec 2023 16:09:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=38655515</link><dc:creator>abstractcontrol</dc:creator><comments>https://news.ycombinator.com/item?id=38655515</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38655515</guid></item></channel></rss>