<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: HighFreqAsuka</title><link>https://news.ycombinator.com/user?id=HighFreqAsuka</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 28 Apr 2026 18:07:56 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=HighFreqAsuka" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by HighFreqAsuka in "TimeCapsuleLLM: LLM trained only on data from 1800-1875"]]></title><description><![CDATA[
<p>Take a look at The Common Pile v0.1: An 8TB Dataset of Public
Domain and Openly Licensed Text (<a href="https://arxiv.org/pdf/2506.05209" rel="nofollow">https://arxiv.org/pdf/2506.05209</a>). They build a reasonable 7B parameter model using only open-licensed data.</p>
]]></description><pubDate>Mon, 12 Jan 2026 18:18:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=46592156</link><dc:creator>HighFreqAsuka</dc:creator><comments>https://news.ycombinator.com/item?id=46592156</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46592156</guid></item><item><title><![CDATA[New comment by HighFreqAsuka in "Choviva: Chocolate replacement with less CO2 emissions"]]></title><description><![CDATA[
<p>Fruition is my entry recommendation. Every bar they make is good. Then Castronova, Goodnow Farms, Askinosie, Soma, and Dick Taylor. A good heuristic for evaluating a new brand is whether the package tells you where the beans are from and whether the ingredients list is simply “cocoa beans, sugar”.<p>EDIT: I realize you asked about bars, not brands, but I’m in transit and brands were easier than individual bars. I’m a huge fan of the Askinosie orange bar in particular.</p>
]]></description><pubDate>Mon, 07 Apr 2025 19:51:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=43615218</link><dc:creator>HighFreqAsuka</dc:creator><comments>https://news.ycombinator.com/item?id=43615218</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43615218</guid></item><item><title><![CDATA[New comment by HighFreqAsuka in "Choviva: Chocolate replacement with less CO2 emissions"]]></title><description><![CDATA[
<p>> Here’s what we’ve found out: It’s not about the cocoa-beans, but about the way they are treated during the manufacturing process.<p>I eat a lot of high-end single-origin chocolate bars, and I simply don’t believe this. Two bars from the same brand at the same cocoa percentage using different beans taste completely different, in exactly the same way as wine or coffee. It’s one of the most interesting parts of eating good chocolate. I just don’t believe this approach will ever replace my chocolate consumption, but it may have a shot at the larger market of bad chocolate bars.</p>
]]></description><pubDate>Mon, 07 Apr 2025 19:34:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=43615029</link><dc:creator>HighFreqAsuka</dc:creator><comments>https://news.ycombinator.com/item?id=43615029</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43615029</guid></item><item><title><![CDATA[New comment by HighFreqAsuka in "Rents Are the Fed's 'Biggest Stumbling Block' in Taming US Inflation"]]></title><description><![CDATA[
<p>We've collectively failed at this problem as a society.<p>We've 
1. made it extremely difficult, even illegal, to build denser housing. 
2. devalued the status of electricians, plumbers, and carpenters leading to a shortage of people we need to build more homes.
3. made it much too easy to get very large mortgage loans, incentivizing people to leverage themselves far more than they should to purchase homes, bidding up the price of homes as otherwise financially savvy people are forced to play the game as well.
4. built software to collectively price-fix rents, favoring higher rents over maximum occupancy.<p>All of this has turned housing into an asset class, in which a significant fraction of the average American's net worth is invested, and has led to huge inflows of investor money. The incentives to not fix any of this are very strong.</p>
]]></description><pubDate>Fri, 19 Apr 2024 01:32:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=40082600</link><dc:creator>HighFreqAsuka</dc:creator><comments>https://news.ycombinator.com/item?id=40082600</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40082600</guid></item><item><title><![CDATA[New comment by HighFreqAsuka in "Data structures as topological spaces (2002) [pdf]"]]></title><description><![CDATA[
<p>Did you not read the word “potentially”? Topological spaces are a more general class of spaces that includes the discrete case as a special case.</p>
]]></description><pubDate>Fri, 16 Feb 2024 14:28:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=39397474</link><dc:creator>HighFreqAsuka</dc:creator><comments>https://news.ycombinator.com/item?id=39397474</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39397474</guid></item><item><title><![CDATA[New comment by HighFreqAsuka in "LTSE shutters cap table management business (formerly captable.io)"]]></title><description><![CDATA[
<p>By having nearly no volume to begin with. Many US stocks are traded across many of the ~13 US exchanges, so what you're describing is already the status quo. Under Reg NMS, traders cannot trade through protected quotes, so trading firms already have to be aware of the prices on all US exchanges.<p>But LTSE has so little volume that you frankly forget it exists most of the time.</p>
]]></description><pubDate>Fri, 26 Jan 2024 01:34:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=39137909</link><dc:creator>HighFreqAsuka</dc:creator><comments>https://news.ycombinator.com/item?id=39137909</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39137909</guid></item><item><title><![CDATA[New comment by HighFreqAsuka in "Mathematical Introduction to Deep Learning: Methods, Implementations, and Theory"]]></title><description><![CDATA[
<p>The point of that comment doesn't have anything to do with how ResNets actually work. You missed the actual point.<p>> We don’t actually know why resnets work so well.<p>Yes, actually we do. We know, from the literature, that very deep neural networks suffered from vanishing gradients in their early layers in the same way traditional RNNs did. We know that was the motivation for introducing skip connections, which gives us a hypothesis we can test. We can measure, using the test I described, the differences in the size of gradients in the early layers with and without skip connections. We can do this across many different problems for additional statistical power. We can analyze the linear case and see that the repeated matmuls should lead to small gradients if their singular values are small. To ignore all of this and say that, because we don't have a general proof that satisfies a mathematician, we just don't know, is silly.</p>
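<p>The linear-case argument can be checked with a minimal NumPy sketch (an illustration added here, not part of the original comment; the depth, width, and 0.8 scale are arbitrary choices). In a deep linear net y = W_d … W_1 x, the input-output Jacobian is the matrix product itself, so if every factor's largest singular value is below 1, the gradient reaching the first layer is bounded by an exponentially shrinking quantity:</p>

```python
import numpy as np

def product_grad_norm(depth, sigma_max=0.8, n=32, seed=0):
    """Spectral norm of W_depth @ ... @ W_1, i.e. the input-output
    Jacobian of a deep *linear* net whose every factor has largest
    singular value sigma_max."""
    rng = np.random.default_rng(seed)
    P = np.eye(n)
    for _ in range(depth):
        W = rng.normal(size=(n, n))
        # Rescale so the largest singular value is exactly sigma_max.
        W *= sigma_max / np.linalg.svd(W, compute_uv=False)[0]
        P = W @ P
    return float(np.linalg.norm(P, 2))
```

<p>By submultiplicativity the result is at most sigma_max**depth, so at depth 40 the Jacobian norm is below 0.8**40 ≈ 1.3e-4: exactly the exponential decay the comment describes.</p>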
]]></description><pubDate>Tue, 02 Jan 2024 12:57:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=38841157</link><dc:creator>HighFreqAsuka</dc:creator><comments>https://news.ycombinator.com/item?id=38841157</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38841157</guid></item><item><title><![CDATA[New comment by HighFreqAsuka in "Mathematical Introduction to Deep Learning: Methods, Implementations, and Theory"]]></title><description><![CDATA[
<p>Yes, I have a very good point in fact. But the above comment purposely chooses not to argue with it, because it's easier to ignore it entirely and argue something else.</p>
]]></description><pubDate>Tue, 02 Jan 2024 01:28:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=38837066</link><dc:creator>HighFreqAsuka</dc:creator><comments>https://news.ycombinator.com/item?id=38837066</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38837066</guid></item><item><title><![CDATA[New comment by HighFreqAsuka in "Mathematical Introduction to Deep Learning: Methods, Implementations, and Theory"]]></title><description><![CDATA[
<p>Empirically yes, I can consider a very deep fully-connected network, measure the gradients in each layer with and without skip connections, and compare. I can do this across multiple seeds and run a statistical test on the deltas.</p>
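<p>A minimal sketch of that measurement (an illustration added here, not part of the original comment), using a hand-rolled tanh MLP in NumPy rather than any particular framework; depth, width, and the 1/sqrt(width) initialization are illustrative assumptions:</p>

```python
import numpy as np

def first_layer_grad_norm(depth=30, width=64, skip=False, seed=0):
    """Gradient norm at the first weight matrix of a deep tanh MLP.
    Each layer computes x + tanh(W x) when skip=True, else tanh(W x)."""
    rng = np.random.default_rng(seed)
    Ws = [rng.normal(0.0, 1.0 / np.sqrt(width), (width, width))
          for _ in range(depth)]
    acts, pre = [rng.normal(size=width)], []
    for W in Ws:                                  # forward pass
        z = W @ acts[-1]
        pre.append(z)
        acts.append(acts[-1] + np.tanh(z) if skip else np.tanh(z))
    g = np.ones(width)          # gradient of loss = sum of final outputs
    first_grad = None
    for i in reversed(range(depth)):              # manual backprop
        gz = g * (1.0 - np.tanh(pre[i]) ** 2)     # through the tanh
        first_grad = np.outer(gz, acts[i])        # dL/dW_i; kept for i == 0
        g = Ws[i].T @ gz + (g if skip else 0.0)   # identity path if skip
    return float(np.linalg.norm(first_grad))
```

<p>Comparing skip=False against skip=True across several seeds and running a paired test on the deltas is the experiment described above; the identity path keeps the first-layer gradient orders of magnitude larger in the residual version.</p>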
]]></description><pubDate>Mon, 01 Jan 2024 22:14:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=38835821</link><dc:creator>HighFreqAsuka</dc:creator><comments>https://news.ycombinator.com/item?id=38835821</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38835821</guid></item><item><title><![CDATA[New comment by HighFreqAsuka in "Mathematical Introduction to Deep Learning: Methods, Implementations, and Theory"]]></title><description><![CDATA[
<p>No, there are many very mathematically inclined deep learning researchers. It's an empirical science because the mathematical tools we possess are not sufficient to describe the phenomena we observe and make predictions under one unified theory. Being an empirical science does not mean that the field is a "wild west". Deep learning models are amenable to repeatable, controlled experiments, from which you can improve your understanding of what will happen in most cases. Good practitioners know this.</p>
]]></description><pubDate>Mon, 01 Jan 2024 21:50:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=38835649</link><dc:creator>HighFreqAsuka</dc:creator><comments>https://news.ycombinator.com/item?id=38835649</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38835649</guid></item><item><title><![CDATA[New comment by HighFreqAsuka in "Mathematical Introduction to Deep Learning: Methods, Implementations, and Theory"]]></title><description><![CDATA[
<p>Just read the section on ResNets (Section 1.5) and tell me if you think that's the best way to explain ResNets to literally anyone. Tell me if, from that description, you take away that the reason skip connections improve performance is that they improve gradient flow in very deep networks.</p>
]]></description><pubDate>Mon, 01 Jan 2024 21:44:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=38835608</link><dc:creator>HighFreqAsuka</dc:creator><comments>https://news.ycombinator.com/item?id=38835608</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38835608</guid></item><item><title><![CDATA[New comment by HighFreqAsuka in "Mathematical Introduction to Deep Learning: Methods, Implementations, and Theory"]]></title><description><![CDATA[
<p>I agree with that, I think UDL uses the necessary amount of math to communicate the ideas correctly. That is obviously a good thing. What it does not do is pretend to be presenting a mathematical theory of deep learning. Basically UDL is exactly how I think current textbooks should be presented.</p>
]]></description><pubDate>Mon, 01 Jan 2024 21:07:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=38835365</link><dc:creator>HighFreqAsuka</dc:creator><comments>https://news.ycombinator.com/item?id=38835365</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38835365</guid></item><item><title><![CDATA[New comment by HighFreqAsuka in "Mathematical Introduction to Deep Learning: Methods, Implementations, and Theory"]]></title><description><![CDATA[
<p>I've seen quite a few of these books attempting to explain deep learning from a mathematical perspective, and it always surprises me. Deep learning is clearly an empirical science for the time being, and there is very little theoretical work impactful enough that I would think to include it in a book. Of the books of this kind I've seen, this one seems actively the worst. A significant amount of space is dedicated to proving lemmas that provide no additional understanding and are only loosely related to deep learning. And a significant chunk of the code is just plotting code, which I don't understand why you'd include. I'm confident that very few people will ever read significant chunks of this.<p>I think the best textbooks are still Deep Learning by Goodfellow et al. and the more modern Understanding Deep Learning (<a href="https://udlbook.github.io/udlbook/" rel="nofollow">https://udlbook.github.io/udlbook/</a>).</p>
]]></description><pubDate>Mon, 01 Jan 2024 20:48:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=38835197</link><dc:creator>HighFreqAsuka</dc:creator><comments>https://news.ycombinator.com/item?id=38835197</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38835197</guid></item><item><title><![CDATA[New comment by HighFreqAsuka in "Can a transformer represent a Kalman filter?"]]></title><description><![CDATA[
<p>Looks like you left the submission instructions for AISTATS on the last page of the PDF. I don't know if that was intentional, but I'm guessing it wasn't.</p>
]]></description><pubDate>Thu, 14 Dec 2023 00:36:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=38636294</link><dc:creator>HighFreqAsuka</dc:creator><comments>https://news.ycombinator.com/item?id=38636294</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38636294</guid></item><item><title><![CDATA[New comment by HighFreqAsuka in "'Greedflation' study finds many companies were lying to you about inflation"]]></title><description><![CDATA[
<p>>  however, it was not truly "independent",<p>Of course I don't mean to imply they operate in a vacuum. They can obviously see competitors raising their prices. I just mean to say they don't get into a room and actively decide on a price.</p>
]]></description><pubDate>Sat, 09 Dec 2023 01:08:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=38577186</link><dc:creator>HighFreqAsuka</dc:creator><comments>https://news.ycombinator.com/item?id=38577186</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38577186</guid></item><item><title><![CDATA[New comment by HighFreqAsuka in "'Greedflation' study finds many companies were lying to you about inflation"]]></title><description><![CDATA[
<p>There's a theory, price-over-volume, that the sudden shift in demand and expectation of inflation gave companies room to explore a different point on the revenue curve, where they increase the price and simply sell less volume. Prior to the pandemic this was risky and people assumed they were near optimal already. During the pandemic a bunch of companies learned they could push on price, sell less, but still make more revenue. All companies did this independently and simultaneously so the usual competitive effects didn't kick in. And now we're at a new equilibrium that the few companies in each industry are happy with.</p>
]]></description><pubDate>Sat, 09 Dec 2023 00:52:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=38577055</link><dc:creator>HighFreqAsuka</dc:creator><comments>https://news.ycombinator.com/item?id=38577055</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38577055</guid></item><item><title><![CDATA[New comment by HighFreqAsuka in "Chemist suggests retraction of DeepMind robotic synthesis paper"]]></title><description><![CDATA[
<p>Just for clarity, the linked paper in the twitter thread is "An autonomous laboratory for the accelerated synthesis of novel materials" (<a href="https://www.nature.com/articles/s41586-023-06734-w" rel="nofollow noreferrer">https://www.nature.com/articles/s41586-023-06734-w</a>) which does have two authors from DeepMind but seems to be mostly from material science researchers at UC Berkeley. This thread is not about the recent Nature paper "Scaling deep learning for materials discovery" (<a href="https://www.nature.com/articles/s41586-023-06735-9" rel="nofollow noreferrer">https://www.nature.com/articles/s41586-023-06735-9</a>) from Deepmind which made news a few days ago.</p>
]]></description><pubDate>Sun, 03 Dec 2023 00:38:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=38503685</link><dc:creator>HighFreqAsuka</dc:creator><comments>https://news.ycombinator.com/item?id=38503685</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38503685</guid></item><item><title><![CDATA[New comment by HighFreqAsuka in "Understanding Deep Learning"]]></title><description><![CDATA[
<p>Transformers have disadvantages too, and so LSTMs are still used in industry. But also it's not that hard to learn a couple new things every year.</p>
]]></description><pubDate>Sun, 26 Nov 2023 22:53:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=38425936</link><dc:creator>HighFreqAsuka</dc:creator><comments>https://news.ycombinator.com/item?id=38425936</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38425936</guid></item><item><title><![CDATA[New comment by HighFreqAsuka in "Understanding Deep Learning"]]></title><description><![CDATA[
<p>Quite a lot of techniques in deep learning have stood the test of time at this point. Also, new techniques are developed either by building on old techniques or by trying to solve their deficiencies. For example, Transformers were developed to solve vanishing gradients in LSTMs over long sequences and to improve GPU utilization, since LSTMs are inherently sequential in the time dimension.</p>
]]></description><pubDate>Sun, 26 Nov 2023 22:17:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=38425592</link><dc:creator>HighFreqAsuka</dc:creator><comments>https://news.ycombinator.com/item?id=38425592</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38425592</guid></item><item><title><![CDATA[New comment by HighFreqAsuka in "Former GitHub CEO Friedman and Scale AI CEO Wang Declined OpenAI CEO Role"]]></title><description><![CDATA[
<p>Because he's the former president of YC. One of the powerful people you'd be pissing off by doing this <i>is</i> Sam Altman.</p>
]]></description><pubDate>Tue, 21 Nov 2023 04:16:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=38359147</link><dc:creator>HighFreqAsuka</dc:creator><comments>https://news.ycombinator.com/item?id=38359147</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38359147</guid></item></channel></rss>