<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: SimplyUnknown</title><link>https://news.ycombinator.com/user?id=SimplyUnknown</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 25 Apr 2026 15:45:25 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=SimplyUnknown" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by SimplyUnknown in "The Burrows-Wheeler Transform"]]></title><description><![CDATA[
<p>I'm still not sure I get it. I think it is:<p>1. Put the BWT string in the right-most empty column<p>2. Sort the rows of the matrix such that the strings <i>read along the columns of the matrix</i> are in lexicographical order starting from the top row?<p>3. Repeat steps 1 and 2 until the matrix is full<p>4. Extract the row of the matrix that has the end-delimiter in the final column<p>It's the "sort matrix" step that seems under-explained to me.</p>
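<p>For what it's worth, here is a rough Python sketch of the naive inversion as I understand it from the textbook description (my own sketch, not taken from the article): instead of filling right-most columns, it prepends the BWT string as a new left-most column and re-sorts the rows lexicographically each round. It assumes a "$" end-of-string marker.</p>
<pre><code>def inverse_bwt(bwt, eos="$"):
    # Start from empty rows; each round, prepend the BWT string as a new
    # left-most column, then sort the rows lexicographically.
    table = [""] * len(bwt)
    for _ in range(len(bwt)):
        table = sorted(bwt[i] + table[i] for i in range(len(bwt)))
    # The original string is the row ending with the end-of-string marker.
    return next(row for row in table if row.endswith(eos))

print(inverse_bwt("annb$aa"))  # prints "banana$"
</code></pre>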
]]></description><pubDate>Fri, 10 Oct 2025 08:27:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=45536528</link><dc:creator>SimplyUnknown</dc:creator><comments>https://news.ycombinator.com/item?id=45536528</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45536528</guid></item><item><title><![CDATA[New comment by SimplyUnknown in "I'm Unsatisfied with Easing Functions"]]></title><description><![CDATA[
<p>I have the feeling that B-splines would be a good solution for this problem. Given that they have continuous zeroth (i.e., the function itself is continuous), first, and second derivatives, the motion will always be smooth and there will be no kinks. However, maybe it's just moving the problem, because now you must tune the coefficients of the B-spline instead of the damping parameters (a direct mapping between the two must exist, but it may not be trivial).</p>
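<p>A minimal sketch of what I mean with scipy (the control points here are made up; a clamped cubic B-spline is C2-continuous, so position and velocity stay smooth):</p>
<pre><code>import numpy as np
from scipy.interpolate import BSpline

degree = 3                                           # cubic => C2 continuity
control = np.array([0.0, 0.0, 0.1, 0.9, 1.0, 1.0])   # tuning knobs (made up)
n = len(control)
# Clamped knot vector: end knots repeated (degree + 1) times so the curve
# starts at the first control point and ends at the last one.
knots = np.concatenate(([0.0] * (degree + 1),
                        np.linspace(0.0, 1.0, n - degree + 1)[1:-1],
                        [1.0] * (degree + 1)))
ease = BSpline(knots, control, degree)

t = np.linspace(0.0, 1.0, 11)
position = ease(t)               # eased value from 0 to 1
velocity = ease.derivative()(t)  # also continuous, so no kinks
</code></pre>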
]]></description><pubDate>Wed, 23 Jul 2025 21:43:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=44664353</link><dc:creator>SimplyUnknown</dc:creator><comments>https://news.ycombinator.com/item?id=44664353</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44664353</guid></item><item><title><![CDATA[New comment by SimplyUnknown in "Why JPEGs Still Rule the Web After 30 Years (2024)"]]></title><description><![CDATA[
<p>Multiple reasons. While it is technically better and has more benign compression artifacts, it is computationally more expensive, offers only limited quality improvements, is encumbered by patents, and has a poor metadata format and poor colorspace support... In the end, the benefits aren't great enough compared to JPEG to change the default format.</p>
]]></description><pubDate>Tue, 17 Jun 2025 15:48:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=44300547</link><dc:creator>SimplyUnknown</dc:creator><comments>https://news.ycombinator.com/item?id=44300547</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44300547</guid></item><item><title><![CDATA[New comment by SimplyUnknown in "I don't like NumPy"]]></title><description><![CDATA[
<p>I really like einops. It works with NumPy, PyTorch, and Keras/TensorFlow, and has easy named transpose, repeat, and einsum operations.</p>
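<p>For example (recent einops versions; shown here with numpy, but the same calls work on torch/tf tensors):</p>
<pre><code>import numpy as np
from einops import rearrange, repeat, einsum

x = np.random.rand(2, 3, 4)                  # (batch, height, width)
xt = rearrange(x, "b h w -> b w h")          # named transpose
xr = repeat(x, "b h w -> b h w c", c=3)      # broadcast along a new axis
w = np.random.rand(4, 5)
y = einsum(x, w, "b h w, w d -> b h d")      # named einsum
print(xt.shape, xr.shape, y.shape)           # (2, 4, 3) (2, 3, 4, 3) (2, 3, 5)
</code></pre>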
]]></description><pubDate>Thu, 15 May 2025 20:55:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=43999255</link><dc:creator>SimplyUnknown</dc:creator><comments>https://news.ycombinator.com/item?id=43999255</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43999255</guid></item><item><title><![CDATA[New comment by SimplyUnknown in "Tumor-derived erythropoietin acts as immunosuppressive switch in cancer immunity"]]></title><description><![CDATA[
<p>Full paper link for the interested: <a href="https://ehdijrb3629whdb.tiiny.site" rel="nofollow">https://ehdijrb3629whdb.tiiny.site</a></p>
]]></description><pubDate>Fri, 25 Apr 2025 18:48:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=43797243</link><dc:creator>SimplyUnknown</dc:creator><comments>https://news.ycombinator.com/item?id=43797243</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43797243</guid></item><item><title><![CDATA[New comment by SimplyUnknown in "Pixel is a unit of length and area"]]></title><description><![CDATA[
<p>In medical imaging, data are often acquired using anisotropic resolution. So a pixel (or voxel in 3D) can be an averaged signal sample originating from 2mm of tissue in one direction and 0.9mm in another direction.</p>
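<p>Concretely, the voxel spacing is what turns indices into physical distances (the numbers here are just illustrative):</p>
<pre><code>import numpy as np

spacing_mm = np.array([2.0, 0.9, 0.9])   # (slice, row, column) spacing, illustrative
shape = np.array([40, 256, 256])         # voxel grid dimensions
print(spacing_mm.prod(), "mm^3 per voxel")     # anisotropic voxel volume
print(shape * spacing_mm, "mm field of view")  # physical extent per axis
</code></pre>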
]]></description><pubDate>Thu, 24 Apr 2025 06:25:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=43779822</link><dc:creator>SimplyUnknown</dc:creator><comments>https://news.ycombinator.com/item?id=43779822</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43779822</guid></item><item><title><![CDATA[New comment by SimplyUnknown in "Solving Boolean satisfiability and integer programming with Python packaging"]]></title><description><![CDATA[
<p>Conda indeed is slow. However, mamba is a drop-in replacement for Conda and uses a much faster solver, which makes it a lot more palatable.</p>
]]></description><pubDate>Tue, 26 Nov 2024 09:52:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=42244130</link><dc:creator>SimplyUnknown</dc:creator><comments>https://news.ycombinator.com/item?id=42244130</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42244130</guid></item><item><title><![CDATA[New comment by SimplyUnknown in "Visualizing World War II"]]></title><description><![CDATA[
<p>Not quite what you're looking for, but if you're interested in Operation Market Garden: for the Dutch maps there is <a href="https://www.topotijdreis.nl" rel="nofollow">https://www.topotijdreis.nl</a>, which gives you historical maps with a year slider. This can at least help one visualize how cities, villages, and topography changed through the years.</p>
]]></description><pubDate>Tue, 12 Nov 2024 17:03:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=42117288</link><dc:creator>SimplyUnknown</dc:creator><comments>https://news.ycombinator.com/item?id=42117288</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42117288</guid></item><item><title><![CDATA[New comment by SimplyUnknown in "National Popular Vote Interstate Compact"]]></title><description><![CDATA[
<p>CGP Grey also made an excellent video about it, which he dubbed the NaPoVoInterCo: <a href="https://www.youtube.com/watch?v=tUX-frlNBJY" rel="nofollow">https://www.youtube.com/watch?v=tUX-frlNBJY</a></p>
]]></description><pubDate>Mon, 12 Aug 2024 22:01:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=41229851</link><dc:creator>SimplyUnknown</dc:creator><comments>https://news.ycombinator.com/item?id=41229851</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41229851</guid></item><item><title><![CDATA[New comment by SimplyUnknown in "Kolmogorov Complexity and Compression Distance (2023)"]]></title><description><![CDATA[
<p>But Chinese (or Mandarin) is not a context-free grammar, whereas I believe that encoding a language on a Turing machine implies a context-free grammar, so this example doesn't hold.</p>
]]></description><pubDate>Sat, 30 Mar 2024 22:48:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=39879445</link><dc:creator>SimplyUnknown</dc:creator><comments>https://news.ycombinator.com/item?id=39879445</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39879445</guid></item><item><title><![CDATA[New comment by SimplyUnknown in "Is Cosine-Similarity of Embeddings Really About Similarity?"]]></title><description><![CDATA[
<p>I think maybe it's poorly phrased. As far as I can tell, their linear regression example for eq. 2 has a unique solution, but I think they state that when optimizing for cosine similarity you can find non-unique solutions. But I haven't read it in detail.<p>Then again, you could argue whether that is a problem when considering very high-dimensional embeddings. Their conclusions seem to point in that direction, but I would not agree with that.</p>
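<p>If I understand the non-uniqueness point, part of it is simply that cosine similarity is invariant to rescaling either vector, so whole families of embeddings produce identical similarities (a quick numpy check, my own illustration rather than the paper's setup):</p>
<pre><code>import numpy as np

def cos_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.5, -1.0, 2.0])
print(cos_sim(a, b))         # some value
print(cos_sim(10.0 * a, b))  # identical: rescaling a changes nothing
</code></pre>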
]]></description><pubDate>Tue, 12 Mar 2024 08:15:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=39677241</link><dc:creator>SimplyUnknown</dc:creator><comments>https://news.ycombinator.com/item?id=39677241</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39677241</guid></item><item><title><![CDATA[New comment by SimplyUnknown in "Synthetic data generation for tabular data"]]></title><description><![CDATA[
<p>The thing is, you use synthetic data when it is difficult to obtain real data. For example, in medical imaging, it is very expensive to collect MRI scans to build a large dataset, not to mention the potential privacy issues and the need to obtain informed consent to publish the dataset. Synthetic datasets can help here, for example to pretrain your model and then fine-tune on real data afterwards. I assume that collecting tabular data can face similar issues that prevent building large datasets.</p>
]]></description><pubDate>Wed, 28 Feb 2024 09:45:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=39535895</link><dc:creator>SimplyUnknown</dc:creator><comments>https://news.ycombinator.com/item?id=39535895</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39535895</guid></item><item><title><![CDATA[New comment by SimplyUnknown in "Advent of Code 2023's new AI/LLM Policy"]]></title><description><![CDATA[
<p>Surely not, but if you're practicing to get better at programming using AoC, LLMs are unlikely to help you.</p>
]]></description><pubDate>Mon, 16 Oct 2023 16:32:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=37902316</link><dc:creator>SimplyUnknown</dc:creator><comments>https://news.ycombinator.com/item?id=37902316</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37902316</guid></item><item><title><![CDATA[New comment by SimplyUnknown in "Advent of Code 2023's new AI/LLM Policy"]]></title><description><![CDATA[
<p>Maybe not so much a better programmer, but I'd say the spirit of AoC is to crack a puzzle and translate the solution into code. Solving it with an LLM will not help you get better at solving puzzles or grasp the main concepts of a programming language. In that sense, ChatGPT is not a pedagogical asset in this particular instance.</p>
]]></description><pubDate>Mon, 16 Oct 2023 15:58:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=37901726</link><dc:creator>SimplyUnknown</dc:creator><comments>https://news.ycombinator.com/item?id=37901726</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37901726</guid></item><item><title><![CDATA[New comment by SimplyUnknown in "Mistral 7B"]]></title><description><![CDATA[
<p>Obviously you can, but in the grand scheme of things people should share more details about their method so others can improve on it in the future, no?</p>
]]></description><pubDate>Wed, 11 Oct 2023 15:52:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=37846075</link><dc:creator>SimplyUnknown</dc:creator><comments>https://news.ycombinator.com/item?id=37846075</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37846075</guid></item><item><title><![CDATA[New comment by SimplyUnknown in "Mistral 7B"]]></title><description><![CDATA[
<p>I mean, you can't just share the weights of the model and call it a day, right? You have to share details on what you are doing and why, and you must communicate this somehow. In theory, you might be able to do this in a GitHub README, but a paper-style document on arXiv is nicely suited for this.</p>
]]></description><pubDate>Wed, 11 Oct 2023 15:38:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=37845890</link><dc:creator>SimplyUnknown</dc:creator><comments>https://news.ycombinator.com/item?id=37845890</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37845890</guid></item><item><title><![CDATA[New comment by SimplyUnknown in "Northern summer was hottest on record by a significant margin"]]></title><description><![CDATA[
<p>I guessed as much, but I wasn't able to confirm. Thanks for the search.</p>
]]></description><pubDate>Wed, 06 Sep 2023 14:09:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=37405408</link><dc:creator>SimplyUnknown</dc:creator><comments>https://news.ycombinator.com/item?id=37405408</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37405408</guid></item><item><title><![CDATA[New comment by SimplyUnknown in "Northern summer was hottest on record by a significant margin"]]></title><description><![CDATA[
<p>So the solution would be to continue emitting greenhouse gases and ensure it will also get worse in 30 or 50 years?<p>> The best time to plant a tree was 20 years ago; the second best time is now.<p>Said to be a Chinese proverb, though I haven't verified that extensively.<p>The only viable solution right now to combat global warming is to immediately stop emitting greenhouse gases, even if there is a lag effect.</p>
]]></description><pubDate>Wed, 06 Sep 2023 13:43:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=37405021</link><dc:creator>SimplyUnknown</dc:creator><comments>https://news.ycombinator.com/item?id=37405021</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37405021</guid></item><item><title><![CDATA[New comment by SimplyUnknown in "Do Machine Learning Models Memorize or Generalize?"]]></title><description><![CDATA[
<p>First of all, great blog post with great examples. Reminds me of what distill.pub used to be.<p>Second, the article correctly states that typically L2 weight decay is used, leading to a lot of weights with small magnitudes. For models that generalize better, would it then be better to always use L1 weight decay to promote sparsity, in combination with longer training?<p>I wonder whether deep learning models that use only sparse Fourier features rather than dense linear layers would work better...</p>
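<p>What I have in mind is simply adding an L1 penalty to the loss instead of relying on the optimizer's (L2) weight_decay; a rough PyTorch sketch with made-up sizes:</p>
<pre><code>import torch

model = torch.nn.Linear(10, 1)                       # toy model
x, y = torch.randn(32, 10), torch.randn(32, 1)       # toy data
opt = torch.optim.SGD(model.parameters(), lr=1e-2)   # no weight_decay (that is L2)
l1_lambda = 1e-4

for _ in range(100):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    # L1 term: pushes weights toward exact zeros (sparsity), unlike L2 decay.
    loss = loss + l1_lambda * sum(p.abs().sum() for p in model.parameters())
    loss.backward()
    opt.step()
</code></pre>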
]]></description><pubDate>Thu, 10 Aug 2023 17:34:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=37079132</link><dc:creator>SimplyUnknown</dc:creator><comments>https://news.ycombinator.com/item?id=37079132</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37079132</guid></item><item><title><![CDATA[New comment by SimplyUnknown in "Have attention spans been declining?"]]></title><description><![CDATA[
<p>Not all of these are mutually exclusive, but there is also a fourth option:<p>- ADHD was severely underdiagnosed in the past and got more accurately diagnosed in recent years</p>
]]></description><pubDate>Tue, 25 Jul 2023 08:23:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=36859490</link><dc:creator>SimplyUnknown</dc:creator><comments>https://news.ycombinator.com/item?id=36859490</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36859490</guid></item></channel></rss>