<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: leminimal</title><link>https://news.ycombinator.com/user?id=leminimal</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 19 Apr 2026 14:22:01 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=leminimal" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by leminimal in "Google confirms 'high-friction' sideloading flow is coming to Android"]]></title><description><![CDATA[
<p>Ubuntu Touch so far has the best hardware compatibility for things like the camera and battery life. But it also insists on doing a lot of its own thing, like using Mir instead of X and click packages. Running programs inside Libertine is cumbersome and often crashes for me, which makes developing for it harder. clickable needs Docker installed just so you can build and run your own apps on the device, instead of letting you launch things quickly from a terminal.<p>It makes some things that should be easy on Linux harder. For example, there's no Firefox with mobile tweaks like on other Linux mobile OSes, in part because it wants you to use Morph.<p>But other Linux mobile OSes dropped support for Halium/libhybris, and even the very few that still have it don't seem to match Ubuntu Touch's level of hardware support.</p>
]]></description><pubDate>Sun, 25 Jan 2026 11:19:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=46753080</link><dc:creator>leminimal</dc:creator><comments>https://news.ycombinator.com/item?id=46753080</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46753080</guid></item><item><title><![CDATA[New comment by leminimal in "I’m Not a Robot"]]></title><description><![CDATA[
<p>The true final level is trying to download the level 48 certificate, getting a 403, and opening it to see<p>neal.fun<p>Verify you are human by completing the action below.<p>Verify you are human<p>neal.fun needs to review the security of your connection before proceeding.<p>alongside the Cloudflare logo.</p>
]]></description><pubDate>Sun, 21 Sep 2025 04:21:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=45320036</link><dc:creator>leminimal</dc:creator><comments>https://news.ycombinator.com/item?id=45320036</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45320036</guid></item><item><title><![CDATA[New comment by leminimal in "Gemma.cpp: lightweight, standalone C++ inference engine for Gemma models"]]></title><description><![CDATA[
<p>Thanks, I'm glad to see your time machine caught my comment.<p>I'm using the 32-bit GGUF model from the Google repo, not a different quantized model, so I'd have one less source of error. It's hard to tell with LLMs if it's a bug. It just gives slightly stranger answers sometimes; it's not complete gibberish, incoherent sentences, or extra punctuation like with some other LLM bugs I've seen.<p>Still, I'll wait a few days and build llama.cpp again to see if there are any changes.</p>
]]></description><pubDate>Sun, 25 Feb 2024 19:38:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=39504020</link><dc:creator>leminimal</dc:creator><comments>https://news.ycombinator.com/item?id=39504020</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39504020</guid></item><item><title><![CDATA[New comment by leminimal in "Gemma.cpp: lightweight, standalone C++ inference engine for Gemma models"]]></title><description><![CDATA[
<p>Kudos on your release! I know this was just made available, but<p>- Somewhere in the README, consider adding the need for a `-DWEIGHT_TYPE=hwy::bfloat16_t` flag for non-sfp. Maybe around step 3.<p>- The README should explicitly say somewhere that there's no GPU support (at the moment)<p>- "Failed to read cache gating_ein_0 (error 294)" is pretty obscure. I think even "(error at line number 294)" would be a big improvement when it fails to FindKey.<p>- There's something odd about the 2b vs 7b model. The 2b will claim it's trained by Google but the 7b won't. Were these trained on the same data?<p>- Are the .sbs weights the same weights as the GGUF? I'm getting different answers compared to llama.cpp. Do you know of a good way to compare the two? Any way to make both deterministic? Or even dump probability distributions on the first (or any) token to compare?</p>
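As a sketch of what that non-sfp build might look like (the `-DWEIGHT_TYPE=hwy::bfloat16_t` flag is from the comment above; the rest of the cmake invocation is an assumption and may differ from the actual gemma.cpp README):

```shell
# Configure the build for bfloat16 (non-sfp) weights; without the flag
# the default weight type is used and bf16 .sbs files fail to load.
cmake -B build -DWEIGHT_TYPE=hwy::bfloat16_t
cmake --build build -j
```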
]]></description><pubDate>Fri, 23 Feb 2024 17:11:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=39483290</link><dc:creator>leminimal</dc:creator><comments>https://news.ycombinator.com/item?id=39483290</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39483290</guid></item><item><title><![CDATA[Tell HN: GitHub no longer readable without JavaScript]]></title><description><![CDATA[
<p>A few weeks back, individual files stopped being readable with JavaScript off, and the browsing experience with JS on got much worse. Now it's main repo pages too.<p>I've not seen any announcement about these changes.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=39116556">https://news.ycombinator.com/item?id=39116556</a></p>
<p>Points: 85</p>
<p># Comments: 74</p>
]]></description><pubDate>Wed, 24 Jan 2024 12:41:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=39116556</link><dc:creator>leminimal</dc:creator><comments>https://news.ycombinator.com/item?id=39116556</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39116556</guid></item><item><title><![CDATA[New comment by leminimal in "Show HN: I Remade the Fake Google Gemini Demo, Except Using GPT-4 and It's Real"]]></title><description><![CDATA[
<p>Do you have a version that doesn't need Windows and/or a Microsoft account? Or an uncut video of someone using it?</p>
]]></description><pubDate>Tue, 12 Dec 2023 20:56:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=38618388</link><dc:creator>leminimal</dc:creator><comments>https://news.ycombinator.com/item?id=38618388</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38618388</guid></item><item><title><![CDATA[New comment by leminimal in "Stanford A.I. Courses"]]></title><description><![CDATA[
<p>Thanks for making that course. It was on my list of courses to look at since GPT-4 recommended it (with all the caveats that entails :) ). Thanks also for making notebooks available alongside the videos.<p>However, can you point me to the lectures where training happens (and architecture choices, hyperparameter selection, and debugging happen)? I'm less familiar with SD, but at a quick glance it seems like we're using a pretrained model and implementing bits that will eventually be useful for training, rather than training a new model, at least in the beginning of the deep dive notebook and the first few lessons (starting at part 2, lesson 9).</p>
]]></description><pubDate>Mon, 03 Jul 2023 21:26:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=36579076</link><dc:creator>leminimal</dc:creator><comments>https://news.ycombinator.com/item?id=36579076</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36579076</guid></item><item><title><![CDATA[New comment by leminimal in "Stanford A.I. Courses"]]></title><description><![CDATA[
<p>Thanks for answering; what you wrote here is exactly the sort of thing I'm talking about. Something implicit that's known but not obvious if you look at the first few lectures of the first few courses (or blogs or announcements, etc.).<p>You mention a bag of tricks, and that's indeed one issue, but it's worse than that, because it includes knowing which "silent problems" need a trick applied in the first place!<p>Indeed, despite using vectors everywhere, NNs are bad with numerical input encoded directly as numbers! It's almost like the only kind of variables you can have are fixed-size enums. Those you then encode into vectors that are as far apart as possible, and unit vectors ("one-hot vectors") do this. But that's not quite it either, and sometimes you can still have some meaningful metric on the input that's preserved in the encoding (example: word embeddings). So it's again unclear what you can give it and what you can't.<p>In this toy example, I have an idea of what the shape of the solution is. But generally I do not, and would not know to use a base 15 encoding or to send it the last 5 (or 15) outputs as inputs. I know you already sort of addressed this point in your last few paragraphs.<p>I'm still trying out toy problems at the moment, so it might be a "waste" of your time to troubleshoot these, but I'm happy to take you up on the offer. HN doesn't have PMs though.<p>Do you remember when you first learned about the things you're using in your reply here? Was it in a course, or just from asking someone who'd worked on NNs for longer? I learned by googling and finding comment threads like these! But they are not easy to collect or find together.</p>
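To make the one-hot point concrete, here's a minimal sketch (plain Python; the function names are my own) showing that one-hot codes place every pair of categories at the same distance from each other, unlike raw integers where 1 and 2 look "closer" than 1 and 9:

```python
def one_hot(index, num_classes):
    """Return a one-hot vector of length num_classes with a 1.0 at `index`."""
    vec = [0.0] * num_classes
    vec[index] = 1.0
    return vec

def euclidean(a, b):
    """Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

classes = 10
codes = [one_hot(i, classes) for i in range(classes)]

# All pairwise distances between distinct one-hot vectors are identical:
dists = {round(euclidean(codes[i], codes[j]), 6)
         for i in range(classes) for j in range(classes) if i != j}
print(dists)  # a single value: sqrt(2) ≈ 1.414214
```

Word embeddings relax exactly this property: they deliberately place related categories closer together, which is why they still carry a meaningful metric.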
]]></description><pubDate>Sun, 02 Jul 2023 16:57:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=36563281</link><dc:creator>leminimal</dc:creator><comments>https://news.ycombinator.com/item?id=36563281</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36563281</guid></item><item><title><![CDATA[New comment by leminimal in "What is a transformer model? (2022)"]]></title><description><![CDATA[
<p>Thanks for linking me to that post! It's much better at expressing what I'm trying to say. I'll have a careful read of it now.<p>I think I'm still at a step before the overfit. It doesn't converge to a solution on its training data (fit or overfit). And all my data is artificially generated, so no cleaning is needed (though choosing a representation still matters). I don't know if that's what you mean by getting the data right, or something else. Example problems that "don't work": fizzbuzz, reversing all characters in a sentence.</p>
]]></description><pubDate>Sun, 02 Jul 2023 16:32:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=36563025</link><dc:creator>leminimal</dc:creator><comments>https://news.ycombinator.com/item?id=36563025</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36563025</guid></item><item><title><![CDATA[New comment by leminimal in "Stanford A.I. Courses"]]></title><description><![CDATA[
<p>Are there project-based tutorials that talk more about neural net architecture, hyperparameter selection, and debugging? Something that walks through getting poor results and makes explicit the reasoning for tweaking?<p>When I try to use transformers or any AI thing on a toy problem I come up with, it never works. Even Fizz-Buzz, which I thought was easy, doesn't work (because division or modulo is apparently hard for NNs to represent). And there's this black box of training that's hard to debug into. Yes, for the available resources, if you pick the exact same problem, the exact same NN architecture, and the exact same hyperparameters, it all works out. But surely they didn't get that on the first try. So what's the tweaking process?<p>Somehow this point isn't often talked about in courses, and consequently the ones who've passed this hurdle don't get their experience transferred. I'd follow an entire course on this if it were available. An HN commenter linked me to this<p><a href="https://karpathy.github.io/2019/04/25/recipe/" rel="nofollow noreferrer">https://karpathy.github.io/2019/04/25/recipe/</a><p>which is exactly on point. But it'd be great if it were one or more tutorials with a specific example, wrapped in code and peppered with many failures.</p>
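For reference, the usual workaround for the Fizz-Buzz/modulo problem (popularized by the "Fizz Buzz in TensorFlow" blog post) changes the input encoding rather than the architecture: feed the network the number's binary digits and have it predict one of four classes. A minimal sketch of just that encoding (function names are my own):

```python
def binary_encode(n, digits=10):
    """Encode n as a list of its low `digits` binary digits, LSB first."""
    return [(n >> i) & 1 for i in range(digits)]

def fizzbuzz_label(n):
    """Class index the network should predict: 0=n, 1=fizz, 2=buzz, 3=fizzbuzz."""
    if n % 15 == 0:
        return 3
    if n % 5 == 0:
        return 2
    if n % 3 == 0:
        return 1
    return 0

print(binary_encode(6))   # [0, 1, 1, 0, 0, 0, 0, 0, 0, 0]
print(fizzbuzz_label(6))  # 1 (fizz)
```

The point is that a small MLP can learn the binary-digits-to-class mapping, whereas modulo on a raw integer input is hard for it to represent.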
]]></description><pubDate>Sun, 02 Jul 2023 16:13:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=36562843</link><dc:creator>leminimal</dc:creator><comments>https://news.ycombinator.com/item?id=36562843</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36562843</guid></item><item><title><![CDATA[New comment by leminimal in "What is a transformer model? (2022)"]]></title><description><![CDATA[
<p>Maybe this is more of a general ML question, but I faced it when transformers became popular. Do you know of a project-based tutorial that talks more about neural net architecture, hyperparameter selection, and debugging? Something that walks through getting poor results and makes explicit the reasoning for tweaking?<p>When I try to use transformers or any AI thing on a toy problem I come up with, it never works. And there's this black box of training that's hard to debug into. Yes, for the available resources, if you pick the exact problem, the exact NN architecture, and the exact hyperparameters, it all works out. But surely they didn't get that on the first try. So what's the tweaking process?</p>
]]></description><pubDate>Sat, 24 Jun 2023 18:37:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=36461462</link><dc:creator>leminimal</dc:creator><comments>https://news.ycombinator.com/item?id=36461462</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36461462</guid></item><item><title><![CDATA[New comment by leminimal in "Now Reddit are coming for the individual personal subreddits"]]></title><description><![CDATA[
<p>I also had trouble with Lemmy's UI and made a different frontend for myself. Here are some screenshots.<p>1. <a href="https://postimg.cc/PPRMGw7k" rel="nofollow noreferrer">https://postimg.cc/PPRMGw7k</a><p>2. <a href="https://postimg.cc/mcNMrzmk" rel="nofollow noreferrer">https://postimg.cc/mcNMrzmk</a><p>3. <a href="https://postimg.cc/7CVG4vLT" rel="nofollow noreferrer">https://postimg.cc/7CVG4vLT</a><p>I was thinking of making it more widely available, but didn't know if there'd be enough users to make it worthwhile, or whether interest in Lemmy would last.</p>
]]></description><pubDate>Thu, 22 Jun 2023 19:07:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=36437129</link><dc:creator>leminimal</dc:creator><comments>https://news.ycombinator.com/item?id=36437129</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36437129</guid></item></channel></rss>