<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: flutetornado</title><link>https://news.ycombinator.com/user?id=flutetornado</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 23 Apr 2026 17:19:46 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=flutetornado" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by flutetornado in "The future of version control"]]></title><description><![CDATA[
<p>I ended up writing a personal vim plugin for merges one night after a frustrating merge experience and never being able to remember what is what. It presents just two diff panes at the top to reduce cognitive load, plus a navigation list in a third split below for switching between diffs or the final buffer (local/remote, base/local, base/remote and final). The list shows branch names next to local/remote so you always know which side is which. And since the local/remote diff is what I'm interested in most of the time, that's what it shows first.</p>
]]></description><pubDate>Sun, 22 Mar 2026 20:12:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47481645</link><dc:creator>flutetornado</dc:creator><comments>https://news.ycombinator.com/item?id=47481645</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47481645</guid></item><item><title><![CDATA[New comment by flutetornado in "Can I run AI locally?"]]></title><description><![CDATA[
<p>My experience with qwen3.5 9b has not been the same. It's definitely good at agentic responses, but it hallucinates a lot. 30%-50% of the content it generated for a research task (local code repo exploration) turned out to be plain wrong, to the extent of made-up file names and function names. I ran its output through KimiK2 and asked it to verify the claims - it found that much of what the smaller model had "figured out" during agentic exploration was simply wrong. So use smaller models, but be very cautious about how much you depend on their output.</p>
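One cheap, deterministic sanity check for this failure mode - sketched below with a hypothetical function name and a crude path heuristic of my own - is to extract anything that looks like a file path from the model's answer and confirm it actually exists in the repo before trusting the rest:

```python
import os
import re

def verify_mentioned_paths(answer: str, repo_root: str) -> dict:
    """Check whether file paths mentioned in a model's answer exist on disk.

    A rough heuristic: pull out tokens that look like file names (word
    characters, slashes, dots, ending in an extension) and test each one
    against the repository root. Anything "missing" is a hallucination
    candidate worth re-checking by hand.
    """
    candidates = set(re.findall(r"[\w./-]+\.\w{1,4}", answer))
    report = {"found": [], "missing": []}
    for path in candidates:
        key = "found" if os.path.isfile(os.path.join(repo_root, path)) else "missing"
        report[key].append(path)
    return report
```

It only catches fabricated paths, not fabricated logic, but it costs nothing and never hallucinates itself - which is exactly the property the comment is asking for.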
]]></description><pubDate>Fri, 13 Mar 2026 23:16:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47371290</link><dc:creator>flutetornado</dc:creator><comments>https://news.ycombinator.com/item?id=47371290</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47371290</guid></item><item><title><![CDATA[New comment by flutetornado in "Claude Skills"]]></title><description><![CDATA[
<p>There are several useful ways of engineering the context used by LLMs for different use cases.<p>MCP allows anybody to extend their own LLM application's context and capabilities using pre-built *third party* tools.<p>Agent Skills lets the LLM enrich and narrow down its own context based on the nature of the task it's doing.<p>I have been using a home-grown version of Agent Skills for months now with Claude in VSCode, using skill files and extra tools in folders for the LLM to use. Once you have enough experience writing code with LLMs, you realize this is a natural direction for engineering their context. It's very helpful for pruning unnecessary parts of "general instruction files" when working on specific tasks - all orchestrated by the LLM itself. And external tools for specific tasks (such as finding out which cell in a Jupyter notebook contains the code the LLM is trying to edit) make LLMs a lot more efficient and accurate: efficient because they are not burning precious tokens doing the same work, and accurate because the tools are not stochastic.<p>With Claude Skills I no longer need to maintain my home-grown contraption. This is a welcome addition!</p>
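As a rough illustration of the idea (this is my own hypothetical sketch of the pattern, not Anthropic's actual mechanism - the layout and function name are assumptions), a skill picker can match a task description against one-line skill summaries and load only the relevant files into the prompt:

```python
from pathlib import Path

def select_skills(task: str, skills_dir: str) -> list:
    """Return the contents of skill files relevant to the task.

    Assumes each *.md file in skills_dir starts with a one-line
    description; a skill is loaded only when that line shares a word with
    the task, so the prompt carries just the instructions the current job
    needs instead of every general instruction file at once.
    """
    task_words = set(task.lower().split())
    selected = []
    for skill_file in sorted(Path(skills_dir).glob("*.md")):
        text = skill_file.read_text()
        description = text.splitlines()[0].lower()
        if task_words & set(description.split()):
            selected.append(text)
    return selected
```

In practice the matching is done by the LLM itself rather than keyword overlap, but the pruning effect on context size is the same.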
]]></description><pubDate>Fri, 17 Oct 2025 07:49:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=45614237</link><dc:creator>flutetornado</dc:creator><comments>https://news.ycombinator.com/item?id=45614237</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45614237</guid></item><item><title><![CDATA[New comment by flutetornado in "Itter.sh – Micro-Blogging via Terminal"]]></title><description><![CDATA[
<p>I prefer it because it forces distillation to core ideas, consumable quickly. Busy people have too little time to read too much verbiage.</p>
]]></description><pubDate>Fri, 09 May 2025 14:37:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=43937280</link><dc:creator>flutetornado</dc:creator><comments>https://news.ycombinator.com/item?id=43937280</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43937280</guid></item><item><title><![CDATA[New comment by flutetornado in "From: Steve Jobs. "Great idea, thank you.""]]></title><description><![CDATA[
<p>Everyone on my team left - the best of engineering as well as every manager. Underpaying and over-subscribing people has become a hallmark over there - it's just a body shop now. Engineers are just numbers on a sheet, to be exploited, chewed up and cast aside when they eventually burn out. Upper management has no vision, and everyone is constantly firefighting and struggling to catch up with competitors who had the long-term vision to invest in engineering teams, tooling and infrastructure to scale up their products and people. They want to do in 2 years what took Google and Amazon a couple of decades. The result post-HPE: a poor quality, unscalable, cobbled-together, barely functional codebase. Before, the startup I worked for had a rare, well-balanced combination of a high-performance, modular and well-architected codebase. The constant push to ship as fast as possible to catch up with competition completely destroyed the whole thing - teams, codebase and infrastructure. All because they only know how to react and have no idea how to stay ahead of the curve. Buying startups has become their only means of survival, since talent stays away from their brand, and the only way to justify value to shareholders is to jump from one rock to another, hoping the new one will rocket them away from the black hole they are spiraling into. All they manage to do is stick to the new rock and drag it down with them, as fast as they were already going, into the hole they will eventually vaporize in.</p>
]]></description><pubDate>Fri, 09 May 2025 01:48:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=43933113</link><dc:creator>flutetornado</dc:creator><comments>https://news.ycombinator.com/item?id=43933113</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43933113</guid></item><item><title><![CDATA[New comment by flutetornado in "From: Steve Jobs. "Great idea, thank you.""]]></title><description><![CDATA[
<p>I’d do that every time I get the chance! Ex-HPE black label on my resume from a startup I used to work at, which they bought. That company is a complete horror show.</p>
]]></description><pubDate>Thu, 08 May 2025 23:12:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=43932275</link><dc:creator>flutetornado</dc:creator><comments>https://news.ycombinator.com/item?id=43932275</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43932275</guid></item><item><title><![CDATA[New comment by flutetornado in "Using bad hardware: why I work in the terminal (2024)"]]></title><description><![CDATA[
<p>I think the performance of terminal apps could come from the fact that terminals can use pre-rendered text glyphs, cached on the GPU, covering a small fixed-size grid of glyph-sized cells instead of arbitrary pixels. At least that’s what I have wondered - the terminal UI experience has always been a lot more slick compared to heavy GUI-based programs. Feel free to correct me if someone has done actual performance analysis.</p>
]]></description><pubDate>Sun, 13 Apr 2025 16:24:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=43673922</link><dc:creator>flutetornado</dc:creator><comments>https://news.ycombinator.com/item?id=43673922</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43673922</guid></item><item><title><![CDATA[New comment by flutetornado in "Ask HN: Why hasn’t AMD made a viable CUDA alternative?"]]></title><description><![CDATA[
<p>I was able to compile ollama for the AMD Radeon 780M GPU, and I use it regularly on my AMD mini-PC, which cost me $500. It does require a bit more work. I get pretty decent performance with LLMs - just a qualitative statement, as I didn't do any formal testing, but performance felt comparable to an NVIDIA 4050 laptop GPU I also use.<p><a href="https://github.com/likelovewant/ROCmLibs-for-gfx1103-AMD780M-APU" rel="nofollow">https://github.com/likelovewant/ROCmLibs-for-gfx1103-AMD780M...</a></p>
]]></description><pubDate>Tue, 01 Apr 2025 18:00:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=43549724</link><dc:creator>flutetornado</dc:creator><comments>https://news.ycombinator.com/item?id=43549724</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43549724</guid></item><item><title><![CDATA[New comment by flutetornado in "QwQ-32B: Embracing the Power of Reinforcement Learning"]]></title><description><![CDATA[
<p>My understanding is that top_k and top_p are two different methods of restricting the candidate tokens during decoding. top_k=30 considers only the 30 most probable tokens when selecting the next token, while top_p=0.95 considers the smallest set of top tokens whose cumulative probability reaches 0.95. I assumed you would select only one of the two.<p><a href="https://github.com/ollama/ollama/blob/main/docs/modelfile.md">https://github.com/ollama/ollama/blob/main/docs/modelfile.md...</a><p>Edit: Looks like both work together. "Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)"<p>Not quite sure how this is implemented - presumably the top-k cut is applied first, and top-p then prunes further within that set.</p>
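A minimal sketch of the combined scheme (mirroring how common implementations chain the two filters; the parameter names follow the ollama docs, but the function itself is my own hypothetical illustration):

```python
import math
import random

def sample_top_k_top_p(logits, top_k=30, top_p=0.95):
    """Sample a token index, applying top-k first and then top-p (nucleus)
    filtering within the surviving candidates."""
    # Softmax over the raw logits (shifted by the max for stability).
    m = max(logits)
    probs = [math.exp(x - m) for x in logits]
    z = sum(probs)
    probs = [p / z for p in probs]
    # top-k: keep only the k most probable token indices.
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:top_k]
    # top-p: keep the smallest prefix whose cumulative probability reaches p.
    kept, cum = [], 0.0
    for i in ranked:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # Renormalize over the survivors and sample.
    total = sum(probs[i] for i in kept)
    return random.choices(kept, weights=[probs[i] / total for i in kept])[0]
```

With a sharply peaked distribution, top-p alone can shrink the candidate set to one token even when top-k would have allowed thirty - which is why the two settings interact rather than compete.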
]]></description><pubDate>Thu, 06 Mar 2025 01:10:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=43274970</link><dc:creator>flutetornado</dc:creator><comments>https://news.ycombinator.com/item?id=43274970</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43274970</guid></item><item><title><![CDATA[New comment by flutetornado in "SVDQuant: 4-Bit Quantization Powers 12B Flux on a 16GB 4090 GPU with 3x Speedup"]]></title><description><![CDATA[
<p>GPU workloads are either compute bound (limited by floating point operations) or memory bound (limited by bytes transferred across the memory hierarchy.)<p>Quantizing generally helps with the memory bottleneck but does not reduce computational cost, so it's less useful for improving the performance of diffusion models, which are compute bound - that's what it's saying.</p>
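A back-of-the-envelope roofline model makes the point concrete (the peak throughput and bandwidth numbers below are made-up illustrative defaults, not any particular GPU):

```python
def kernel_time_ms(flops, bytes_moved, peak_tflops=80.0, bandwidth_gbs=1000.0):
    """Roofline estimate: a kernel takes at least the larger of its
    compute time and its memory-transfer time."""
    compute_ms = flops / (peak_tflops * 1e12) * 1e3
    memory_ms = bytes_moved / (bandwidth_gbs * 1e9) * 1e3
    return max(compute_ms, memory_ms)

# Memory-bound case (e.g. LLM decode): quantizing weights from 16-bit to
# 4-bit cuts bytes_moved ~4x, and the kernel really does get ~4x faster.
# Compute-bound case (e.g. diffusion): the same flops dominate either way,
# so shrinking bytes_moved barely moves the total.
```

The asymmetry is the whole argument: quantization shrinks `bytes_moved` but leaves `flops` alone, so it only speeds up kernels sitting on the memory side of the roofline.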
]]></description><pubDate>Sat, 09 Nov 2024 12:15:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=42093992</link><dc:creator>flutetornado</dc:creator><comments>https://news.ycombinator.com/item?id=42093992</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42093992</guid></item><item><title><![CDATA[New comment by flutetornado in "Vega – A declarative language for interactive visualization designs"]]></title><description><![CDATA[
<p>Altair is superb. I have used it a lot and it has become my default visualization library. It works in VSCode and JupyterLab. The author has a great workshop video on YouTube for people interested in Altair. I especially like the ability to connect plots with each other, so that things such as selecting a range in one plot change the visualization in the connected plot.<p>One possible downside is that it embeds the entire chart data as JSON in the notebook itself, unless you use server-side data tooling, which is possible with additional data servers - I have not used that, so I cannot say how effective it is.<p>Simple plots are pretty easy to get started with, and you can do pretty sophisticated inter-plot visualizations as you get better with it and understand its nuances.</p>
]]></description><pubDate>Fri, 23 Aug 2024 17:24:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=41330964</link><dc:creator>flutetornado</dc:creator><comments>https://news.ycombinator.com/item?id=41330964</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41330964</guid></item><item><title><![CDATA[New comment by flutetornado in "What's new in Emacs 29.1"]]></title><description><![CDATA[
<p>Try the neogit plugin for neovim. It's a work in progress.<p><a href="https://github.com/NeogitOrg/neogit">https://github.com/NeogitOrg/neogit</a></p>
]]></description><pubDate>Fri, 08 Sep 2023 09:39:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=37431537</link><dc:creator>flutetornado</dc:creator><comments>https://news.ycombinator.com/item?id=37431537</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37431537</guid></item><item><title><![CDATA[New comment by flutetornado in "WASM: Big deal or little deal?"]]></title><description><![CDATA[
<p>Yes, on a related note, Neovim just got support for WASM plugins, and apparently WASM is 100% faster than Lua (neovim's default plugin runtime) for this use case, according to the author. So plugins in any language that can be compiled to WASM are now possible.<p>Edit: <a href="https://github.com/Borwe/wasm_nvim">https://github.com/Borwe/wasm_nvim</a></p>
]]></description><pubDate>Tue, 05 Sep 2023 08:51:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=37389367</link><dc:creator>flutetornado</dc:creator><comments>https://news.ycombinator.com/item?id=37389367</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37389367</guid></item><item><title><![CDATA[New comment by flutetornado in "Neovim 0.8 Released"]]></title><description><![CDATA[
<p>I found this very helpful when switching to nvim recently. Kudos to the author for having the nvim config on github and making videos explaining how he set it all up:<p><a href="https://github.com/LunarVim/Neovim-from-scratch" rel="nofollow">https://github.com/LunarVim/Neovim-from-scratch</a><p><a href="https://www.youtube.com/watch?v=ctH-a-1eUME&list=PLhoH5vyxr6Qq41NFL4GvhFp-WLd5xzIzZ" rel="nofollow">https://www.youtube.com/watch?v=ctH-a-1eUME&list=PLhoH5vyxr6...</a></p>
]]></description><pubDate>Fri, 30 Sep 2022 18:52:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=33039051</link><dc:creator>flutetornado</dc:creator><comments>https://news.ycombinator.com/item?id=33039051</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33039051</guid></item><item><title><![CDATA[New comment by flutetornado in "The Jupyter+Git problem is now solved"]]></title><description><![CDATA[
<p>JupyterLab<p>JupyterHub<p>jupytext - converts ipynb to py<p>nbstripout - strips all output from ipynb<p>nbmerge - resolves merge conflicts<p>vim-jupytext - vim plugin to auto-convert ipynb to py<p>papermill - parameterizes notebooks<p>git<p>pandas, Altair - data analysis / visualization<p>Phabricator - code reviews of notebooks<p>vimdiff + vim-jupytext - diffs in the terminal<p>This solved all my Jupyter problems.</p>
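The nbstripout step in the list above is simple enough to sketch with just the stdlib - a hypothetical minimal version that clears outputs so notebooks diff and merge cleanly in git:

```python
import json

def strip_outputs(notebook_json: str) -> str:
    """Clear outputs and execution counts from every code cell of an
    .ipynb document (passed as its JSON text) so only source is versioned."""
    nb = json.loads(notebook_json)
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    return json.dumps(nb, indent=1)
```

The real nbstripout also handles metadata and integrates as a git filter, but the core trick is exactly this: an .ipynb file is plain JSON, and the churn that breaks diffs lives in the output fields.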
]]></description><pubDate>Fri, 26 Aug 2022 09:44:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=32605193</link><dc:creator>flutetornado</dc:creator><comments>https://news.ycombinator.com/item?id=32605193</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32605193</guid></item><item><title><![CDATA[New comment by flutetornado in "Study finds link between 'forever chemicals' in cookware and liver cancer"]]></title><description><![CDATA[
<p>I have been using oil-infused ceramic cookware from Calphalon (Classic). The handles are made of stainless steel, which stays cool despite all the cooking. Quite convenient. It works very well as a replacement for the Teflon-based non-stick cookware I used earlier. It's not perfect, but using a bit of butter or oil when cooking gives good results.<p>More details about the various varieties here:
<a href="https://www.calphalon.com/supportShow?cfid=cookware-use-and-care" rel="nofollow">https://www.calphalon.com/supportShow?cfid=cookware-use-and-...</a></p>
]]></description><pubDate>Fri, 12 Aug 2022 20:35:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=32443572</link><dc:creator>flutetornado</dc:creator><comments>https://news.ycombinator.com/item?id=32443572</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32443572</guid></item><item><title><![CDATA[New comment by flutetornado in "Ask HN: Best practices for editing remote code locally?"]]></title><description><![CDATA[
<p>This is the best solution I have found so far, as a long-time vim user. ssh + tmux + vim is all you need, with the least potential for problems. vim runs directly on the server where the "remote development" is happening, so there is very little friction in getting it to do what you'd like, given all the plugins you can add to beef it up.<p>I have heard stories from coworkers about work being lost when files are edited from a local editor over the network instead of directly on the server, so I never ventured that way.</p>
]]></description><pubDate>Mon, 11 Apr 2022 14:53:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=30989636</link><dc:creator>flutetornado</dc:creator><comments>https://news.ycombinator.com/item?id=30989636</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30989636</guid></item><item><title><![CDATA[New comment by flutetornado in "Show HN: Supernotes 2 – a fast, Markdown notes app for journalling and sharing"]]></title><description><![CDATA[
<p>For a lot of things such as tables, lists, and adding links, markdown lets you do it "inline" while typing, instead of forcing out-of-band operations.</p>
]]></description><pubDate>Wed, 23 Feb 2022 16:23:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=30442766</link><dc:creator>flutetornado</dc:creator><comments>https://news.ycombinator.com/item?id=30442766</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30442766</guid></item><item><title><![CDATA[New comment by flutetornado in "There are too many video games"]]></title><description><![CDATA[
<p>If space colonization leads to mining resources from asteroids and offloading the environmental costs of that mining from Earth, or gets us building materials strong enough for a space elevator, or space factories sending their waste into the sun, perhaps you wouldn't be saying this.</p>
]]></description><pubDate>Wed, 02 Feb 2022 23:59:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=30186208</link><dc:creator>flutetornado</dc:creator><comments>https://news.ycombinator.com/item?id=30186208</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30186208</guid></item><item><title><![CDATA[New comment by flutetornado in "The Block Protocol"]]></title><description><![CDATA[
<p>I have always wanted Kindle to support a lot more features than it does today. Digital books could be much more than they are. Want to embed a Python REPL in a Python book so readers can test things right in the book, from a common source? Want to embed function graphs in a math book from a common source? How about a theorem prover right there in the book? How about an embedded Wolfram engine to solve arbitrary calculus problems? How about a section from another book - license it from the other publisher and embed it in yours? Embedding a video, a web page, a 3D visualization engine, etc. should all be possible in a digital book.<p>Looks like this may be able to achieve all of that. Just need to get the Amazon Kindle team on board.</p>
]]></description><pubDate>Thu, 27 Jan 2022 21:29:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=30106994</link><dc:creator>flutetornado</dc:creator><comments>https://news.ycombinator.com/item?id=30106994</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=30106994</guid></item></channel></rss>