<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: thamer</title><link>https://news.ycombinator.com/user?id=thamer</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 07 Apr 2026 07:48:01 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=thamer" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by thamer in "GitHub DMCA Notices to Anthropic Claude Code Repos"]]></title><description><![CDATA[
<p>This was followed the next day by a partial retraction: <a href="https://github.com/github/dmca/blob/master/2026/04/2026-04-01-anthropic-retraction.md" rel="nofollow">https://github.com/github/dmca/blob/master/2026/04/2026-04-0...</a><p>The original applied to "8.1K" repositories, and now the partial retraction means they're only taking down one repo "and the 96 fork URLs individually listed in the original notice".<p>What was different about these?</p>
]]></description><pubDate>Wed, 01 Apr 2026 23:38:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47608092</link><dc:creator>thamer</dc:creator><comments>https://news.ycombinator.com/item?id=47608092</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47608092</guid></item><item><title><![CDATA[New comment by thamer in "Apideck CLI – An AI-agent interface with much lower context consumption than MCP"]]></title><description><![CDATA[
<p>There is not a lot to learn to understand the basics, but one step that's not necessarily documented is the overall workflow and why it's arranged this way. You mentioned the LLM "using web search", and it's a related idea: LLMs don't run web searches themselves when you're using an MCP client, they <i>ask the client</i> to do it.<p>You can think of an MCP server as a process exposing some tools. It runs on your machine, communicating via stdin/stdout, or on a server over HTTP. It exposes a list of tools; each tool has a name and named, typed parameters, just like a list of functions in a program. When you "add" an MCP server to Claude Code or any other client, you simply tell this client app on your machine about this list of tools, and the client will include the list in its requests to the LLM alongside your prompt.<p>When the LLM receives your prompt and decides that one of the tools listed alongside it would be helpful to answer you, it doesn't return a regular response to your client but a "tool call" message saying: "call <this tool> with <these parameters>". <i>Your client</i> does this, and sends the tool call result back to the LLM, which takes it into account to respond to your prompt.<p>That's pretty much all there is to it: LLMs can't connect to your email or your GitHub account or anything else; your local apps can. MCP is just a way for LLMs to ask clients to call tools and provide the response.<p><pre><code>    1. You: {message: "hey Claude, how many PRs are open on my GitHub repo foo/bar?", tools: [... github__pr_list(org:string, repo:string) -> [PullRequest], ...] }
    2. Anthropic API: {tool_use: {id: 123, name: github__pr_list, input: {org: foo, repo: bar}}}
    3. You: {tool_result: {id: 123, content: [list of PRs in JSON]} }
    4. Anthropic API: {message: "I see 3 PRs in your repo foo/bar"}
</code></pre>
That's it.<p>If you want to go deeper, the MCP website[1] is relatively accessible, although you definitely don't need to know all the details of the protocol to use MCP. If all you need is to use MCP servers without blowing up your context with a massive list of tools included with each prompt, I don't think you need to know much more than what I described above.<p>[1] <a href="https://modelcontextprotocol.io/docs/learn/architecture" rel="nofollow">https://modelcontextprotocol.io/docs/learn/architecture</a></p>
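<p>The four-step exchange can be sketched as a tiny client loop. Everything here is made up for illustration (the stub run_llm and the fake github_pr_list function are not real APIs); the point is the shape of the flow: the model <i>requests</i> a tool call, and the client executes it.

```python
# Toy sketch of the client loop described above. run_llm stands in for a
# real LLM API and github_pr_list is a fake tool; neither is a real API.

def github_pr_list(org: str, repo: str) -> list:
    """Stand-in for a tool an MCP server would expose."""
    return [{"title": "Fix flaky test", "state": "open"}]

TOOLS = {"github__pr_list": github_pr_list}

def run_llm(messages):
    """Stub LLM: requests one tool call, then answers using the result."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_use": {"id": "123", "name": "github__pr_list",
                             "input": {"org": "foo", "repo": "bar"}}}
    prs = messages[-1]["content"]
    return {"message": f"I see {len(prs)} open PR(s) in foo/bar"}

# The client loop: the *client* runs tools, never the LLM itself.
messages = [{"role": "user", "content": "how many PRs are open on foo/bar?"}]
reply = run_llm(messages)
while "tool_use" in reply:
    call = reply["tool_use"]
    result = TOOLS[call["name"]](**call["input"])  # executed locally
    messages.append({"role": "tool", "id": call["id"], "content": result})
    reply = run_llm(messages)
```

A real client repeats this loop until the model stops asking for tools and returns a plain message.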
]]></description><pubDate>Mon, 16 Mar 2026 21:04:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47404861</link><dc:creator>thamer</dc:creator><comments>https://news.ycombinator.com/item?id=47404861</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47404861</guid></item><item><title><![CDATA[New comment by thamer in "Show HN: Deff – Side-by-side Git diff review in your terminal"]]></title><description><![CDATA[
<p>I had tried `delta` a few years ago but eventually went with `diff-so-fancy`[1].<p>The two are kind of similar if I remember correctly, and both offer a lot of config options to change the style and more. I mostly use it for diffs involving long lines, since it highlights changes <i>within</i> a line, which makes it easier to spot such edits.<p>I have an alias, `git diffs`, set in `~/.gitconfig` to pipe the output of `git diff` (with any options) to `diff-so-fancy`:<p><pre><code>    diffs = "!f() { git diff \"$@\" | diff-so-fancy; }; f"

</code></pre>
[1] <a href="https://github.com/so-fancy/diff-so-fancy" rel="nofollow">https://github.com/so-fancy/diff-so-fancy</a></p>
]]></description><pubDate>Thu, 26 Feb 2026 22:00:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47172537</link><dc:creator>thamer</dc:creator><comments>https://news.ycombinator.com/item?id=47172537</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47172537</guid></item><item><title><![CDATA[New comment by thamer in "Map To Poster – Create Art of your favourite city"]]></title><description><![CDATA[
<p>Yes, the blue and orange dots are from the water and parks Nodes and Ways in the OSM data.<p>It doesn't look like the orange and blue colors are part of the theme definitions, so the rendering library may be using some default values. This is why they are rendered in the same color on images using different theme files.</p>
]]></description><pubDate>Sat, 17 Jan 2026 16:27:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=46659307</link><dc:creator>thamer</dc:creator><comments>https://news.ycombinator.com/item?id=46659307</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46659307</guid></item><item><title><![CDATA[New comment by thamer in "BERT is just a single text diffusion step"]]></title><description><![CDATA[
<p>The March 2025 blog post by Anthropic titled "Tracing the thoughts of a large language model"[1] is a great introduction to this research, showing how their language model activates features representing concepts that only get connected later, as the output tokens are produced.<p>The associated paper[2] goes into a lot more detail, and includes interactive figures that help illustrate how the model "thinks" ahead of time.<p>[1] <a href="https://www.anthropic.com/research/tracing-thoughts-language-model" rel="nofollow">https://www.anthropic.com/research/tracing-thoughts-language...</a><p>[2] <a href="https://transformer-circuits.pub/2025/attribution-graphs/biology.html" rel="nofollow">https://transformer-circuits.pub/2025/attribution-graphs/bio...</a></p>
]]></description><pubDate>Mon, 20 Oct 2025 16:48:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=45646027</link><dc:creator>thamer</dc:creator><comments>https://news.ycombinator.com/item?id=45646027</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45646027</guid></item><item><title><![CDATA[New comment by thamer in "Show HN: MCP Server Installation Instructions Generator"]]></title><description><![CDATA[
<p>> Is there a specific reason you prefer stdio servers over http servers?<p>Yes: the main reason is that I control which applications are configured with the command/args/environment to run the MCP server, instead of exposing a service on my localhost that any process on my computer can connect to (or worse, on my network if it listens on all interfaces).<p>I mostly run MCP servers that I've written, but otherwise most of the third-party ones I use are related to software development and AI providers (e.g. context7, Replicate, ElevenLabs…). The last two cost me money when their tools are invoked, so I'm not about to expose them on a port given that auth doesn't happen at the protocol level.</p>
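<p>To make the stdio model concrete, here is a toy sketch of the transport idea (not the real MCP protocol, which is JSON-RPC with a defined schema): a server that reads newline-delimited JSON on stdin and answers on stdout. No socket is ever opened, so only the process that spawned it holds the pipes, which is exactly the access-control property above.

```python
import io
import json

def serve(stdin, stdout):
    # Toy stdio "server": one JSON request per line in, one JSON
    # response per line out. Nothing listens on the network.
    for line in stdin:
        req = json.loads(line)
        resp = {"id": req.get("id"), "result": f"echo: {req.get('method')}"}
        stdout.write(json.dumps(resp) + "\n")

# Demo run over in-memory streams standing in for the real pipes:
out = io.StringIO()
serve(io.StringIO('{"id": 1, "method": "ping"}\n'), out)
response = json.loads(out.getvalue())
```

A client spawns the server process itself and owns both ends of the conversation; there is nothing for a third process to connect to.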
]]></description><pubDate>Mon, 15 Sep 2025 21:51:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=45255398</link><dc:creator>thamer</dc:creator><comments>https://news.ycombinator.com/item?id=45255398</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45255398</guid></item><item><title><![CDATA[New comment by thamer in "Show HN: MCP Server Installation Instructions Generator"]]></title><description><![CDATA[
<p>Is this only about remote MCP servers? The instructions all seem to contain a URL, but personally almost all the MCP servers I'm running locally are stdio based and not networked. Are you planning to support those in some way?<p>There's also this new effort by Anthropic to provide a packaging system for MCP servers, called MCPB or MCP Bundles[1]. A bundle is a zip file with a manifest inside it, a bit like how Chrome extensions are structured (maybe VSCode extensions too?).<p>Is this something you're looking to integrate with? I can't say I have seen any MCPB files anywhere just yet, but with a focus on simple installs and given that Anthropic introduced MCP in the first place, I wouldn't be surprised if this new format also got some traction. These archives could contain a lot more data than the small amount you're currently encoding in the URL though[2].<p>[1] <a href="https://www.npmjs.com/package/@anthropic-ai/mcpb" rel="nofollow">https://www.npmjs.com/package/@anthropic-ai/mcpb</a><p>[2] <a href="https://github.com/anthropics/mcpb/blob/main/README.md#directory-structures" rel="nofollow">https://github.com/anthropics/mcpb/blob/main/README.md#direc...</a></p>
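<p>Since a bundle is just a zip with a manifest inside, reading one should be trivial for an installer. A quick sketch; the two manifest fields here are made up for illustration, and the real schema and directory layout are in the MCPB repo[2]:

```python
import io
import json
import zipfile

# Build a toy bundle in memory to show the structure; the manifest
# fields are invented (see the MCPB repo for the real schema).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("manifest.json",
                json.dumps({"name": "demo-server", "version": "0.1.0"}))

# An "install" step would just read the manifest back out of the archive:
with zipfile.ZipFile(buf) as zf:
    manifest = json.loads(zf.read("manifest.json"))
```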
]]></description><pubDate>Mon, 15 Sep 2025 19:23:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=45253866</link><dc:creator>thamer</dc:creator><comments>https://news.ycombinator.com/item?id=45253866</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45253866</guid></item><item><title><![CDATA[New comment by thamer in "More on Apple's Trust-Eroding 'F1 the Movie' Wallet Ad"]]></title><description><![CDATA[
<p>This is what it looks like, the switch is for "Offers & Promotions": <a href="https://i.imgur.com/wodOoBo.jpeg" rel="nofollow">https://i.imgur.com/wodOoBo.jpeg</a><p>From the Wallet app, tap on "…" at the top right, then "notifications".</p>
]]></description><pubDate>Sun, 29 Jun 2025 20:09:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=44415995</link><dc:creator>thamer</dc:creator><comments>https://news.ycombinator.com/item?id=44415995</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44415995</guid></item><item><title><![CDATA[New comment by thamer in "Highlights from the Claude 4 system prompt"]]></title><description><![CDATA[
<p>They're not just from AI-generated text. Some of us humans use en dashes and em dashes in the right context, since they're easy to type on macOS: alt+hyphen and alt+shift+hyphen respectively.<p>On both iOS and modern Android I believe you can access them with a long press on hyphen.</p>
]]></description><pubDate>Tue, 27 May 2025 19:23:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=44109934</link><dc:creator>thamer</dc:creator><comments>https://news.ycombinator.com/item?id=44109934</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44109934</guid></item><item><title><![CDATA[New comment by thamer in "Show HN: Real-time AI Voice Chat at ~500ms Latency"]]></title><description><![CDATA[
<p>Does Dia support configuring voices now? I looked at it when it was first released, and you could only specify [S1] [S2] for the speakers, but not how they would sound.<p>There was also a very prominent issue where the voices would be sped up if the text was more than a few sentences long; the longer the text, the faster it was spoken. One suggestion was to split the conversation into chunks with only one or two "turns" per speaker, but then you'd hear two voices, then two more, then two more… with no way to configure any of it.<p>Dia looked cool <i>on the surface</i> when it was released, but at that point it was only a demo, not usable for any real use case, even for a personal app. I'm sure they'll get to these issues eventually, but most comments I've seen so far recommending it are from people who have not actually used it, or they would know of these major limitations.</p>
]]></description><pubDate>Mon, 05 May 2025 22:27:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=43900059</link><dc:creator>thamer</dc:creator><comments>https://news.ycombinator.com/item?id=43900059</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43900059</guid></item><item><title><![CDATA[New comment by thamer in "Someone at YouTube needs glasses"]]></title><description><![CDATA[
<p>The following CSS equivalent worked for me, using the "Custom CSS by Denis" Chrome extension[1]:<p><pre><code>    ytd-rich-grid-renderer div#contents {
      /* number of video thumbnails per row */
      --ytd-rich-grid-items-per-row: 5 !important;
    
      /* number of Shorts per row in its dedicated section */
      --ytd-rich-grid-slim-items-per-row: 6 !important;
    }

</code></pre>
I first tried it with the "User JavaScript and CSS" extension, but somehow it didn't seem able to inject CSS on YouTube. Even a simple `html { border: 5px solid red; }` would not show anything, while I could see it being applied immediately with the "Denis" CSS extension.<p>If someone can recommend a better alternative for custom CSS, I'd be interested to hear it. I guess Tampermonkey could work, if you have that.<p>[1] <a href="https://chromewebstore.google.com/detail/custom-css-by-denis/cemphncflepgmgfhcdegkbkekifodacd" rel="nofollow">https://chromewebstore.google.com/detail/custom-css-by-denis...</a></p>
]]></description><pubDate>Wed, 30 Apr 2025 19:45:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=43849801</link><dc:creator>thamer</dc:creator><comments>https://news.ycombinator.com/item?id=43849801</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43849801</guid></item><item><title><![CDATA[New comment by thamer in "Lvgl: Embedded graphics library to create beautiful UIs"]]></title><description><![CDATA[
<p>The main alternative to LVGL seems to be TouchGFX[1], at least that's the one I've seen mentioned the most in conversations around UI libraries for microcontrollers.<p>As you wrote, these aren't made for desktop apps, but you can use desktop apps to help with UI development using these libraries.<p>For LVGL there's SquareLine Studio[2], which I used a few years ago and found helpful. For TouchGFX there's TouchGFXDesigner[3]; I haven't used it myself, and it seems to run only on Windows.<p>[1] <a href="https://touchgfx.com/" rel="nofollow">https://touchgfx.com/</a><p>[2] <a href="https://squareline.io/" rel="nofollow">https://squareline.io/</a><p>[3] <a href="https://www.st.com/en/development-tools/touchgfxdesigner.html" rel="nofollow">https://www.st.com/en/development-tools/touchgfxdesigner.htm...</a></p>
]]></description><pubDate>Sun, 30 Mar 2025 04:17:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=43521292</link><dc:creator>thamer</dc:creator><comments>https://news.ycombinator.com/item?id=43521292</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43521292</guid></item><item><title><![CDATA[New comment by thamer in "Lvgl: Embedded graphics library to create beautiful UIs"]]></title><description><![CDATA[
<p>This is for screens usually controlled by microcontrollers: nothing close to running an operating system like Linux, and rarely coming with a GPU.<p>See for example ILI9341 or SSD1306 displays[1], or integrated boards with (often) an ESP32 microcontroller and a display attached[2].<p>[1] displays: <a href="https://www.google.com/search?q=SSD1306+OR+ILI9341+display&udm=2" rel="nofollow">https://www.google.com/search?q=SSD1306+OR+ILI9341+display&u...</a><p>[2] integrated: <a href="https://www.aliexpress.us/w/wholesale-ESP32-LVGL.html?spm=a2g0o.productlist.search.0" rel="nofollow">https://www.aliexpress.us/w/wholesale-ESP32-LVGL.html?spm=a2...</a></p>
]]></description><pubDate>Sun, 30 Mar 2025 04:08:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=43521252</link><dc:creator>thamer</dc:creator><comments>https://news.ycombinator.com/item?id=43521252</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43521252</guid></item><item><title><![CDATA[New comment by thamer in "Send Data with Sound"]]></title><description><![CDATA[
<p>It's probably not slower than speech: the rate of spoken English is only something like 150-200 words per minute.<p>That said, the "gibberlink" demo is definitely much slower than even a 28.8k modem (that's kilobits per second). It sounds cool because we can't understand it and it seems kinda fast, but this is a terribly inefficient way for machines to communicate. It's hard to say how fast they're exchanging data just from listening, but it can't be much more than ~100 bits/sec if I had to guess.<p>Even in the audible range you could absolutely go hundreds of times faster, but it's much easier to train an LLM that has some audio input capabilities if you keep this low rate and likely very distinct symbols, rather than implementing a proper modem.<p>Why use a modem at all, though? Limiting communication to audio is a severe restriction. When AIs "call" other AIs, they will use APIs… not ancient phone lines.</p>
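<p>A back-of-envelope check of the ~100 bits/sec guess, using the 150-200 words-per-minute figure plus an assumed ~5 characters per word at 8 bits each:

```python
# Rough information rate of spoken English vs. a 28.8 kbit/s modem.
# words_per_min is from the comment above; chars_per_word and
# bits_per_char are rough assumptions.
words_per_min = 175   # midpoint of 150-200
chars_per_word = 5
bits_per_char = 8
speech_bps = words_per_min * chars_per_word * bits_per_char / 60
modem_bps = 28_800
ratio = modem_bps / speech_bps
print(f"speech ~ {speech_bps:.0f} bit/s; modem is ~ {ratio:.0f}x faster")
```

This lands around 117 bit/s for speech, with the modem a couple of hundred times faster, consistent with "hundreds of times" above.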
]]></description><pubDate>Mon, 03 Mar 2025 21:27:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=43246983</link><dc:creator>thamer</dc:creator><comments>https://news.ycombinator.com/item?id=43246983</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43246983</guid></item><item><title><![CDATA[Auditing AI Bias: The DeepSeek Case]]></title><description><![CDATA[
<p>Article URL: <a href="https://dsthoughts.baulab.info/">https://dsthoughts.baulab.info/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43141490">https://news.ycombinator.com/item?id=43141490</a></p>
<p>Points: 4</p>
<p># Comments: 1</p>
]]></description><pubDate>Sat, 22 Feb 2025 18:12:31 +0000</pubDate><link>https://dsthoughts.baulab.info/</link><dc:creator>thamer</dc:creator><comments>https://news.ycombinator.com/item?id=43141490</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43141490</guid></item><item><title><![CDATA[New comment by thamer in "Record-breaking neutrino is most energetic ever detected"]]></title><description><![CDATA[
<p>The kilogram is the base unit of mass in the International System of Units (SI): <a href="https://en.wikipedia.org/wiki/SI_base_unit" rel="nofollow">https://en.wikipedia.org/wiki/SI_base_unit</a><p>Time is in seconds, length in meters, temperature in kelvin, etc. A unit of energy like a joule is then defined using these base units, so 1 joule is 1⋅kg⋅m^2⋅s^-2.</p>
]]></description><pubDate>Thu, 13 Feb 2025 04:48:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=43032769</link><dc:creator>thamer</dc:creator><comments>https://news.ycombinator.com/item?id=43032769</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43032769</guid></item><item><title><![CDATA[New comment by thamer in "YouTube's New Hue"]]></title><description><![CDATA[
<p>> We change the product constantly — we’re talking over 1,700 updates per year!<p>Good job, the new red is a huge improvement.<p>Meanwhile, YouTube comment sections are still getting pummeled by bots trying to scam viewers with fake crypto offerings (90%+ involving an "Elon Musk giveaway") or writing entire threads praising the great investment returns from a genius trader named "Mr Definitely A. RealName" who operates only on WhatsApp.<p>Take a look at the comments under this video for example; all the references to AMZ6OP are for a scam crypto token that they pretend is being launched by Amazon: <a href="https://www.youtube.com/watch?v=JRd_wNHJG4o" rel="nofollow">https://www.youtube.com/watch?v=JRd_wNHJG4o</a>.<p>I have doubts about even reposting this link… <i>please</i> do not believe for a second that any of these claims are real.<p>Apparently changing red to a red-ish magenta was more important than addressing the widespread issues that have plagued YouTube for years.</p>
]]></description><pubDate>Thu, 13 Feb 2025 04:31:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=43032687</link><dc:creator>thamer</dc:creator><comments>https://news.ycombinator.com/item?id=43032687</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43032687</guid></item><item><title><![CDATA[New comment by thamer in "Record-breaking neutrino is most energetic ever detected"]]></title><description><![CDATA[
<p>It took a few tries, but I got Wolfram Alpha to compute its velocity compared to the speed of light[1].<p>I started with:<p><pre><code>    sqrt(1-((1/(1+120 PeV / (neutrino mass * c^2)))^2))
</code></pre>
but it simply said "data not available". So I changed:<p><pre><code>    120 PeV to 120e15 * 1.602176634e-19 kg m^2 s^-2
    neutrino mass to 1.25e-37kg
    speed of light to 299792458 m/s
</code></pre>
and finally it gave a numeric result:<p><pre><code>    0.999999999999999999999999999999999999829277971
</code></pre>
(that's 36 nines in a row). Pasting it in Google says the value is "1", which is… not far off.<p>If you want details about the way this is calculated, I dug up the formula from an article I'd written about particle velocities in the LHC, back in 2008[2]. For comparison, their 7 TeV protons were going at 0.999999991 × c.<p>[1] <a href="https://www.wolframalpha.com/input?i=sqrt%281-%28%281%2F%281%2B%28120e15*1.602176634e-19+kg+m%5E2+s%5E-2%29+%2F+%281.25e-37kg++*+%28299792458+m%2Fs%29%5E2%29%29%29%5E2%29%29" rel="nofollow">https://www.wolframalpha.com/input?i=sqrt%281-%28%281%2F%281...</a><p>[2] <a href="https://log.kv.io/post/2008/09/12/lhc-how-fast-do-these-protons-go" rel="nofollow">https://log.kv.io/post/2008/09/12/lhc-how-fast-do-these-prot...</a></p>
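<p>The same number can be reproduced without Wolfram Alpha; the only subtlety is that 64-bit floats round β to exactly 1, so arbitrary precision is needed. A sketch using the same assumed inputs (120 PeV, a 1.25e-37 kg neutrino mass):

```python
from decimal import Decimal, getcontext

# v/c for a 120 PeV neutrino: beta = sqrt(1 - 1/gamma^2) with
# gamma = 1 + E/(m c^2). Floats would round beta to 1.0, hence Decimal.
getcontext().prec = 50
E = Decimal("120e15") * Decimal("1.602176634e-19")  # 120 PeV in joules
m = Decimal("1.25e-37")                             # assumed neutrino mass, kg
c = Decimal(299792458)                              # speed of light, m/s
gamma = 1 + E / (m * c**2)
beta = (1 - (1 / gamma) ** 2).sqrt()
shortfall = 1 - beta                                # how far below c
print(beta)
```

The shortfall 1 − β comes out on the order of 1.7e-37, i.e. 36 nines before the digits diverge from 1.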
]]></description><pubDate>Thu, 13 Feb 2025 02:42:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=43032111</link><dc:creator>thamer</dc:creator><comments>https://news.ycombinator.com/item?id=43032111</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43032111</guid></item><item><title><![CDATA[New comment by thamer in "Software engineer pay heatmap for Europe"]]></title><description><![CDATA[
<p>It's not just by region, but at the city level too. There are often significant differences between salaries in capital cities vs others, as one would expect.<p>The cost of living is different, larger companies in major population centers have more capital, etc.</p>
]]></description><pubDate>Tue, 14 Jan 2025 19:39:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=42702666</link><dc:creator>thamer</dc:creator><comments>https://news.ycombinator.com/item?id=42702666</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42702666</guid></item><item><title><![CDATA[New comment by thamer in "Botan: Crypto and TLS for Modern C++"]]></title><description><![CDATA[
<p>For what it's worth, I've used Botan in a personal project where I needed a few hashing algorithms to verify file integrity (SHA-1, SHA-256, even MD5), and I also used Botan's base64 decoder in the same project.<p>I found its modern "style" pleasant to write code against, and it was easy to integrate thanks to good docs. That said, I did notice the large number of algorithms, as others have pointed out, and I'm not sure I'd use it for more serious purposes when libsodium is well-established. It certainly wouldn't be an obvious choice.<p>But to quickly add support for common hash algorithms in a small project, I thought it was a good option and enjoyed its API design and simplicity.</p>
]]></description><pubDate>Thu, 19 Dec 2024 20:09:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=42465220</link><dc:creator>thamer</dc:creator><comments>https://news.ycombinator.com/item?id=42465220</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42465220</guid></item></channel></rss>