<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: artski</title><link>https://news.ycombinator.com/user?id=artski</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 15 Apr 2026 03:02:30 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=artski" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[Show HN: Volt – Fast, Git-native API client written in Zig (Postman alternative)]]></title><description><![CDATA[
<p>Hey HN,<p>I wanted to build something real in Zig, and I was annoyed that Postman
needed an account and an internet connection just to send a GET request.
So I figured I'd make something I would actually use.<p>It kind of snowballed from there. 50,000+ lines later, it does requests,
test assertions, collections, environments, CI/CD reports, import/export,
and has a browser-based Web UI (`volt ui`). All in a single binary with
zero external dependencies.<p>Some numbers:<p><pre><code>  - Binary size: ~4 MB (Postman: ~500 MB)
  - Startup: 42ms (Postman: 3-8 seconds)
  - RAM idle: ~5 MB (Postman: 300-800 MB)
  - Dependencies: 0
  - Account required: No
</code></pre>
It uses plain-text .volt files that live in your git repo. Your API tests
are just files, versioned with your code.<p>For CI, you copy one binary and run `volt test`. No npm, no Docker, no runtime.<p>Why Zig? Honestly, I wanted an excuse to write a lot of Zig. But the
explicit allocators, comptime, and cross-compilation from one machine made
it practical to ship Linux/macOS/Windows from a single codebase with no
external packages.<p>There's a CLI, a TUI with tabs and search, and a Web UI. It's not perfect
but I use it daily and it hasn't crashed on me yet.<p>Site: <a href="https://api-volt.com" rel="nofollow">https://api-volt.com</a>
Repo: <a href="https://github.com/volt-api/volt" rel="nofollow">https://github.com/volt-api/volt</a>
Benchmarks: <a href="https://github.com/volt-api/volt/blob/main/BENCHMARKS.md" rel="nofollow">https://github.com/volt-api/volt/blob/main/BENCHMARKS.md</a><p>Happy to answer questions about the Zig implementation - that's the part
I'm most excited to talk about, honestly.</p>
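<p>To give a feel for the idea - this is a simplified illustration, not the exact syntax, so see the repo for real examples - a request file might look something along these lines:</p><pre><code># get-user.volt (hypothetical syntax, not the real format)
GET {{BASE_URL}}/users/42
header Accept: application/json

assert status == 200
</code></pre><p>The point is just that it's plain text: diffable, reviewable, and versioned like any other file in the repo.</p>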
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47323484">https://news.ycombinator.com/item?id=47323484</a></p>
<p>Points: 5</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 10 Mar 2026 14:10:13 +0000</pubDate><link>https://github.com/volt-api/volt</link><dc:creator>artski</dc:creator><comments>https://news.ycombinator.com/item?id=47323484</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47323484</guid></item><item><title><![CDATA[New comment by artski in "As Nuclear Power Makes a Comeback, South Korea Emerges a Winner"]]></title><description><![CDATA[
<p>I don’t know how South Korea works politically, but take the example of Malaysia: they spent years on their nuclear road map, then a new administration that hates nuclear came in and it got scrapped. Now they're back to one that doesn't mind it and have to start from zero.</p>
]]></description><pubDate>Sun, 18 May 2025 10:08:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=44020254</link><dc:creator>artski</dc:creator><comments>https://news.ycombinator.com/item?id=44020254</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44020254</guid></item><item><title><![CDATA[New comment by artski in "Trump wants coal to power AI data centers"]]></title><description><![CDATA[
<p>Yeah, I think the world is screwed. These aren't things you can shut down instantly without losing a bunch of money, plus the time not spent building better alternatives - all these projects have long lead times.</p>
]]></description><pubDate>Sun, 18 May 2025 08:45:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=44019929</link><dc:creator>artski</dc:creator><comments>https://news.ycombinator.com/item?id=44019929</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44019929</guid></item><item><title><![CDATA[New comment by artski in "Maybe we should be designing for machines too"]]></title><description><![CDATA[
<p>I’ve been thinking a lot about how new features and systems are built lately, especially with everything that’s happened over the past few years. It’s interesting how most of the AI stuff we see in products today is basically tacked on after the fact to chase the trend - some more valuable than others depending on how forced it feels. You build your tool, your dashboard, your app, and then you try to layer in some sort of automation or “assistant” once it’s already working. And I get why - it makes sense when you’ve already got an established thing and you want to enhance it without breaking what people rely on. I did a longer writeup on Substack about it but figured I'd expand the discussion here.<p>But I wonder if we’re now at a point where that can’t really be the default anymore. If you’re building something new in 2025, whether it’s a product, internal tool, or even just a feature, maybe it should be designed from the ground up to be usable not just by a human clicking buttons, but by another system entirely. A model, a script, an orchestration layer - whatever you want to call it.<p>It’s not about being “AI-first” in the marketing sense. It’s more about thinking: can this thing I’m building be used by something else without needing a human in the loop? Can it expose its core functions as callable actions? Can its state be inspected in a structured way? Can it be reasoned about or composed into a workflow? That kind of thinking, I think, will become the baseline expectation - not just a “nice to have.”<p>It’s also not really that complicated. Most of the time it just means thinking in terms of well-structured APIs, surfacing decisions and logs clearly, and not baking critical functionality too deeply into the front-end. But the shift is mental. You start designing features as tools - not just user flows - and that opens up all kinds of new possibilities.
For example, someone might plug your service into a broader workflow and have it run unattended, or an LLM might be able to introspect your system state and take useful actions, or you can just let users automate things with much less effort.<p>There’s been some early but interesting work around formalising how systems expose their capabilities to automation layers. One effort I’ve been keeping an eye on is MCP. The quick summary: it aims to let a service describe what it can do - what functions it offers, what inputs it accepts, what guarantees or permissions it requires - in a way that downstream agents or orchestrators can understand without brittle hand-tuned wrappers. It’s still early days, but if this sort of approach gains traction, I can imagine a future where this kind of “self-describing system contract” becomes part of the baseline for interoperability. Kind of like how APIs used to be considered secondary, and now they are the product. It’s not there yet, but if autonomous coordination becomes more common, this may quietly become essential infrastructure.<p>I don’t know. Just a thought I’ve been chewing on. Curious what other people think. Is anyone building things with this mindset already, or are there good examples out there of products or platforms that got this right from day one?</p>
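<p>As a concrete (and entirely made-up) illustration of the self-describing contract idea - the field names here are my own invention, loosely in the spirit of MCP tool descriptors, not the actual spec - a service might advertise a capability like:</p><pre><code>{
  "name": "create_invoice",
  "description": "Create a draft invoice for a customer",
  "inputs": {
    "customer_id": {"type": "string", "required": true},
    "amount_cents": {"type": "integer", "required": true}
  },
  "permissions": ["billing:write"],
  "side_effects": "creates a draft record only; nothing is sent"
}
</code></pre><p>An orchestrator can read that, decide whether the call fits its goal and permissions, and invoke it without a brittle hand-tuned wrapper.</p>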
]]></description><pubDate>Sun, 18 May 2025 08:32:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=44019873</link><dc:creator>artski</dc:creator><comments>https://news.ycombinator.com/item?id=44019873</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44019873</guid></item><item><title><![CDATA[Maybe we should be designing for machines too]]></title><description><![CDATA[
<p>Article URL: <a href="https://substack.com/sign-in">https://substack.com/sign-in</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44019872">https://news.ycombinator.com/item?id=44019872</a></p>
<p>Points: 3</p>
<p># Comments: 1</p>
]]></description><pubDate>Sun, 18 May 2025 08:32:06 +0000</pubDate><link>https://substack.com/sign-in</link><dc:creator>artski</dc:creator><comments>https://news.ycombinator.com/item?id=44019872</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44019872</guid></item><item><title><![CDATA[New comment by artski in "Show HN: CLI that spots fake GitHub stars, risky dependencies and licence traps"]]></title><description><![CDATA[
<p>Yeah I thought about this and maybe down the line, but wanted to start with the pure statistics part as the base so it's as little of a black box as possible.</p>
]]></description><pubDate>Tue, 13 May 2025 13:38:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=43972844</link><dc:creator>artski</dc:creator><comments>https://news.ycombinator.com/item?id=43972844</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43972844</guid></item><item><title><![CDATA[New comment by artski in "Show HN: CLI that spots fake GitHub stars, risky dependencies and licence traps"]]></title><description><![CDATA[
<p>Crazy how far people go for these things tbh.</p>
]]></description><pubDate>Mon, 12 May 2025 23:26:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=43968393</link><dc:creator>artski</dc:creator><comments>https://news.ycombinator.com/item?id=43968393</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43968393</guid></item><item><title><![CDATA[New comment by artski in "Show HN: CLI that spots fake GitHub stars, risky dependencies and licence traps"]]></title><description><![CDATA[
<p>For each spike it samples the users from that spike (I've currently set the sample size high enough that it essentially gets all of them for 99.99% of repos - that should be optimised for speed, but I figured I'd just grab every single one for now whilst building it). It then checks the users who caused the spike for signs of being "fake accounts".</p>
]]></description><pubDate>Mon, 12 May 2025 23:23:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=43968379</link><dc:creator>artski</dc:creator><comments>https://news.ycombinator.com/item?id=43968379</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43968379</guid></item><item><title><![CDATA[New comment by artski in "Show HN: CLI that spots fake GitHub stars, risky dependencies and licence traps"]]></title><description><![CDATA[
<p>It's a project I'm making purely for myself, and I like to share what I make - sorry I didn't put much effort into the commit messages, won't do that again.</p>
]]></description><pubDate>Mon, 12 May 2025 21:47:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=43967787</link><dc:creator>artski</dc:creator><comments>https://news.ycombinator.com/item?id=43967787</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43967787</guid></item><item><title><![CDATA[New comment by artski in "Show HN: CLI that spots fake GitHub stars, risky dependencies and licence traps"]]></title><description><![CDATA[
<p>Well, I initially planned to use GraphQL and started to implement it, but switched to REST for now since the GraphQL path still isn't fully complete - partly to keep things simpler while I iterate, and partly because it isn't strictly required currently. I'll bring GraphQL back once I've got key cycling in place and things are more stable. As for the rate limit, I've been tweaking things manually to avoid hitting it constantly, which worked to an extent - that's actually why I want to add key rotation... and I am allowed to leave comments for myself in a work in progress, no? Or does everything have to be perfect from day one?<p>You would assume that if it were purely AI-generated it would have the correct rate limit in the comments and the code... but honestly I don't care, and yeah, I ran the README through GPT to 'prettify' it. Arrest me.</p>
]]></description><pubDate>Mon, 12 May 2025 21:43:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=43967758</link><dc:creator>artski</dc:creator><comments>https://news.ycombinator.com/item?id=43967758</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43967758</guid></item><item><title><![CDATA[New comment by artski in "Show HN: CLI that spots fake GitHub stars, risky dependencies and licence traps"]]></title><description><![CDATA[
<p>It would still count as "trustworthy", it just wouldn't come out to 100/100 :(.</p>
]]></description><pubDate>Mon, 12 May 2025 20:45:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=43967278</link><dc:creator>artski</dc:creator><comments>https://news.ycombinator.com/item?id=43967278</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43967278</guid></item><item><title><![CDATA[New comment by artski in "Show HN: CLI that spots fake GitHub stars, risky dependencies and licence traps"]]></title><description><![CDATA[
<p>Not a bad idea tbh - maybe an additional signal for how long issues are left open would be good. Though yeah, that's why I was contemplating not highlighting the actual number and instead showing a range, e.g. 80-100 is Good, 50-70 Moderate, and so on.</p>
]]></description><pubDate>Mon, 12 May 2025 19:26:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=43966659</link><dc:creator>artski</dc:creator><comments>https://news.ycombinator.com/item?id=43966659</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43966659</guid></item><item><title><![CDATA[New comment by artski in "Show HN: CLI that spots fake GitHub stars, risky dependencies and licence traps"]]></title><description><![CDATA[
<p>I haven't done that before so it would be a small learning curve for me to figure that out. Feel free to make a pull request.</p>
]]></description><pubDate>Mon, 12 May 2025 19:09:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=43966512</link><dc:creator>artski</dc:creator><comments>https://news.ycombinator.com/item?id=43966512</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43966512</guid></item><item><title><![CDATA[New comment by artski in "Show HN: CLI that spots fake GitHub stars, risky dependencies and licence traps"]]></title><description><![CDATA[
<p>Lol yeah tbh - I just made it without really thinking of an audience; I was just looking for a project to work on until I saw the paper and figured it would be cool to try it on some repositories out there. That part is just me asking GPT to make the README better.</p>
]]></description><pubDate>Mon, 12 May 2025 16:35:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=43964890</link><dc:creator>artski</dc:creator><comments>https://news.ycombinator.com/item?id=43964890</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43964890</guid></item><item><title><![CDATA[New comment by artski in "Show HN: CLI that spots fake GitHub stars, risky dependencies and licence traps"]]></title><description><![CDATA[
<p>Yeah, to be fair that would be great - sometimes just giving a nudge and showing people want these features is the first step to getting an official integration.</p>
]]></description><pubDate>Mon, 12 May 2025 16:23:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=43964759</link><dc:creator>artski</dc:creator><comments>https://news.ycombinator.com/item?id=43964759</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43964759</guid></item><item><title><![CDATA[New comment by artski in "Show HN: CLI that spots fake GitHub stars, risky dependencies and licence traps"]]></title><description><![CDATA[
<p>Fair take - it's definitely context-dependent. In some cases solo-maintainer projects can be great, especially if they're stable or purpose-built. But from a trust and maintenance standpoint it's worth flagging as a signal: if 90% of commits are from one person who's now inactive, it could mean slow responses to bugs or no updates for security issues. It doesn't mean the project is bad - just something to consider alongside other factors.<p>Heuristics are never perfect and it's all iterative, but it's all about understanding the underlying assumptions and interpreting the output in your own context. I could probably enhance it slightly by running things through an LLM with a prompt, but I prefer to keep things purely statistical for now.</p>
]]></description><pubDate>Mon, 12 May 2025 16:19:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=43964713</link><dc:creator>artski</dc:creator><comments>https://news.ycombinator.com/item?id=43964713</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43964713</guid></item><item><title><![CDATA[New comment by artski in "Show HN: CLI that spots fake GitHub stars, risky dependencies and licence traps"]]></title><description><![CDATA[
<p>Basically what I mean by it is, for example, a repository that appears to be under a permissive license like MIT, Apache, or BSD, but actually includes code governed by a much stricter or viral license - like GPL or AGPL - often buried in a subdirectory, dependency, or embedded snippet. The problem is, if you reuse or build on that code assuming it's fully permissive, you could end up violating the terms of the stricter license without realising it. It's a trap because the original authors might have mixed incompatible licenses, knowingly or not, and the legal risk then falls on downstream users. So yeah, essentially plain old license violations, which are relatively easy to miss or not think about.</p>
]]></description><pubDate>Mon, 12 May 2025 16:11:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=43964614</link><dc:creator>artski</dc:creator><comments>https://news.ycombinator.com/item?id=43964614</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43964614</guid></item><item><title><![CDATA[New comment by artski in "Show HN: CLI that spots fake GitHub stars, risky dependencies and licence traps"]]></title><description><![CDATA[
<p>Yeah, to be fair I need to clean it up - I was stuck testing different strategies and making it work, and just wanted to get feedback asap before moving on (didn't want to spend too much time on something and find out I was badly wrong about it). Next step is to get it all cleaned up.</p>
]]></description><pubDate>Mon, 12 May 2025 15:20:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=43964051</link><dc:creator>artski</dc:creator><comments>https://news.ycombinator.com/item?id=43964051</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43964051</guid></item><item><title><![CDATA[Show HN: CLI that spots fake GitHub stars, risky dependencies and licence traps]]></title><description><![CDATA[
<p>When I came across a study that traced 4.5 million fake GitHub stars, it confirmed a suspicion I’d had for a while: stars are noisy. The issue is they’re visible, they’re persuasive, and they still shape hiring decisions, VC term sheets, and dependency choices - but they say very little about actual quality.<p>I wrote StarGuard to put that number in perspective, using my own methodology inspired by their approach, and to fold a broader supply-chain check into one command-line run.<p>It starts with the simplest raw input: every starred_at timestamp GitHub will give you. It applies a median-absolute-deviation test to locate sudden bursts. For each spike, StarGuard pulls a random sample of the accounts behind it and asks: how old is the user? Any followers? Any contribution history? Still using the default avatar? From that, it computes a Fake Star Index, between 0 (organic) and 1 (fully synthetic).<p>But inflated stars are just one issue. In parallel, StarGuard parses dependency manifests or SBOMs and flags common risk signs: unpinned versions, direct Git URLs, lookalike package names. It also scans licences - AGPL sneaking into a repo claiming MIT, or other inconsistencies that can turn into compliance headaches.<p>It checks contributor patterns too. If 90% of commits come from one person who hasn’t pushed in months, that’s flagged. It skims for obvious code red flags: eval calls, minified blobs, sketchy install scripts - because sometimes the problem is hiding in plain sight.<p>All of this feeds into a weighted scoring model. The final Trust Score (0–100) reflects repo health at a glance, with direct penalties for fake-star behaviour, so a pretty README badge can’t hide inorganic hype.<p>Just for fun, I also made it generate a cool little badge for the trust score lol.<p>Under the hood, it's all heuristics and a lot of GitHub API paging. Run it on any public repo with:<p><pre><code>python starguard.py owner/repo --format markdown
</code></pre>
It works without a token, but you’ll hit rate limits sooner.<p>Any feedback is very welcome.</p>
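<p>To make the spike-detection step concrete, here's a rough sketch of the MAD test - simplified from what StarGuard actually does; the function name, daily bucketing, and 3.5 cutoff here are illustrative choices:</p>

```python
import statistics

def mad_spikes(daily_counts, threshold=3.5):
    """Flag indices of days whose star count is an outlier under a
    median-absolute-deviation (modified z-score) test.

    daily_counts: stars gained per day, from bucketing the starred_at
    timestamps into daily totals.
    """
    med = statistics.median(daily_counts)
    mad = statistics.median([abs(c - med) for c in daily_counts])
    if mad == 0:
        return []  # perfectly flat history: nothing to flag
    # 0.6745 rescales MAD so the score is comparable to a z-score
    return [i for i, c in enumerate(daily_counts)
            if 0.6745 * (c - med) / mad > threshold]
```

<p>A burst day stands out immediately: on <code>[1, 2, 1, 2, 1, 500, 2, 1, 2, 1]</code> only index 5 is flagged. The accounts behind that day's stars are what then get sampled and scored.</p>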
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43962427">https://news.ycombinator.com/item?id=43962427</a></p>
<p>Points: 122</p>
<p># Comments: 72</p>
]]></description><pubDate>Mon, 12 May 2025 12:59:19 +0000</pubDate><link>https://github.com/m-ahmed-elbeskeri/Starguard</link><dc:creator>artski</dc:creator><comments>https://news.ycombinator.com/item?id=43962427</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43962427</guid></item><item><title><![CDATA[Scaling AI agent tooling: ideas for dynamic discovery and execution workflows]]></title><description><![CDATA[
<p>I've been working on designs for scaling tool use in AI agents and wanted to share some ideas for feedback.<p>The problem is familiar: as agents get access to more tools, either you overload the system with static tool lists, or you hard-code the tools at design time. Both approaches hit scaling limits quickly.<p>What I’ve been exploring is letting agents discover and use tools dynamically. Instead of loading every tool into the system upfront, the agent could explore an external registry at runtime, inspect tool descriptions and parameters, and decide what to use based on its current goal.<p>There are a few workflows I’m testing:<p>Manual exploration: agent lists tools, inspects them, and selects what to use.<p>Fuzzy auto-selection: agent describes its goal, and the system suggests a matching tool.<p>External LLM-assisted selection: agent outsources tool selection to another agent or service, which queries the registry and recommends a tool.<p>Each of these has trade-offs. Manual exploration is transparent but slow. Fuzzy matching is faster but depends on query quality. External assistance can scale to complex environments but adds system complexity.<p>The goal is to move away from static APIs and let agents explore tools like a developer browsing an API catalog. Search, inspect, and use tools as needed, rather than loading them all upfront.<p>Open questions I’m still thinking about:<p>Should these workflows be combined, escalating from manual to auto-selection if needed?<p>Should the system suggest parameter defaults or usage examples?<p>Should I move from basic matching to semantic search?<p>Would chaining tools together unlock more complex workflows?<p>I've written up a more detailed research note here, if anyone's interested:
https://github.com/m-ahmed-elbeskeri/MCPRegistry/tree/main<p>Would appreciate any feedback or experiences if you've explored similar patterns.</p>
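<p>To make the fuzzy auto-selection workflow concrete, here's a toy sketch - all names are invented, and real matching would want semantic search rather than raw string similarity:</p>

```python
import difflib

class ToolRegistry:
    """Toy registry sketching the workflows above: manual exploration
    via list_tools(), fuzzy auto-selection via suggest()."""

    def __init__(self):
        self._tools = {}  # tool name -> natural-language description

    def register(self, name, description):
        self._tools[name] = description

    def list_tools(self):
        # manual exploration: the agent enumerates and inspects tools
        return dict(self._tools)

    def suggest(self, goal):
        # fuzzy auto-selection: crude string similarity between the
        # agent's stated goal and each tool description
        scored = [
            (difflib.SequenceMatcher(None, goal.lower(), desc.lower()).ratio(), name)
            for name, desc in self._tools.items()
        ]
        return max(scored)[1] if scored else None
```

<p>So an agent that says "download a web page" gets pointed at a registered <code>fetch_url</code> tool rather than <code>send_email</code>. Escalation falls out naturally: try <code>suggest()</code> first, drop back to <code>list_tools()</code> when the match score is too low to trust.</p>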
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43609356">https://news.ycombinator.com/item?id=43609356</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 07 Apr 2025 09:05:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=43609356</link><dc:creator>artski</dc:creator><comments>https://news.ycombinator.com/item?id=43609356</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43609356</guid></item></channel></rss>