<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: weitendorf</title><link>https://news.ycombinator.com/user?id=weitendorf</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 15 Apr 2026 11:34:43 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=weitendorf" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by weitendorf in "My adventure in designing API keys"]]></title><description><![CDATA[
<p>Hey OP, sorry about all the negativity; I think most of these commenters are pretty off-base. My company is building a lot of API infrastructure and I thought this was a great write-up!</p>
]]></description><pubDate>Wed, 15 Apr 2026 07:49:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=47775920</link><dc:creator>weitendorf</dc:creator><comments>https://news.ycombinator.com/item?id=47775920</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47775920</guid></item><item><title><![CDATA[New comment by weitendorf in "Ask HN: What Are You Working On? (April 2026)"]]></title><description><![CDATA[
<p>Hey, I've been getting into visual processing lately and we just started working on an offline wrapper for Apple's vision/other ML libraries via CLI: <a href="https://github.com/accretional/macos-vision" rel="nofollow">https://github.com/accretional/macos-vision</a>. You can see some SVG art I created in a screenshot I just posted for a different comment <a href="https://i.imgur.com/OEMPJA8.png" rel="nofollow">https://i.imgur.com/OEMPJA8.png</a> (on the right is a cubist Plato SVG lol)<p>Since your app is fully offline I'd love to chat about photogenesis/your general work in this area, since there may be a good opportunity for collaboration. I've been working on some image stuff and want to build a local desktop/web application. Here are some UI mockups I've been playing with (many are AI-generated, though some of the features are functional; I realized that with CSS/SVG masks you can do a ton more than you'd expect): <a href="https://i.imgur.com/SFOX4wB.png" rel="nofollow">https://i.imgur.com/SFOX4wB.png</a> <a href="https://i.imgur.com/sPKRRTx.png" rel="nofollow">https://i.imgur.com/sPKRRTx.png</a> but most likely we don't have all the UI/vision expertise we'd need to take them to completion.</p>
]]></description><pubDate>Sun, 12 Apr 2026 21:39:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47744834</link><dc:creator>weitendorf</dc:creator><comments>https://news.ycombinator.com/item?id=47744834</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47744834</guid></item><item><title><![CDATA[New comment by weitendorf in "Bring Back Idiomatic Design (2023)"]]></title><description><![CDATA[
<p>Guys, I found out about this technology called Cascading Style Sheets recently and I think it's the missing piece we've been looking for. It lets you declaratively specify layout in a composable, hierarchical system based on something called the Document Object Model in a way that minimizes both clientside <i>and</i> serverside processing, based on these things called "stylesheets".<p>The best part is, it's super easy to customize them, read others for inspiration or to see how they did something, or even ship multiple per site to deal with different user preferences. Through this "forms" api, and little-known browser features like url-fragments, target/attribute selector, and style combinators, plus "the checkbox hack" you can build extremely responsive UIs out of it by "cascading" UI updates through your site! When do you think they're going to add it to next.js?<p>I'm tentatively calling this new UI paradigm "no-framework" or "no package manager", not sure yet <a href="https://i.imgur.com/OEMPJA8.png" rel="nofollow">https://i.imgur.com/OEMPJA8.png</a></p>
]]></description><pubDate>Sun, 12 Apr 2026 19:52:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47743727</link><dc:creator>weitendorf</dc:creator><comments>https://news.ycombinator.com/item?id=47743727</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47743727</guid></item><item><title><![CDATA[New comment by weitendorf in "Your File System Is Already A Graph Database"]]></title><description><![CDATA[
<p>Understood. I guess I'm saying "soon" but definitely agreed it's not "now" yet. I will say though, with 96GB, in a couple months you're going to be able to hold tons of Gemma 4 LoRA "specialists" in-memory at the same time, and I really think it will feel like a whole new world once these are all getting trained, shared, and adapted en masse. Also, you <i>could</i> set up personal traces now if you want. Nobody can make you, but in its laziest form it can be literally just taking screenshots of your screen periodically as you work, and that'll have applications soon.</p>
]]></description><pubDate>Wed, 08 Apr 2026 16:03:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47692069</link><dc:creator>weitendorf</dc:creator><comments>https://news.ycombinator.com/item?id=47692069</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47692069</guid></item><item><title><![CDATA[New comment by weitendorf in "Your File System Is Already A Graph Database"]]></title><description><![CDATA[
<p>Right, that's exactly the situation I'm in too, and "send them to go looking for stuff for you" without it going off the rails is the problem we've been working on.<p>Basically you need a squad of specialized models to do this in a mostly-structured way that ends up looking kind of like a crawling or scraping/search operation. I can share a stack of about 5-6 that are working for us directly if you want; I want to keep the exact stack on the DL for now, but you can check my company's recent GitHub activity to get an idea of it. It's basically a "browser agent" where Gemma or Qwen guide the general navigation/summarization but mostly focus on information extraction and normalization.<p>The other thing I've done, which obviously not everybody is going to want to do, is create emails and browser profiles for the browser agent (since they basically work when I'm not on the computer, but need identity to navigate the web) and run them on devices that don't have the keys to the kingdom. I also give them my phone number and their own (via an endpoint they can only call me from). That way if they run into something they have a way to escalate it, and I can do limited steering out of the loop. Obviously this is way more work than is reasonable for most people right now, so I'm hoping to show people a proper batteries-included setup for it soon.<p>Edit: Based on your other comment, I think maybe what you're really looking for most are "personal traces". Right now that's something we're working on with <a href="https://github.com/accretional/chromerpc" rel="nofollow">https://github.com/accretional/chromerpc</a> (which uses the lower-level Chrome DevTools Protocol rather than Puppeteer to basically fully automate web navigation, either through an LLM or prescriptive workflows). It would be very simple to set up automation to take a screenshot and save it locally every Xm or in response to certain events, and generate traces for yourself that way, if you want.
That alone provides a pretty strong base for a personal dataset.</p>
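The periodic-screenshot idea is mostly just a scheduling loop around whatever does the actual capture. A minimal sketch (the `capture` callback and the file naming are my own invention, standing in for a real chromerpc/CDP screenshot call, not its actual API):

```python
import time
from pathlib import Path

def run_trace_loop(capture, out_dir, interval_s=300, clock=time.time,
                   sleep=time.sleep, max_shots=None):
    """Call `capture(path)` every `interval_s` seconds, writing
    timestamped files under `out_dir`. `clock` and `sleep` are
    injectable so the loop can be driven by a fake clock in tests."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    shots = 0
    next_due = clock()
    while max_shots is None or shots < max_shots:
        now = clock()
        if now < next_due:
            sleep(next_due - now)
            continue
        path = out / f"trace-{int(now)}.png"
        capture(path)  # e.g. a wrapper around a CDP screenshot command
        shots += 1
        next_due = now + interval_s
    return shots
```

Injecting `clock`/`sleep` also makes it easy to later swap the timer for event-driven triggers without touching the capture side.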
]]></description><pubDate>Wed, 08 Apr 2026 15:29:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47691577</link><dc:creator>weitendorf</dc:creator><comments>https://news.ycombinator.com/item?id=47691577</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47691577</guid></item><item><title><![CDATA[New comment by weitendorf in "Your File System Is Already A Graph Database"]]></title><description><![CDATA[
<p>> Why does AI need that folder structure? Why not a flat list of files and let the AI agent explore with BM25 / grep, etc.<p>Progressive disclosure, the same reason you don't get assaulted with all the information a website has to offer at once, or get handed a SQL console and told to figure it out, and instead see a portion of the information in a way that is supposed to naturally lead you to finding the next bits of information you're looking for.<p>> use cases<p>This is essentially just where you're moving the hierarchy/compression, but at least for me these are not very disjoint and separable. I think what I actually want are adaptable LoRA that loosely correspond to these use cases, but where a dense discriminator or other system is able to adapt and stay in sync with them too. Also, tool-calling + SQL/vector embeddings, so that you can actually get good filesystem search without it feeling like work, and let the model filter out the junk.<p>> let the AI calculate this at run time?<p>You still do want to let it do agentic RAG, but I think more tools are better. We're using sqlite-vec, generating multimodal and single-mode embeddings, and trying to make everything typed into a walkable graph of entity types, because that makes it much easier to efficiently walk/retrieve the "semantic space" in a way that generalizes. A small local model needs at least enough structure to know these are the X ways available to look for something and they are organized in Y ways, oriented towards Z and A things.<p>Especially on-device, telling them to "just figure it out" is like dropping a toddler or autonomous vehicle into a dark room and telling them to build you a search engine lol. They need some help, and quite literally to be taught what a search engine means for these purposes. Also, if you just let them explore or write things without any kind of grounding in what you need/any kind of positive signals, they're just going to be making a mess on your computer.</p>
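To make the "typed, walkable graph + vector search" idea concrete, here's a toy sketch using stdlib sqlite3 with brute-force cosine similarity (in practice this would be sqlite-vec and real embedding models; every table, column, and function name below is made up for illustration):

```python
import json
import math
import sqlite3

def cosine(a, b):
    """Cosine similarity between two plain-list vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE entity (id INTEGER PRIMARY KEY, kind TEXT, name TEXT, emb TEXT);
CREATE TABLE edge (src INTEGER, dst INTEGER, rel TEXT);
""")

def add_entity(kind, name, emb):
    cur = db.execute("INSERT INTO entity (kind, name, emb) VALUES (?, ?, ?)",
                     (kind, name, json.dumps(emb)))
    return cur.lastrowid

def nearest(query_emb, kind=None, k=3):
    """Brute-force vector search, optionally restricted to one entity kind."""
    sql = "SELECT id, kind, name, emb FROM entity"
    args = ()
    if kind:
        sql += " WHERE kind = ?"
        args = (kind,)
    rows = db.execute(sql, args).fetchall()
    scored = [(cosine(query_emb, json.loads(e)), i, kd, nm)
              for i, kd, nm, e in rows]
    return [nm for _, _, _, nm in sorted(scored, reverse=True)[:k]]

def neighbors(entity_id, rel):
    """Walk one typed edge out from an entity."""
    rows = db.execute(
        "SELECT e.name FROM edge j JOIN entity e ON e.id = j.dst "
        "WHERE j.src = ? AND j.rel = ?", (entity_id, rel)).fetchall()
    return [r[0] for r in rows]
```

The point is the combination: typed rows give the small model a fixed menu of "X ways to look", and the edges let it walk from a hit to related entities instead of re-searching from scratch.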
]]></description><pubDate>Wed, 08 Apr 2026 15:03:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47691218</link><dc:creator>weitendorf</dc:creator><comments>https://news.ycombinator.com/item?id=47691218</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47691218</guid></item><item><title><![CDATA[New comment by weitendorf in "Your File System Is Already A Graph Database"]]></title><description><![CDATA[
<p>This is exactly what we're working on. Is there any application in particular you're interested in the most?<p>> I'm struggling collecting actual data I could use for fine-tuning myself,<p>Journalling or otherwise writing is by far the best way to do this IMO, but it doesn't take very much audio to accurately do a voice-clone. The hard thing about journalling is that it can actually be really biased away from the actual "distribution" of you, whether it's more aspirational or emotional or less rigorous/precise with language.<p>What I'm starting to do is save as many of my prompts as possible, because I realized a lot of my professional writing was there, and it was actually pretty valuable data (especially paired with outputs and knowledge of what went well and what didn't) for finetuning on my own workloads. Second is assembling/curating a collection of tools and products that I can drop into each new context with LLMs and also use for finetuning them on my own needs. Unlike "knowledge repositories", these both accurately model my actual needs and work and don't really require me to do anything unnatural.<p>The other thing I'm about to start doing is "natural" in a certain sense but kinda weird: basically recording myself talking to my computer (verbalizing my thoughts more so they can be embedded alongside my actions, which may be much sparser from the computer's perspective) / screen recordings of my session as I work with it. This is something I've had to look into building more specialized tools for, because it creates too much data to save all of it. But basically there are small models, transcoding libraries, and pipelines you can use for audio/temporal/visual segmentation and transcription to compress the data back down into tokens and normal-sized images.<p>This is basically creating a semantic search engine of yourself as you work. Kinda weird, but IMO it's just much weirder that your computer can actually talk back and learn about you now.
With 96GB you can definitely do it BTW. I successfully finetuned an audio workload on Gemma 4 2B yesterday on a 16GB Mac mini. With 96GB you could do a lot.<p>> letting LLMs write docs and add them to a "knowledge repository"<p>I think what you actually want is to send them to go looking for stuff for you, or actively seeking out "learning" about something for their own role/purposes, so they can embed the useful information and better retrieve it when they need it, or produce traces grounded in positive signals (eg having access to this piece of information or tool, or applying this technique or pattern, measurably improves performance at something in-distribution to whatever you have them working on) that they can use in fine-tuning themselves.</p>
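Saving your prompts as finetuning data can be as simple as appending prompt/output pairs with an outcome label and keeping only the positively-grounded ones. A sketch (the record shape and the "good"/"bad" labels are just illustrative, not any particular trainer's format):

```python
import json
from pathlib import Path

def log_interaction(log_path, prompt, output, signal):
    """Append one prompt/output pair with an outcome label to a JSONL log."""
    rec = {"prompt": prompt, "output": output, "signal": signal}
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(rec) + "\n")

def build_sft_dataset(log_path):
    """Keep only positively-labelled pairs, shaped like chat SFT examples."""
    examples = []
    for line in Path(log_path).read_text(encoding="utf-8").splitlines():
        rec = json.loads(line)
        if rec["signal"] == "good":
            examples.append({"messages": [
                {"role": "user", "content": rec["prompt"]},
                {"role": "assistant", "content": rec["output"]},
            ]})
    return examples
```

The filter step is the important part: only pairs tied to a positive signal go into the dataset, which is what keeps the finetune grounded in work that actually went well.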
]]></description><pubDate>Wed, 08 Apr 2026 14:36:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47690828</link><dc:creator>weitendorf</dc:creator><comments>https://news.ycombinator.com/item?id=47690828</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47690828</guid></item><item><title><![CDATA[New comment by weitendorf in "MegaTrain: Full Precision Training of 100B+ Parameter LLMs on a Single GPU"]]></title><description><![CDATA[
<p>To make the most of these architectures I think the key is essentially moving more of the knowledge/capabilities out of the "weights" and into the complementary parts of the system, in a way that's proportionate to the capabilities of the hardware.<p>In the past couple months there's been a kind of explosion in small models occupying a niche in this kind of AI-transcoding space. What I'm hoping we're right on the cusp of achieving is a similar explosion in what I'd call tool-adaptation, where an LLM paired with some mostly-fixed suite of tools and problem cases can trade off some generality for a specialized (potentially hyper-specialized to the company or user) role.<p>The thing about more transcoding-related tasks is that they generally stay in sync with what the user of the device is actively doing, which will also typically be closely aligned with the capabilities of the user's hardware and what they want to do with their computer. Most people aren't being intentional about this kind of stuff right now, partly out of habit I think, because only just now does it make sense to think of a personal computer as "stranded hardware", now that it can be steered/programmed somewhat autonomously.<p>I'm wondering if with the right approach to MoE on local devices (which local LLMs are heading towards) we could basically amortize the expensive hit from loading weights in and out of VRAM through some kind of extreme batch use case that users still find useful enough to be worth the latency. LoRA is already really useful for this, but obviously sometimes you need more expertise/specialization than just a few layers' difference. Experimenting with this right now. It's the same basic principle as in the paper, except less of a technical optimization and more a workload optimization. Also, it's literally the beginning of machine culture, so that's kind of cool</p>
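The amortization idea can be sketched as a toy scheduler that groups queued requests by which adapter they need, so each expensive weight load into VRAM serves a whole batch instead of a single request (the request shape, `max_batch`, and the greedy largest-group-first ordering are all hypothetical choices, not a real serving stack):

```python
from collections import defaultdict

def plan_batches(requests, max_batch=8):
    """Group (adapter_id, prompt) requests so each adapter swap serves
    as many requests as possible; split groups larger than max_batch."""
    by_adapter = defaultdict(list)
    for adapter_id, prompt in requests:
        by_adapter[adapter_id].append(prompt)
    batches = []
    # Largest groups first, so scarce VRAM swaps pay off on the most requests.
    for adapter_id, prompts in sorted(by_adapter.items(),
                                      key=lambda kv: -len(kv[1])):
        for i in range(0, len(prompts), max_batch):
            batches.append((adapter_id, prompts[i:i + max_batch]))
    return batches
```

With N requests spread over A adapters, this does A-ish weight loads instead of up to N, which is the whole trade: more latency per request, far less time spent shuffling weights.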
]]></description><pubDate>Wed, 08 Apr 2026 14:06:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47690424</link><dc:creator>weitendorf</dc:creator><comments>https://news.ycombinator.com/item?id=47690424</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47690424</guid></item><item><title><![CDATA[New comment by weitendorf in "Show HN: Gemma 4 Multimodal Fine-Tuner for Apple Silicon"]]></title><description><![CDATA[
<p>Excellent work; your repo is much more robust and fleshed out, and I am just beelining straight to audio LoRA not really knowing what I'm doing, as this is my first time attempting a ~real ML training project.<p>I think in <a href="https://github.com/mattmireles/gemma-tuner-multimodal/blob/main/gemma_tuner/models/gemma/gemma4_patches.py" rel="nofollow">https://github.com/mattmireles/gemma-tuner-multimodal/blob/m...</a> and <a href="https://github.com/mattmireles/gemma-tuner-multimodal/blob/main/README/guides/apple-silicon/gemma4-guide.md" rel="nofollow">https://github.com/mattmireles/gemma-tuner-multimodal/blob/m...</a> and <a href="https://github.com/mattmireles/gemma-tuner-multimodal/blob/main/README/guides/apple-silicon/LoRA-Apple-Silicon-Guide.md" rel="nofollow">https://github.com/mattmireles/gemma-tuner-multimodal/blob/m...</a> you have a superset of the various kludges I have in my finetuning repo; I'm going to study this and do what I can to learn from it. Really appreciate you sharing it here!<p>Definitely interested in swapping notes if you are though. Probably the biggest thing that came out of this exercise for us was realizing that Apple actually has some really powerful local inference/data processing tools available; they're just marketed much more towards application developers, so a lot of them fly under the radar.<p>We just published <a href="https://github.com/accretional/macos-vision" rel="nofollow">https://github.com/accretional/macos-vision</a> to make Apple's local OCR, image segmentation, foreground-masking, facial analysis, classification, and video tracking functionality accessible via CLI, and hopefully more common in ML and data workloads. Hopefully you or someone else can get some use out of it. I definitely will from yours!</p>
]]></description><pubDate>Wed, 08 Apr 2026 00:30:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47683153</link><dc:creator>weitendorf</dc:creator><comments>https://news.ycombinator.com/item?id=47683153</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47683153</guid></item><item><title><![CDATA[New comment by weitendorf in "Show HN: Gemma 4 Multimodal Fine-Tuner for Apple Silicon"]]></title><description><![CDATA[
<p>Hey I was literally just working on this today (I was racing ahead on an audio FT myself but OP beat me by a few hours). For audio inference definitely try running your input through VAD first to drop junk data and if necessary, as one of several preprocessing steps before sending the audio to the large model. You can check out how I did it here: <a href="https://github.com/accretional/vad/blob/main/pkg/vad/vad.go" rel="nofollow">https://github.com/accretional/vad/blob/main/pkg/vad/vad.go</a><p>I was using <a href="https://huggingface.co/onnx-community/pyannote-segmentation-3.0" rel="nofollow">https://huggingface.co/onnx-community/pyannote-segmentation-...</a> because with ONNX, I could run it on Intel servers with vectorized instructions, locally on my Mac, AND in-browser with transformers.js<p>VAD is absurdly time-effective (I think like O(10s) to segment 1hr of audio or something) and reduces the false positive rate/cost of transcription and multimodal inference since you can just pass small bits of segmented audio into another model specializing in that, then encode it as text before passing it to the expensive model.</p>
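The VAD above is a learned model (pyannote), but the reason segmentation is so cheap can be illustrated with a toy energy gate over fixed-size frames; this is emphatically not what pyannote does, just the general shape of the computation, with the frame size and threshold picked arbitrarily:

```python
def energy_vad(samples, frame=160, threshold=0.01):
    """Mark each frame voiced/unvoiced by mean energy, then merge
    consecutive voiced frames into (start, end) sample ranges."""
    segments = []
    start = None
    n_frames = len(samples) // frame
    for i in range(n_frames):
        chunk = samples[i * frame:(i + 1) * frame]
        energy = sum(x * x for x in chunk) / frame
        if energy >= threshold and start is None:
            start = i * frame          # a voiced run begins
        elif energy < threshold and start is not None:
            segments.append((start, i * frame))  # the run ends
            start = None
    if start is not None:
        segments.append((start, n_frames * frame))
    return segments
```

One linear pass over the audio, which is why dropping the silent majority of a recording before it ever reaches a transcription or multimodal model is such a cheap win.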
]]></description><pubDate>Wed, 08 Apr 2026 00:01:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47682913</link><dc:creator>weitendorf</dc:creator><comments>https://news.ycombinator.com/item?id=47682913</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47682913</guid></item><item><title><![CDATA[New comment by weitendorf in "One ant for $220: The new frontier of wildlife trafficking"]]></title><description><![CDATA[
<p>> they seemed to be selling them legally<p>I think realistically businesses in other parts of the world have no incentive to fully enforce ethical provenance across the entire supply chain for these kinds of products, and in most cases, fully lack the capability either. You'd have to run some kind of ATF-kinda thing in a third world country where official rule of law is already dicey or absent.</p>
]]></description><pubDate>Mon, 06 Apr 2026 17:22:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47663964</link><dc:creator>weitendorf</dc:creator><comments>https://news.ycombinator.com/item?id=47663964</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47663964</guid></item><item><title><![CDATA[New comment by weitendorf in "I won't download your app. The web version is a-ok"]]></title><description><![CDATA[
<p>They are so paranoid about scraping, or about someone building automations on top of their app that they don't want you to have, that they are willing to make their actual application borderline unusable for the power users who would actually be willing to pay for their first-party upsells and features.<p>It's infuriating. I have literally tried all of their paid products in various forms (they are expensive but the value is clearly there if you're a business). If only they invested as much in making them actually good as they did in preventing you from using extensions or other tools to implement the features they can't or won't, I'm sure they'd get a lot more business.</p>
]]></description><pubDate>Mon, 06 Apr 2026 17:16:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47663847</link><dc:creator>weitendorf</dc:creator><comments>https://news.ycombinator.com/item?id=47663847</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47663847</guid></item><item><title><![CDATA[New comment by weitendorf in "France pulls last gold held in US for $15B gain"]]></title><description><![CDATA[
<p>That was also a last-ditch effort to maintain pre-WW2 geopolitical structures rather than a bipolar US-sphere vs Soviet-sphere world. Note that this was basically the nail in the coffin that led to their full-fledged decolonization in the following years. At the time the UK still held very significant military and political sway over the Middle East, East Africa, and Asia<p><a href="https://en.wikipedia.org/wiki/British_Empire#/media/File:British_Decolonisation_in_Africa.png" rel="nofollow">https://en.wikipedia.org/wiki/British_Empire#/media/File:Bri...</a></p>
]]></description><pubDate>Mon, 06 Apr 2026 17:09:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47663734</link><dc:creator>weitendorf</dc:creator><comments>https://news.ycombinator.com/item?id=47663734</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47663734</guid></item><item><title><![CDATA[New comment by weitendorf in "I won't download your app. The web version is a-ok"]]></title><description><![CDATA[
<p>I agree with this a lot tbh. I think we need to have better support for tiling or something iframe-like in web interfaces. Probably for deep research or focused work, we need something more tree-shaped than the flat tabs-with-back-button structure web browsers expose.</p>
]]></description><pubDate>Mon, 06 Apr 2026 16:56:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47663535</link><dc:creator>weitendorf</dc:creator><comments>https://news.ycombinator.com/item?id=47663535</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47663535</guid></item><item><title><![CDATA[New comment by weitendorf in "Microsoft hasn't had a coherent GUI strategy since Petzold"]]></title><description><![CDATA[
<p>Over the past year I've started thinking a lot more about design and UI work, and I think it's basically impossible to design things, or create design systems, that appeal broadly to different types of users in a cross-platform way.<p>I personally love dense UIs and have no expectation of doing certain kinds of work on a phone or low-powered device like a Chromebook or bottom-barrel laptop. But if you're a company trying to sell products to a broad user base, you want to design in a way that works for those kinds of users, because they still might be end-users of your product. And there's a good chance that those platforms are where someone first evaluates your product (eg from a link shared and accessed on a mobile device), even for the users who do plan on using more powerful desktop devices to do their work.<p>So instead we get these information-poor, incoherent interfaces (because it turns out proper cross-platform, cross-user design is much more difficult than just getting something that works cross-platform for all users on its surface). I guess I'm writing this just to add: web/mobile have complicated things partially because, more than just requiring their own distinct patterns, they each represent a distinct medium that products try to target with the same kind of design. But because they're different mediums, it's like trying to square a circle.</p>
]]></description><pubDate>Mon, 06 Apr 2026 16:52:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47663463</link><dc:creator>weitendorf</dc:creator><comments>https://news.ycombinator.com/item?id=47663463</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47663463</guid></item><item><title><![CDATA[New comment by weitendorf in "The CMS is dead, long live the CMS"]]></title><description><![CDATA[
<p>Agreed, the type of person who can say "I'll just build my own CMS" is not usually the type of person spending a significant portion of their job time using a CMS.<p>And you might say, well, if they're somewhat technical (which is much more likely; think about eg technical writers or product managers or marketing teams) they can use AI to add more features. But when you actually have something at stake security-wise, that means you need to either put them on rails with something much more prescriptive (a "trad-CMS" lmao) or spend a bunch of time reviewing/fixing their code (which, since they're not the same kind of person as you, may not even be something they have any interest in doing, and kind of just gets in the way of them getting their message out on your site as intended).<p>That said, I think most tech companies will still roll their own internal tools to do this rather than buy it off the shelf, just because buying it through a vendor and fully setting it up in a way that's secure and integrated with your business processes involves more work than rolling it yourself, and has a lot of ways it can go wrong.<p>IMO what you really want is some kind of FOSS CMS that works really well off the shelf for a small team, and has a strong ecosystem of integrations to add on SSO and visual editors and stuff like that as you grow, where you can also probably just hire someone to do that part, since that would probably coincide with your business getting too busy for an internal CMS to be the most effective use of your time. Which is literally WordPress.<p>It's just that WordPress is a death-by-a-thousand-cuts of mediocre-quality/over-complicated stuff, and the core technology has some bad abstractions/shows its age, and that emanates out into everything else it touches.
Also, while it's true that a static site is much better for most people, SOMEONE has to actually run a web server for those files, and that does actually cost money to provide, so I've softened my thoughts on WordPress doing that. It's not actually free for Cloudflare to do it for you; it's just a loss leader they can afford to give away because they have economies of scale and privileged access to the Internet.</p>
]]></description><pubDate>Sat, 04 Apr 2026 20:10:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47642873</link><dc:creator>weitendorf</dc:creator><comments>https://news.ycombinator.com/item?id=47642873</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47642873</guid></item><item><title><![CDATA[New comment by weitendorf in "The CMS is dead, long live the CMS"]]></title><description><![CDATA[
<p>Yes, I completely agree. The thing is, this kind of customer just doesn't want to bother themselves with the technical details, and has no frame of reference to understand, or even care, why WordPress isn't actually a good fit for hosting their site.<p>They also usually don't want to self-serve. IMO this became abundantly clear once I saw who was using bolt.new and Lovable and what was being built. You'd think these would be perfect fits for non-technical business owners, but after talking to them more it turns out they just don't have the time or interest to spend hours on building some little marketing site, and want it to be someone else's <i>responsibility</i>. Conversely, I would never build something with Framer, and have no interest in letting some fly-by-night agency hold my site hostage, but they do a lot better at actually delivering value to end users without making them spend their time on tech stuff they don't care about.<p>Meanwhile, the kind of person spending hours building a site on Lovable for some SaaS product nobody will ever use has an abundance of time and doesn't really want to pay for anything. Most of the time they won't even put their own name on the site lol. You just don't want to deal with that kind of person IMO. Cloudflare and GitHub allow it because there's a small chance that a small portion of that kind of person ends up actually making something valuable, and because they have a different cost structure due to their affiliations with massive infrastructure holders.<p>I got very, very close to launching a vertical static site hosting product a few months ago but eventually realized this was kind of a market for lemons. Our own site is on a Lovable-like platform we built that uses our own svelte-based FOSS static site generator called Statue.
But in using it to try to make some visualization on our own site, and vibe-debug stuff like a non-technical customer would (this thing on this page is broken in this way) I realized that this wouldn't actually feel like magic to someone who values their time, or isn't getting paid a salary to be a web developer and doesn't understand/care that it's still quite labor-intensive to do this.<p>IMO the real money is in actually being willing to take accountability/responsibility for building someone's site, and building real tooling around it that works for non-developers AND developers, which is what we're building towards now. It's historically been treated as a kind of low-prestige/uninteresting/unscalable business doing agency web stuff, but if you can figure out how to make it scalable and give people beautiful websites, and not make people who value their time wade through slop, there's immense opportunity.</p>
]]></description><pubDate>Sat, 04 Apr 2026 19:31:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47642480</link><dc:creator>weitendorf</dc:creator><comments>https://news.ycombinator.com/item?id=47642480</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47642480</guid></item><item><title><![CDATA[New comment by weitendorf in "The CMS is dead, long live the CMS"]]></title><description><![CDATA[
<p>I built the same thing and then just realized that I had built a marketing funnel for Cloudflare lol. It's why Cloudflare is trying a bunch of different approaches to the same thing: they're the only ones that actually benefit from it, because you can't actually build a business off hosting millions of sites on CF Pages. It's a loss leader for them, to convert you to a paid product if you end up one day getting a lot of traffic.<p>Hosting a static site isn't free, they just don't charge you for it early on</p>
]]></description><pubDate>Sat, 04 Apr 2026 19:23:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47642400</link><dc:creator>weitendorf</dc:creator><comments>https://news.ycombinator.com/item?id=47642400</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47642400</guid></item><item><title><![CDATA[New comment by weitendorf in "Embarrassingly simple self-distillation improves code generation"]]></title><description><![CDATA[
<p>It's more complicated than that. Small specialized LLMs are IMO better framed as "talking tools" than as generalized intelligence. With that in mind, it's clear why something that can e.g. look at an image and describe things about it, or accurately predict weather, and then converse about it, is valuable.<p>There are hardware-based limitations on the size of LLMs you can feasibly train and serve, which imposes a limit on the amount of information you can pack into a single model's weights and on the amount of compute per second you can get out of that model at inference time.<p>My company has been working on this specifically because even now most researchers don't seem to really understand that this is just as much an economics and <i>knowledge</i> problem (cf. Hayek) as it is an "intelligence" problem.<p>It is much more efficient to strategically delegate specialized tasks, or ones that require a lot of tokens but not a lot of intelligence, to models that can be served more cheaply. This is one of the things that Claude Code does very well. It's also the basis for MoE and some similar architectures, with a smarter router model serving as a common base between the experts.</p>
]]></description><pubDate>Sat, 04 Apr 2026 17:46:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=47641386</link><dc:creator>weitendorf</dc:creator><comments>https://news.ycombinator.com/item?id=47641386</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47641386</guid></item><item><title><![CDATA[New comment by weitendorf in "OpenCode – Open source AI coding agent"]]></title><description><![CDATA[
<p>I built a product solving this problem about a year ago, basically a serverless, container-based, NATed VSCode where you can e.g. "run Claude Code" (or this) in your browser on a remote container.<p>There's a reason I basically stopped marketing it: Cursor took off so much back then, and now people are running Claude/Codex locally. First, this is something people only actually start to care about once they've been bitten by it hard enough to remember how much it hurt, and most people haven't gotten there yet (but it will happen more as the models get better).<p>Also, the people who simultaneously care a lot about security and systems work AND are AI enthusiasts AND are generally highly capable are potentially building in the space, but aren't really customers. The people who care a lot about security and systems work aren't generally decision makers or enthusiastic adopters of AI products (only just now are they starting to become so), and the people who are super enthusiastic about AI generally aren't interested in spending a lot of time on security stuff. To the extent they do care about security, they want it to Just Work and let them keep building super fast. The people who are decision makers but less on the security/AI trains need this to happen more, and to hear about the problem from other executives, before they're willing to spend on it.<p>To the extent most people actually care about this, they still want things to Just Work like they do now, and to either keep building super fast or not think about AI at all. It's actually extremely difficult to give granular access to agents, because the entire point is them acting autonomously or keeping you in a flow state.
You either need a threat model that's really compatible with doing so (e.g. open source work, or developer credentials used only for development and kept separate from production/corp/customer data), to spend a lot of time setting things up so that agents can work within your constraints (which also requires a willingness to commit serious time or resources to security, and an understanding of it), or to spend a lot of time approving things and nannying the agent.<p>So right now everybody is just saying, fuck it, I trust Anthropic or Microsoft or OpenAI or Cursor enough to just take my chances with them. And people who care about security are of course appalled at the idea of just giving another company full filesystem access and developer credentials, especially in enterprises where the lack of development velocity and the high-process/overhead culture were actually of load-bearing importance. But really it's just that secure agentic development requires significant upfront investment in changing the way developers work, which nobody is willing to pay for yet, and there are no perfect solutions yet. Dev containers were always a good idea and were never that widely adopted either, btw.<p>It still takes a lot more investment to actually provide good permissions/security for agent development environments, which even the big companies are still working on. And I am still working on it as well. There's just not that much demand for it yet, but I think it's close.</p>
]]></description><pubDate>Sat, 21 Mar 2026 12:56:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47466597</link><dc:creator>weitendorf</dc:creator><comments>https://news.ycombinator.com/item?id=47466597</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47466597</guid></item></channel></rss>