<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: JanSchu</title><link>https://news.ycombinator.com/user?id=JanSchu</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 18 Apr 2026 12:50:35 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=JanSchu" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by JanSchu in "Show HN: I built a social media management tool in 3 weeks with Claude and Codex"]]></title><description><![CDATA[
<p>My personal opinion is that it will be extremely difficult in the future to monetize plain software. Either you need a very strong edge and a unique angle for your distribution, or you have to build a product that cannot be reproduced by agents that can build software.<p>These will be 2 types of products.<p>1. A product that requires tech that is not inside the training data distribution of the models underlying coding agents. This is usually very cutting edge.<p>2. A product that uses data/insights, to which a coding agent has no access, to generate value for a customer.<p>These are the only abstract moats I can think of; the rest will be a race to the bottom.</p>
]]></description><pubDate>Mon, 13 Apr 2026 20:01:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47757107</link><dc:creator>JanSchu</dc:creator><comments>https://news.ycombinator.com/item?id=47757107</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47757107</guid></item><item><title><![CDATA[New comment by JanSchu in "Show HN: I built a social media management tool in 3 weeks with Claude and Codex"]]></title><description><![CDATA[
<p>I'll add it to the implementation list</p>
]]></description><pubDate>Mon, 13 Apr 2026 14:36:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47752638</link><dc:creator>JanSchu</dc:creator><comments>https://news.ycombinator.com/item?id=47752638</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47752638</guid></item><item><title><![CDATA[New comment by JanSchu in "Show HN: I built a social media management tool in 3 weeks with Claude and Codex"]]></title><description><![CDATA[
<p>Each platform has an API that you can use to post. You just have to set up a developer account for each platform</p>
]]></description><pubDate>Mon, 13 Apr 2026 12:32:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=47751062</link><dc:creator>JanSchu</dc:creator><comments>https://news.ycombinator.com/item?id=47751062</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47751062</guid></item><item><title><![CDATA[New comment by JanSchu in "Show HN: I built a social media management tool in 3 weeks with Claude and Codex"]]></title><description><![CDATA[
<p>What I did was break the development into different layers that had to be completed one after another, since the functionalities build on each other. Each layer had independent work streams that ran in parallel, and each work stream was one independent worktree/session in Claude Code.<p>First I triggered all work streams per layer and brought them to a level of completion I was happy with. Then I merged them one after another, challenging the implementation on GitHub with @codex, and rebasing when moving to the next work stream.<p>This is roughly how it looked:<p>Layer 0 - Project Scaffolding<p>Layer 1 — Core Features
Stream A — Content Pipeline
Stream B — Social Platform Providers
Stream C — Media Library
Stream D — Notification System
Stream E — Settings UI<p><pre><code>                        T-0.1 (Scaffolding)
                              │
                        T-0.2 (Core Models + Auth)
                              │
          ┌───────────────────┼───────────────────┬──────────────┐
          │                   │                   │              │
     Stream A            Stream B            Stream C       Stream D
     (Content)           (Providers)         (Media)        (Notifs)
          │                   │                   │              │
     T-1A.1 Composer    T-1B.1 FB/IG/LI    T-1C.1 Library  T-1D.1 Engine
          │              T-1B.2 Others           │              │
     T-1A.2 Calendar         │                   │         Stream E
          │                  │                   │         T-1E.1 Settings UI
     T-1A.3 Publisher ◄──────┘                   │
          │                                      │
          └──────────◄───────────────────────────┘
          (Publisher needs providers + media processing)

</code></pre>
Layer 2 — Collaboration & Engagement
Stream F — Approval & Client Portal
Stream G — Inbox
Stream H — Calendar & Composer Enhancements
Stream I — Client Onboarding<p><pre><code>          Layer 1 complete
                │
    ┌───────────┼───────────┬──────────────┐
    │           │           │              │
 Stream F   Stream G    Stream H       Stream I
 (Approval  (Inbox)     (Calendar+     (Onboarding)
  + Portal)              Composer
    │                    enhance)
 T-2F.1 Approval
    │
 T-2F.2 Portal
</code></pre>
Thus I ran up to 4 agents in parallel, but to be honest this is the maximum level of parallelism my brain was able to handle; I really felt like the bottleneck here.<p>Additionally, your token usage is very high since you have so many agents doing work at the same time, so I very often reached my Claude session token limits and had to wait for the next session to begin (I do have the 5x Max plan)</p>
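The layer/stream scheduling above can be sketched in a few lines of Python. This is a hypothetical miniature, not the actual tooling: `run_stream` stands in for one Claude Code worktree session, and the stream names just mirror the diagram.

```python
# Sketch: streams within a layer run in parallel (capped at 4 agents,
# the practical limit mentioned above); layers run strictly in order,
# since each layer must be fully merged before the next one starts.
from concurrent.futures import ThreadPoolExecutor

LAYERS = [
    ["T-0.1 Scaffolding"],                            # Layer 0: foundation
    ["Content", "Providers", "Media", "Notifs"],      # Layer 1: 4 streams
    ["Approval", "Inbox", "Calendar", "Onboarding"],  # Layer 2: 4 streams
]

def run_stream(name: str) -> str:
    # Stand-in for one independent worktree/session working a stream.
    return f"{name}: done"

def run_layers(layers):
    results = []
    for layer in layers:
        with ThreadPoolExecutor(max_workers=4) as pool:
            results.extend(pool.map(run_stream, layer))
        # barrier: nothing from the next layer starts until here
    return results
```

The key property is the barrier between layers: parallelism only ever happens among streams that have no dependencies on each other.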
]]></description><pubDate>Mon, 13 Apr 2026 12:15:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47750900</link><dc:creator>JanSchu</dc:creator><comments>https://news.ycombinator.com/item?id=47750900</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47750900</guid></item><item><title><![CDATA[New comment by JanSchu in "Show HN: I built a social media management tool in 3 weeks with Claude and Codex"]]></title><description><![CDATA[
<p>yeah, htmx is from 2020; it feels like yesterday</p>
]]></description><pubDate>Mon, 13 Apr 2026 11:56:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47750735</link><dc:creator>JanSchu</dc:creator><comments>https://news.ycombinator.com/item?id=47750735</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47750735</guid></item><item><title><![CDATA[New comment by JanSchu in "Show HN: I built a social media management tool in 3 weeks with Claude and Codex"]]></title><description><![CDATA[
<p>yes</p>
]]></description><pubDate>Mon, 13 Apr 2026 11:54:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47750723</link><dc:creator>JanSchu</dc:creator><comments>https://news.ycombinator.com/item?id=47750723</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47750723</guid></item><item><title><![CDATA[New comment by JanSchu in "Show HN: I built a social media management tool in 3 weeks with Claude and Codex"]]></title><description><![CDATA[
<p>The spec document was also written by Claude (over many iterations) with lots of manual additions. It still took me 4 full days to get the specs to a level I was happy with.<p>One main thing I did was use the deep research feature of Claude to get a good understanding of what other tools are offering (features, integrations, etc.)<p>Then each feature in the spec document got refined with manual suggestions and screenshots I took of other tools.</p>
]]></description><pubDate>Mon, 13 Apr 2026 11:54:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=47750721</link><dc:creator>JanSchu</dc:creator><comments>https://news.ycombinator.com/item?id=47750721</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47750721</guid></item><item><title><![CDATA[New comment by JanSchu in "Show HN: I built a social media management tool in 3 weeks with Claude and Codex"]]></title><description><![CDATA[
<p>I did not include it yet, because you have to pay for the API. They recently changed their pricing model to pay per request only. I'll be looking into it in the next few weeks</p>
]]></description><pubDate>Mon, 13 Apr 2026 11:23:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47750491</link><dc:creator>JanSchu</dc:creator><comments>https://news.ycombinator.com/item?id=47750491</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47750491</guid></item><item><title><![CDATA[New comment by JanSchu in "Show HN: I built a social media management tool in 3 weeks with Claude and Codex"]]></title><description><![CDATA[
<p>I originally have a data science background, so Python is usually my go-to language, and I already have a lot of experience with Django. This helps a lot when reviewing AI code and judging architecture, etc.<p>And for htmx, I simply wanted something lightweight that is not very invasive, to keep things simple and dependencies low.<p>In my head this was a good way to keep complexity low for my AI agents :-)</p>
]]></description><pubDate>Mon, 13 Apr 2026 11:20:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47750464</link><dc:creator>JanSchu</dc:creator><comments>https://news.ycombinator.com/item?id=47750464</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47750464</guid></item><item><title><![CDATA[New comment by JanSchu in "Show HN: I built a social media management tool in 3 weeks with Claude and Codex"]]></title><description><![CDATA[
<p>Postgres is simply a battle-proven technology.</p>
]]></description><pubDate>Mon, 13 Apr 2026 11:15:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47750428</link><dc:creator>JanSchu</dc:creator><comments>https://news.ycombinator.com/item?id=47750428</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47750428</guid></item><item><title><![CDATA[New comment by JanSchu in "Show HN: I built a social media management tool in 3 weeks with Claude and Codex"]]></title><description><![CDATA[
<p>I wanted to test how far AI coding tools could take a production project. Not a prototype. A social media management platform with 12 first-party API integrations, multi-tenant auth, encrypted credential storage, background job processing, approval workflows, and a unified inbox. The scope would normally keep a solo developer busy for the better part of a year. I shipped it in 3 weeks.<p>Before writing any code, I spent time on detailed specs, an architecture doc, and a style guide. All public: <a href="https://github.com/brightbeanxyz/brightbean-studio/tree/main/development_specs" rel="nofollow">https://github.com/brightbeanxyz/brightbean-studio/tree/main...</a><p>I broke the specs into tasks that could run in parallel across multiple agents versus tasks with dependencies that had to merge first. This planning step was the whole game. Without it, the agents produce a mess.<p>I used Opus 4.6 (Claude Code) for planning and building the first pass of backend and UI. Opus holds large context better and makes architectural decisions across files more reliably. Then I used Codex 5.3 to challenge every implementation, surface security issues, and catch bugs. Token spend was roughly even between the two.<p>Where AI coding worked well: Django models, views, serializers, standard CRUD. Provider modules for well-documented APIs like Facebook and LinkedIn. Tailwind layouts and HTMX interactions. Test generation. Cross-file refactoring, where Opus was particularly good at cascading changes across models, views, and templates when I restructured the permission system.<p>Where it fell apart: TikTok's Content Posting API has poor docs and an unusual two-step upload flow. Both tools generated wrong code confidently, over and over. Multi-tenant permission logic produced code that worked for a single workspace but leaked data across tenants in multi-workspace setups. These bugs passed tests, which is what made them dangerous. 
OAuth edge cases like token refresh, revoked permissions, and platform-specific error codes all needed manual work. Happy path was fine, defensive code was not. Background task orchestration (retry logic, rate-limit backoff, error handling) also required writing by hand.<p>One thing I underestimated: Without dedicated UI designs, getting a consistent UX was brutal. All the functionality was there, but screens were unintuitive and some flows weren't reachable through the UI at all. 80% of features worked in 20% of the time. The remaining 80% went to polish and making the experience actually usable.<p>The project is open source under AGPL-3.0. 12 platform integrations, all first-party APIs. Django 5.x + HTMX + Alpine.js + Tailwind CSS 4 + PostgreSQL. No Redis. Docker Compose deploy, 4 containers.<p>Ask me anything about the spec-driven approach, platform API quirks, or how I split work between the two models.</p>
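The multi-tenant leak is worth showing in miniature, because it illustrates why those bugs passed tests. This is a hypothetical sketch (plain Python, made-up post/workspace data, not the project's actual Django models): the buggy filter is correct for every single-workspace test you could write, and only leaks once a user belongs to two workspaces.

```python
# Hypothetical data: alice belongs to two workspaces.
posts = [
    {"id": 1, "workspace": "acme",   "author": "alice"},
    {"id": 2, "workspace": "acme",   "author": "bob"},
    {"id": 3, "workspace": "globex", "author": "alice"},
]

def visible_posts_buggy(user, workspace):
    # Filters by authorship only. Every test with a single workspace
    # passes, but post 3 from "globex" leaks into "acme" views.
    return [p for p in posts if p["author"] == user]

def visible_posts_scoped(user, workspace):
    # Scope by the active workspace first, then apply permissions.
    return [p for p in posts
            if p["workspace"] == workspace and p["author"] == user]
```

In Django terms the fix is the same shape: every queryset starts from the active tenant (e.g. filter on the workspace foreign key) before any permission logic runs.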
]]></description><pubDate>Mon, 13 Apr 2026 09:28:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47749688</link><dc:creator>JanSchu</dc:creator><comments>https://news.ycombinator.com/item?id=47749688</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47749688</guid></item><item><title><![CDATA[Show HN: I built a social media management tool in 3 weeks with Claude and Codex]]></title><description><![CDATA[
<p>Article URL: <a href="https://github.com/brightbeanxyz/brightbean-studio">https://github.com/brightbeanxyz/brightbean-studio</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47749674">https://news.ycombinator.com/item?id=47749674</a></p>
<p>Points: 188</p>
<p># Comments: 131</p>
]]></description><pubDate>Mon, 13 Apr 2026 09:26:56 +0000</pubDate><link>https://github.com/brightbeanxyz/brightbean-studio</link><dc:creator>JanSchu</dc:creator><comments>https://news.ycombinator.com/item?id=47749674</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47749674</guid></item><item><title><![CDATA[New comment by JanSchu in "[dead]"]]></title><description><![CDATA[
<p>Author here. We run an MCP server ourselves (BrightBean, YouTube intelligence API), so the security side of this protocol is something we think about daily. The breach timeline from April through October 2025 is what convinced us to write this up. Happy to go deeper on any of the CVEs or the remediation side.</p>
]]></description><pubDate>Thu, 19 Mar 2026 12:41:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47438378</link><dc:creator>JanSchu</dc:creator><comments>https://news.ycombinator.com/item?id=47438378</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47438378</guid></item><item><title><![CDATA[YouTube's Reimagine Lets Anyone Turn a Short into an AI Video]]></title><description><![CDATA[
<p>Article URL: <a href="https://brightbean.xyz/blog/youtube-reimagine-ai-remix-shorts-veo-gemini/">https://brightbean.xyz/blog/youtube-reimagine-ai-remix-shorts-veo-gemini/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47436981">https://news.ycombinator.com/item?id=47436981</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 19 Mar 2026 09:56:29 +0000</pubDate><link>https://brightbean.xyz/blog/youtube-reimagine-ai-remix-shorts-veo-gemini/</link><dc:creator>JanSchu</dc:creator><comments>https://news.ycombinator.com/item?id=47436981</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47436981</guid></item><item><title><![CDATA[Anthropic's code execution pattern for MCP cuts agent token usage from 150K to 2K]]></title><description><![CDATA[
<p>Article URL: <a href="https://brightbean.xyz/blog/code-execution-mcp-efficient-ai-agents/">https://brightbean.xyz/blog/code-execution-mcp-efficient-ai-agents/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47430925">https://news.ycombinator.com/item?id=47430925</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 18 Mar 2026 20:20:37 +0000</pubDate><link>https://brightbean.xyz/blog/code-execution-mcp-efficient-ai-agents/</link><dc:creator>JanSchu</dc:creator><comments>https://news.ycombinator.com/item?id=47430925</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47430925</guid></item><item><title><![CDATA[Ask HN: What is your CPM Target when working with Influencers?]]></title><description><![CDATA[
<p>We are in the process of rolling out our first influencer campaigns (SaaS finance niche).
However, the prices being asked of us are simply mind-boggling. For a normal in-feed post on Instagram, the influencers we have contacted ask for prices that convert to a CPM between $35 and $75.<p>I have to admit that we do not yet know the sales results, but such high CPMs make my head spin from a performance marketing perspective.<p>Do you have similar experiences? For such high CPMs, the conversion rates would have to be extremely good to justify the spend.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44107910">https://news.ycombinator.com/item?id=44107910</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 27 May 2025 15:30:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=44107910</link><dc:creator>JanSchu</dc:creator><comments>https://news.ycombinator.com/item?id=44107910</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44107910</guid></item><item><title><![CDATA[New comment by JanSchu in "The LLM Meta-Leaderboard averaged across the 28 best benchmarks"]]></title><description><![CDATA[
<p>We’re solidly in a three‑horse race at the top: Gemini 2.5 Pro, OpenAI o‑series, Anthropic Claude 3.7+.<p>The gap between #1 and #2 is slim, so pricing, latency, and policy alignment should weigh more heavily than a couple of benchmark points.<p>Specialists matter: if your stack leans on long‑context RAG, o3‑high may edge out; for multilingual safety‑critical chat, Claude might still be your best pick.</p>
]]></description><pubDate>Mon, 05 May 2025 11:48:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=43894019</link><dc:creator>JanSchu</dc:creator><comments>https://news.ycombinator.com/item?id=43894019</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43894019</guid></item><item><title><![CDATA[New comment by JanSchu in "Cursor raises $900M at $9B valuation"]]></title><description><![CDATA[
<p>$900 million on a $9 billion post‑money means investors are paying ~45× trailing ARR if the FT’s “about $200 million annual run‑rate” number is right. --> <a href="https://www.ft.com/content/a7b34d53-a844-4e69-a55c-b9dee9a97dd2?utm_source=chatgpt.com" rel="nofollow">https://www.ft.com/content/a7b34d53-a844-4e69-a55c-b9dee9a97...</a><p>That multiple only makes sense if you believe two things at once:<p>Cursor keeps compounding like GitHub itself.
Right now the product is an Electron wrapper around VS Code plus a very slick Copilot‑style agent. It’s winning dev‑to‑dev word‑of‑mouth, but most of the heavy IP (the frontier model) still lives at OpenAI. Cursor’s moat has to be distribution + workflow lock‑in, otherwise every IDE extension store is a free market.<p>AAC (AI‑assisted coding) is still early‑days, not a feature.
The bullish view is that we’re going from “autocomplete that writes a function” to “agent that forks a branch, edits five files, and opens a PR.” If that happens, the IDE vendor that owns the agents could take a tax on all software creation—$9B looks cheap in that scenario.<p>Skeptical takes:<p>Switching cost is low. Developers live in tabbed editors; the moment VS Code ships “Copilot Agent” with equivalent quality, the convenience advantage evaporates.<p>Model margins flow upstream. OpenAI (already on the cap table) can keep more of the unit‑economics by bundling a first‑party agent, leaving Cursor to chase seat growth while gross margins compress.<p>FOMO capital cycle. We’re at the part of the hype curve where Tiger 2.0 funds can’t buy equity in OpenAI/Anthropic but still need AI exposure on the balance sheet, so application‑layer plays clear at eye‑watering marks.<p>The part I do find compelling is speed: two years from MIT dorm to ~$200 M ARR is wild. If Cursor can convert that velocity into genuine platform gravity—plugins, team workflows, per‑repo context that doesn’t travel well—then maybe the bet pencils out. Otherwise it’s an expensive option on a feature the incumbents haven’t finished shipping yet.</p>
]]></description><pubDate>Mon, 05 May 2025 11:43:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=43893965</link><dc:creator>JanSchu</dc:creator><comments>https://news.ycombinator.com/item?id=43893965</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43893965</guid></item><item><title><![CDATA[New comment by JanSchu in "Why can't HTML alone do includes?"]]></title><description><![CDATA[
<p>Why can’t I just write <include src="header.html"> and be done with it?<p>We almost could. Chrome shipped a draft of HTML Imports back in 2014. You’d do exactly that, the browser would fetch the fragment, parse it, and make it available for insertion. The idea died for three reasons that still apply today:<p>Execution‑order and performance hazards.
Images, scripts, and styles are fire‑and‑forget: the preload scanner sees a URL, starts the fetch, and the parser keeps streaming. With HTML fragments you need the full subtree before you can finish parsing the parent document (otherwise IDs, custom‑element upgrades, <script defer>, etc. fire in the wrong order). That either stalls the parser—horrible for TTFB—or forces async insertion, which produces layout shifts. Everyone hated both outcomes.<p>Security and isolation.
If an imported fragment can run scripts it becomes an XSS foot‑gun; if it can’t run scripts it breaks a surprising amount of markup (think onerror, custom elements with module scripts, CSP inheritance, etc.). The platform already has an “HTML that can’t run scripts” container: it’s called an iframe. Anything more permissive lands in a swamp of half‑trusted execution.<p>The “circular dependency” tar‑pit.
Templates inherit CSS scopes, custom element registries, and base URLs from the document that instantiates them. Once you let HTML pull in more HTML, those scopes can nest arbitrarily—and can link back to parents. The HTML spec team tried to spec out the edge‑cases and basically threw up their hands. (There’s a famous TAG thread titled “HTML Imports considered harmful” that reads like war diaries.)<p>Meanwhile developers solved the “shared header” problem higher up the stack—SSI, PHP include, SSG partials, React components, you name it—so browser vendors didn’t see a payoff big enough to justify the complexity. The attitude became: “composition is a build‑time concern, not a runtime primitive.”<p>Could it ever come back? Maybe, but the bar is higher now that everyone has a build step. A proposal would need to:<p>Stream (no parser‑blocking)<p>Sandbox (no ambient script execution)<p>Deduplicate (avoid circular fetch hell)<p>Play nicely with CSP, SRI, origin isolation, and the module graph<p>That starts to look a lot like… <iframe src="header.html" loading="eager">, which we already have—just not the ergonomic sugar we wish for.<p>So the short answer is: HTML includes are easy in user‑land but devilishly hard to make safe, fast, and spec‑compliant in the browser itself.</p>
]]></description><pubDate>Mon, 05 May 2025 11:40:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=43893941</link><dc:creator>JanSchu</dc:creator><comments>https://news.ycombinator.com/item?id=43893941</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43893941</guid></item><item><title><![CDATA[New comment by JanSchu in "Semantic unit testing: test code without executing it"]]></title><description><![CDATA[
<p>Interesting experiment. I like that you framed it as “tests that read the docs” rather than “AI will magically find bugs”, because the former is exactly where LLMs shine: cross‑checking natural‑language intent with code.<p>A couple of thoughts after playing with a similar idea in private repos:<p>Token pressure is the real ceiling.
Even moderately sized modules explode past 32k tokens once you inline dependencies and long docstrings. Chunking by call‑graph depth helps, but at some point you need aggressive summarization or cropping, otherwise you burn GPU time on boilerplate.<p>False confidence is worse than no test.
LLMs love to pass your suite when the code and docstring are both wrong in the same way. I mitigated this by flipping the prompt: ask the model to propose three subtle, realistic bugs first, then check the implementation for each. The adversarial stance lowered the “looks good to me” rate.<p>Structured outputs let you fuse with traditional tests.
If the model says passed: false, emit a property‑based test via Hypothesis that tries to hit the reasoning path it complained about. That way a human can reproduce the failure locally without a model in the loop.<p>Security review angle.
LLMs can spot obvious injection risks or unsafe eval calls even before SAST kicks in. Semantic tests that flag any use of exec, subprocess, or bare SQL are surprisingly helpful.<p>CI ergonomics.
Running the suite on pull requests only for files that changed keeps latency and costs sane. We cache model responses keyed by file hash so re-runs are basically free.<p>Overall I would not drop my pytest corpus, but I would keep an async “semantic diff” bot around to yell when a quick refactor drifts away from the docstring. That feels like the sweet spot today.<p>P.S. If you want a local setup, Mistral‑7B‑Instruct via Ollama is plenty smart for doc/code mismatch checks and fits on a MacBook.</p>
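The file-hash cache from the CI point can be sketched like this. Everything here is a hypothetical stand-in (the cache filename, `cached_check`, and the injected `check_semantics` callable are illustrative, not from any specific tool):

```python
import hashlib
import json
from pathlib import Path

CACHE = Path("semantic_cache.json")  # hypothetical cache location

def file_key(source: str) -> str:
    # Key responses by content hash, so unchanged files hit the cache.
    return hashlib.sha256(source.encode()).hexdigest()

def cached_check(source: str, check_semantics) -> dict:
    cache = json.loads(CACHE.read_text()) if CACHE.exists() else {}
    key = file_key(source)
    if key not in cache:
        # Only pay for the model call on a cache miss.
        cache[key] = check_semantics(source)
        CACHE.write_text(json.dumps(cache))
    return cache[key]
```

Because the key is the file content itself, a re-run after a no-op rebase or CI retry never touches the model, while any real edit invalidates exactly the files it changed.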
]]></description><pubDate>Mon, 05 May 2025 11:37:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=43893910</link><dc:creator>JanSchu</dc:creator><comments>https://news.ycombinator.com/item?id=43893910</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43893910</guid></item></channel></rss>