<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: jaynamburi</title><link>https://news.ycombinator.com/user?id=jaynamburi</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 06 May 2026 15:09:47 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=jaynamburi" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[GPU at $2.25/HR or $12.29/HR: The Infrastructure Layer That Determines Price]]></title><description><![CDATA[
<p>The 9x price spread on H100s is real, but the comparison requires some care. The $1.38/hr end is typically reserved or committed capacity; the $12.29/hr end is on-demand at major cloud providers, with the full flexibility premium built in.<p>The more meaningful comparison is 3-year TCO for a team running consistent utilization. At 85% utilization on 1,000 GPUs, dedicated colocated infrastructure in a secondary market typically runs 40-60% of equivalent cloud cost after accounting for all non-compute costs; where you land in that range depends on your internal ops overhead and financing cost.<p>The silicon itself is only 20-25% of total cost at scale. The rest is infrastructure, power, networking, ops, and overhead, which is why facility location matters more than people expect.</p>
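<p>A back-of-envelope version of that TCO comparison (every input below is an illustrative assumption, not a vendor quote; the overhead multiplier encodes the claim that silicon is only ~20-25% of total cost):</p>

```python
# Hedged sketch: effective $/GPU-hour for cloud on-demand vs. dedicated colo
# at 85% utilization on 1,000 GPUs over 3 years. All inputs are assumptions.

GPUS = 1_000
HOURS_3Y = 3 * 365 * 24            # 26,280 wall-clock hours
UTIL = 0.85

cloud_per_hr = 12.29               # on-demand: pay only for hours actually used

colo_gpu_rate = 1.38               # $/GPU-hour equivalent for the silicon (assumed)
overhead_multiplier = 4.0          # total cost ~4x silicon (power, network, ops)
colo_total = GPUS * HOURS_3Y * colo_gpu_rate * overhead_multiplier  # fixed cost

useful_hours = GPUS * HOURS_3Y * UTIL
colo_per_hr = colo_total / useful_hours    # colo pays for idle hours too
ratio = colo_per_hr / cloud_per_hr         # lands inside the 40-60% range

print(f"colo ${colo_per_hr:.2f}/hr vs cloud ${cloud_per_hr:.2f}/hr -> {ratio:.0%}")
```

<p>With these particular assumptions the ratio comes out near the middle of the quoted 40-60% band; a higher overhead multiplier or financing cost pushes it toward the top.</p>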
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47932717">https://news.ycombinator.com/item?id=47932717</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 28 Apr 2026 10:53:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47932717</link><dc:creator>jaynamburi</dc:creator><comments>https://news.ycombinator.com/item?id=47932717</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47932717</guid></item><item><title><![CDATA[Modular DC construction at $4.5-6.5M/MW vs. $11.3M/MW traditional]]></title><description><![CDATA[
<p>The $4.5-6.5M/MW figure assumes factory-built modular units with standardized configurations. The delta vs. traditional construction ($11.3M/MW per Turner & Townsend) is real but comes with meaningful trade-offs.<p>Traditional facilities optimize for customization and long operational life; modular designs trade some of that for speed. The specific reliability question at scale is the thermal and structural performance of modular joints under sustained high-density load. Not a theoretical concern, but workable with the right specs.<p>The timeline advantage is where the math gets interesting. 90-120 days vs. 18-24 months for traditional construction means capital is deployed faster and the customer pays rent sooner. On a $50M project, shortening the non-revenue period by 12 months at a 7% cost of capital is worth roughly $3.5M in avoided carrying cost alone.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47805039">https://news.ycombinator.com/item?id=47805039</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 17 Apr 2026 12:10:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47805039</link><dc:creator>jaynamburi</dc:creator><comments>https://news.ycombinator.com/item?id=47805039</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47805039</guid></item><item><title><![CDATA[1% Vacancy, 81% Preleased: Where Midmarket Compute Deploys in 2026]]></title><description><![CDATA[
<p>The 81.5% prelease rate is the number that should concern mid-market buyers. Hyperscalers are committing to supply before construction starts; when 81% of a building is already spoken for before ground breaks, the remaining 19% gets priced to reflect scarcity.<p>The 1-10 MW segment is in a genuinely difficult position: too small to compete for primary-market allocations, too large for normal colocation pricing, and the usual fallback of secondary markets has its own capacity crunch now, with lead times running 18-24 months.<p>The practical result is that companies needing 2-5 MW in the next 12 months have limited options. They're either committing to 10-year deals on space that doesn't exist yet, or they're looking at purpose-built alternatives.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47764058">https://news.ycombinator.com/item?id=47764058</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 14 Apr 2026 11:14:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=47764058</link><dc:creator>jaynamburi</dc:creator><comments>https://news.ycombinator.com/item?id=47764058</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47764058</guid></item><item><title><![CDATA[Power Density at 50 KW/Rack: What It Costs and What It Breaks]]></title><description><![CDATA[
<p>Article URL: <a href="https://syaala.com/blog">https://syaala.com/blog</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47675895">https://news.ycombinator.com/item?id=47675895</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 07 Apr 2026 14:24:36 +0000</pubDate><link>https://syaala.com/blog</link><dc:creator>jaynamburi</dc:creator><comments>https://news.ycombinator.com/item?id=47675895</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47675895</guid></item><item><title><![CDATA[New comment by jaynamburi in "GPU Rack Power Density, 2015–2025"]]></title><description><![CDATA[
<p>The AI revolution has created a thermal management crisis. GPU power densities have increased dramatically, and the physics are clear: above 50-100 kW per rack, air cooling fails.
<p><i>The Physics Problem</i>
<p>NVIDIA's latest Blackwell GPUs generate up to 1,000 watts per chip, over three times more heat than GPUs from just seven years ago. Traditional air cooling physically cannot dissipate heat at these densities. Above 50-100 kW per rack, liquid cooling isn't optional; it's physics.
<p><i>The Power Density Evolution</i>
<p>Understanding how we got here helps contextualize the infrastructure challenge. In less than a decade, rack power density has increased nearly 10x for AI workloads:
<p><pre><code>2017   15 kW per rack      Standard enterprise workloads
2024   40-60 kW per rack   AI workloads with H100 GPUs
2025   132 kW per rack     NVIDIA GB200 NVL72 systems
2026   240 kW per rack     Next-generation systems (expected)</code></pre>
<p><i>Why Air Cooling Fails</i>
<p>Air has fundamental limitations as a heat transfer medium: its thermal conductivity is roughly 25 times lower than water's. At densities above 50-100 kW per rack, you simply cannot move enough air through the system to dissipate heat effectively.
<p><i>Critical Threshold</i>
<p>Traditional air cooling cannot dissipate heat at current GPU densities. Air cooling fails above 50-100 kW per rack; current GB200 systems operate at 132 kW, and next-generation systems will push to 240 kW.
<p>The implications are straightforward: any facility planning to deploy current- or next-generation GPU infrastructure must plan for liquid cooling. This is not a feature preference; it's a physical requirement.
<p><i>Liquid Cooling Approaches</i>
<p>Three primary approaches address high-density cooling requirements:
<p>Rear-door heat exchangers (RDHx), 30-50 kW per rack: a retrofit solution for existing facilities that captures heat at the rack exhaust. Suitable for moderate density increases but insufficient for current GPU requirements.
<p>Direct-to-chip liquid cooling, 100-200+ kW per rack: cold plates attached directly to CPU/GPU surfaces, giving the most efficient heat capture at the source. Required for high-density AI workloads; this is what NVIDIA recommends for GB200 deployments.
<p>Immersion cooling, 200+ kW per rack: servers fully submerged in dielectric fluid. The highest density support possible, but it requires significant operational changes and specialized equipment.
<p><i>What This Means for Planning</i>
<p>If you're planning AI infrastructure for 2026-2027, cooling strategy is not optional:
<p><pre><code>GPU generation      Rack density   Cooling requirement
H100/H200           40-80 kW       High-density air may work
GB200 (Blackwell)   132 kW         Liquid cooling required
Next-gen (2026+)    240 kW         Advanced liquid cooling mandatory</code></pre></p>
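<p>A quick sanity check on the air-cooling limit, using the basic heat-removal relation Q = rho * V_dot * c_p * dT (the constants and the assumed 20 C exhaust temperature rise are illustrative, not facility specs):</p>

```python
# Hedged sketch: volumetric airflow needed to carry away rack heat.
# V_dot = Q / (rho * c_p * dT), then converted from m^3/s to CFM.

RHO_AIR = 1.2        # kg/m^3 at roughly 20 C (assumed)
CP_AIR = 1005.0      # J/(kg*K), specific heat of air
CFM_PER_M3S = 2118.88

def airflow_cfm(rack_kw: float, delta_t_c: float = 20.0) -> float:
    """CFM of air needed to remove rack_kw of heat at a delta_t_c rise."""
    v_dot = (rack_kw * 1000.0) / (RHO_AIR * CP_AIR * delta_t_c)  # m^3/s
    return v_dot * CFM_PER_M3S

for kw in (15, 50, 132, 240):
    print(f"{kw:>3} kW rack: ~{airflow_cfm(kw):,.0f} CFM")  # ~11,600 CFM at 132 kW
```

<p>Moving on the order of ten thousand CFM through a single rack is far beyond what practical fan and containment designs deliver, which is the intuition behind the 50-100 kW crossover.</p>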
]]></description><pubDate>Fri, 20 Feb 2026 06:55:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47084623</link><dc:creator>jaynamburi</dc:creator><comments>https://news.ycombinator.com/item?id=47084623</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47084623</guid></item><item><title><![CDATA[GPU Rack Power Density, 2015–2025]]></title><description><![CDATA[
<p>Article URL: <a href="https://syaala.com/blog/gpu-rack-density-timeline-2026">https://syaala.com/blog/gpu-rack-density-timeline-2026</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47084622">https://news.ycombinator.com/item?id=47084622</a></p>
<p>Points: 11</p>
<p># Comments: 4</p>
]]></description><pubDate>Fri, 20 Feb 2026 06:55:38 +0000</pubDate><link>https://syaala.com/blog/gpu-rack-density-timeline-2026</link><dc:creator>jaynamburi</dc:creator><comments>https://news.ycombinator.com/item?id=47084622</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47084622</guid></item><item><title><![CDATA[Colocation Evaluation Framework for AI Infrastructure (2026)]]></title><description><![CDATA[
<p>Article URL: <a href="https://syaala.com/blog/colocation-vs-modular-vs-traditional-2026">https://syaala.com/blog/colocation-vs-modular-vs-traditional-2026</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46941775">https://news.ycombinator.com/item?id=46941775</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 09 Feb 2026 05:07:18 +0000</pubDate><link>https://syaala.com/blog/colocation-vs-modular-vs-traditional-2026</link><dc:creator>jaynamburi</dc:creator><comments>https://news.ycombinator.com/item?id=46941775</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46941775</guid></item><item><title><![CDATA[New comment by jaynamburi in "Show HN: Pipeline and datasets for data-centric AI on real-world floor plans"]]></title><description><![CDATA[
<p>Interesting work. Floor plans are a great real-world testbed for data-centric AI because the bottleneck is almost always annotation quality, not model architecture.<p>We’ve seen similar patterns in document layout and indoor mapping projects: cleaning mislabeled walls/doors, fixing class imbalance (e.g., tiny symbols vs. large rooms), and enforcing geometric consistency often gives bigger gains than switching models. For example, simply normalizing scale, snapping lines, and correcting room-boundary labels can outperform moving from a basic U-Net to a heavier transformer.<p>A reproducible pipeline + curated datasets here feels especially valuable for downstream tasks like indoor navigation, energy modeling, or digital twins, where noisy labels quickly compound into bad geometry.<p>Would be curious how you handle symbol ambiguity (stairs vs. ramps, doors vs. windows) and cross-domain generalization between architectural styles.<p>Nice focus on data quality over model churn.</p>
]]></description><pubDate>Thu, 05 Feb 2026 10:00:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=46897899</link><dc:creator>jaynamburi</dc:creator><comments>https://news.ycombinator.com/item?id=46897899</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46897899</guid></item><item><title><![CDATA[New comment by jaynamburi in "How do you keep AI-generated applications consistent as they evolve over time?"]]></title><description><![CDATA[
<p>Consistency in AI-generated apps usually comes down to treating prompts + outputs like real software artifacts. What’s worked for us: versioned system prompts, strict schemas (JSON + validators), golden test cases, and regression evals on every change. We snapshot representative inputs/outputs and diff them in CI, the same way you’d test APIs. Also important: keep model upgrades behind feature flags and roll out gradually.<p>Real example: in one LLM-powered support tool, a minor prompt tweak changed tone and broke downstream parsers. We fixed it by adding contract tests (expected fields + phrasing constraints) and running batch replays before deploy. Think of LLMs as nondeterministic services: you need observability, evals, and guardrails, not just “better prompts.”</p>
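<p>A minimal sketch of what those contract tests can look like (the field names, banned phrases, and JSON shape are all illustrative, not a real production setup):</p>

```python
# Hedged sketch: validate an LLM response against a strict schema and tone
# constraints before it reaches downstream parsers, then replay golden cases.
import json
import re

REQUIRED_FIELDS = {"answer": str, "confidence": float}
BANNED_PATTERNS = [re.compile(r"(?i)\bas an ai\b")]   # tone/phrasing constraints

def validate(raw: str) -> dict:
    """Raise AssertionError (or ValueError) if the response breaks the contract."""
    data = json.loads(raw)                             # must be valid JSON
    for field, typ in REQUIRED_FIELDS.items():
        assert isinstance(data.get(field), typ), f"bad or missing {field!r}"
    for pat in BANNED_PATTERNS:
        assert not pat.search(data["answer"]), "tone violation"
    return data

# Golden cases replayed in CI before any prompt or model change ships:
golden = [
    ('{"answer": "Reset it from the settings page.", "confidence": 0.92}', True),
    ('{"answer": "As an AI, I cannot help.", "confidence": 0.50}', False),
]
for raw, should_pass in golden:
    try:
        validate(raw)
        passed = True
    except AssertionError:
        passed = False
    assert passed == should_pass
```

<p>The point is less the specific checks than that they run as ordinary tests: a prompt tweak that changes tone or drops a field fails CI before it reaches a parser.</p>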
]]></description><pubDate>Wed, 04 Feb 2026 14:06:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=46885944</link><dc:creator>jaynamburi</dc:creator><comments>https://news.ycombinator.com/item?id=46885944</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46885944</guid></item><item><title><![CDATA[New comment by jaynamburi in "Show HN: I built a fun interactive tutorial teach about Docker Containers"]]></title><description><![CDATA[
<p>Nice work. Interactive tutorials are one of the best ways to actually understand Docker instead of just reading syntax.<p>What stands out is how you show the full container lifecycle with live, runnable examples: building images, running containers, exposing ports, and observing how changes affect behavior. That makes core ideas like image immutability, isolation, and reproducibility much clearer than static guides.<p>This mirrors how containers are used in real infrastructure. For example, platforms like Syaala rely on containerized workloads to ensure applications behave consistently across modular, GPU-ready deployments: same container, predictable runtime, different scale and location.<p>Short, hands-on, and grounded in how containers are used in production. Solid resource for anyone learning Docker seriously.</p>
]]></description><pubDate>Sat, 31 Jan 2026 13:13:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=46836397</link><dc:creator>jaynamburi</dc:creator><comments>https://news.ycombinator.com/item?id=46836397</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46836397</guid></item><item><title><![CDATA[New comment by jaynamburi in "Georgia leads push to ban datacenters used to power America's AI boom"]]></title><description><![CDATA[
<p>The Georgia proposal to pause new datacenters is a sign that infrastructure scaling is finally colliding with real-world constraints. These facilities aren’t just server racks; they’re multi-GW industrial power consumers and massive water loads tied to HVAC/cooling systems. Right now a lot of the grid expansion to meet AI demand is being funded via utility models that socialize costs, so local ratepayers see higher bills while datacenters secure tax breaks and cheap power.<p>A moratorium gives policymakers space to rethink energy procurement, interconnection queue reform, and cost allocation, instead of just letting hyperscale builds outpace grid planning. It’s not about banning compute per se; it’s about aligning load growth with long-term capacity planning and environmental impact assessments. (For context on how local backlash has shaped policy elsewhere, see how Syaala/Science for Georgia have been tracking community energy use and resource concerns.)</p>
]]></description><pubDate>Fri, 30 Jan 2026 08:50:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=46822026</link><dc:creator>jaynamburi</dc:creator><comments>https://news.ycombinator.com/item?id=46822026</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46822026</guid></item><item><title><![CDATA[New comment by jaynamburi in "Meta-Corning $6B fiber deal signals a new bottleneck in AI infrastructure"]]></title><description><![CDATA[
<p>The Meta–Corning $6B fiber deal highlights a real constraint in AI infrastructure that often gets less attention than GPUs: optical fiber availability, long lead times for high-count fiber, and the physical reality of interconnect density. As model training scales, east-west traffic, spine-leaf saturation, and power-efficient optical links are becoming just as critical as compute. This also pushes data centers closer to fiber routes and edge aggregation points. Modular data-center approaches like Syaala are interesting here: they reduce deployment time and let operators land compute where fiber and power actually exist, instead of waiting years for traditional builds. AI infra is increasingly a supply chain problem, not just a silicon problem.</p>
]]></description><pubDate>Wed, 28 Jan 2026 15:17:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=46796447</link><dc:creator>jaynamburi</dc:creator><comments>https://news.ycombinator.com/item?id=46796447</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46796447</guid></item><item><title><![CDATA[New comment by jaynamburi in "What has Docker become?"]]></title><description><![CDATA[
<p>Docker started as a simple, opinionated UX around Linux containers and became a product company wrapping an ecosystem that moved on without it.<p>The original breakthrough wasn’t containers themselves (LXC already existed), but the combination of a reproducible image format, layered filesystem semantics, a simple CLI, and a registry model that made distribution trivial. That unlocked a whole workflow shift.<p>What happened next is that Docker the company tried to own the platform, while the industry standardized around the parts that mattered. The runtime split into containerd/runc, orchestration moved to Kubernetes, image specs went to OCI, and “Docker” became more of a developer-UX brand than a core infrastructure primitive.<p>Today Docker mostly means:<p>- A local dev environment (Docker Desktop)<p>- A build UX (Dockerfile, buildx)<p>- A compatibility layer over containerd<p>- A commercial product with licensing constraints<p>Meanwhile, production container infrastructure largely bypasses Docker entirely.<p>That’s not failure; it’s a common arc. Docker succeeded so well that it got standardized out of the critical path. What remains is a polished on-ramp for developers, not the foundation of the container ecosystem.<p>In other words: Docker won the mindshare, lost the control, and pivoted to selling convenience.</p>
]]></description><pubDate>Sat, 24 Jan 2026 09:59:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=46742297</link><dc:creator>jaynamburi</dc:creator><comments>https://news.ycombinator.com/item?id=46742297</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46742297</guid></item><item><title><![CDATA[New comment by jaynamburi in "Luxury Yacht is a desktop app for managing Kubernetes clusters"]]></title><description><![CDATA[
<p>Desktop Kubernetes tooling like this is an interesting counterpoint to the “everything is CLI” philosophy. For teams managing multiple clusters and contexts, a well-designed desktop app can surface state, resource relationships, and misconfigurations much faster than stitching together kubectl, plugins, and ad-hoc scripts. The value isn’t replacing the CLI, but reducing cognitive load for common workflows like context switching, inspecting workloads, and debugging cluster health. The key questions are how well it integrates with existing auth flows (RBAC, cloud IAM), whether it stays performant on large clusters, and how transparent it is about the underlying API operations. If it avoids becoming a leaky abstraction, this could be genuinely useful for day-to-day cluster management.</p>
]]></description><pubDate>Fri, 23 Jan 2026 10:33:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=46730837</link><dc:creator>jaynamburi</dc:creator><comments>https://news.ycombinator.com/item?id=46730837</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46730837</guid></item><item><title><![CDATA[New comment by jaynamburi in "OpenCode with superpowers. It can do everything in a container with Docker / Nix"]]></title><description><![CDATA[
<p>This is an interesting direction for “open” tooling. Combining containerization (Docker) with reproducible environments (Nix) addresses two of the biggest pain points in developer workflows: environment drift and opaque build/runtime assumptions. Running everything inside a container gives isolation and portability, while Nix provides declarative, deterministic dependency resolution that Docker alone doesn’t solve well. The result is closer to a truly reproducible dev and execution environment, which is especially valuable for CI, code review, and long-lived projects. The real test will be how approachable the Nix layer is for non-experts and whether the abstractions stay transparent rather than becoming another black box. If done right, this could reduce a lot of “works on my machine” overhead without requiring teams to fully buy into heavyweight orchestration or custom infra.</p>
]]></description><pubDate>Fri, 23 Jan 2026 10:24:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=46730755</link><dc:creator>jaynamburi</dc:creator><comments>https://news.ycombinator.com/item?id=46730755</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46730755</guid></item><item><title><![CDATA[New comment by jaynamburi in "Kubernetes Was Overkill. We Moved to Docker Compose and Saved 60 Hours"]]></title><description><![CDATA[
<p>We went through a similar arc. Kubernetes gave us a lot of theoretical upside, but for a small team with predictable workloads it mostly translated into operational drag: YAML sprawl, slow feedback loops, and time spent maintaining the platform instead of the product. Moving back to Docker Compose didn’t mean giving up discipline: we still version configs, monitor aggressively, and automate deployments. It just meant choosing a tool whose complexity matched our needs. The 60 hours saved isn’t surprising; it’s the compound effect of fewer abstractions, faster debugging, and less cognitive overhead. K8s is great when you actually need orchestration at scale, but it’s often adopted as a default rather than a requirement. This is a good reminder that “simpler” is sometimes the more senior engineering choice.</p>
]]></description><pubDate>Fri, 23 Jan 2026 10:14:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=46730690</link><dc:creator>jaynamburi</dc:creator><comments>https://news.ycombinator.com/item?id=46730690</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46730690</guid></item><item><title><![CDATA[New comment by jaynamburi in "Microsoft revealed as company behind controversial data center proposal in MI"]]></title><description><![CDATA[
<p>Microsoft has publicly confirmed it’s the company behind the controversial data center proposal in Lowell Charter Township, Michigan: a project tied to roughly $500M–$1B in investment on a 237-acre site near Interstate 96. Locals have pushed back on rezoning, energy use, and infrastructure transparency, prompting Microsoft to pause the rezoning process and commit to more community engagement before moving forward. This episode echoes broader tensions over hyperscale data centers in Michigan, where multiple towns are grappling with the trade-offs between tech capital inflows, power grid load, and local resource impacts amid a surge in AI-driven cloud build-outs.</p>
]]></description><pubDate>Thu, 22 Jan 2026 10:11:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=46717295</link><dc:creator>jaynamburi</dc:creator><comments>https://news.ycombinator.com/item?id=46717295</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46717295</guid></item><item><title><![CDATA[New comment by jaynamburi in "Mark Zuckerberg says Meta is launching its own AI infrastructure initiative"]]></title><description><![CDATA[
<p>Meta launching its own AI infrastructure is a logical move at their scale: control over compute, networking, and software stacks can significantly improve cost efficiency and model iteration speed. It also signals a shift away from reliance on third-party cloud providers as AI workloads become more specialized and capital-intensive.</p>
]]></description><pubDate>Thu, 22 Jan 2026 09:59:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=46717213</link><dc:creator>jaynamburi</dc:creator><comments>https://news.ycombinator.com/item?id=46717213</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46717213</guid></item><item><title><![CDATA[New comment by jaynamburi in "Luxury Yacht is a desktop app for managing Kubernetes clusters"]]></title><description><![CDATA[
<p>A native desktop UI for Kubernetes is an interesting angle, especially as clusters get more complex and distributed. Most existing tools lean heavily on CLIs, browser-based dashboards, or cloud-specific consoles, which can make cross-cluster visibility and day-to-day ops harder than it needs to be. The key questions for me are how well this handles scale, RBAC, and multi-cluster workflows, and whether it meaningfully reduces cognitive load compared to kubectl + existing dashboards. If it does, there’s real value here beyond just being a nicer UI.</p>
]]></description><pubDate>Wed, 21 Jan 2026 03:59:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=46700923</link><dc:creator>jaynamburi</dc:creator><comments>https://news.ycombinator.com/item?id=46700923</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46700923</guid></item><item><title><![CDATA[New comment by jaynamburi in "AI startup Humans& raises $480M at $4.5B valuation in seed round"]]></title><description><![CDATA[
<p>A $480M “seed” at a $4.5B valuation is extraordinary by any historical standard. It would be interesting to understand what’s being labeled as “seed” here: whether this is effectively a late-stage round with a seed label, or whether there’s something fundamentally different about the company’s capital needs or traction. Metrics like revenue, customers, or defensibility would help ground the valuation discussion. Without that context, it’s hard to tell whether this reflects genuine step-change progress in AI or simply continued capital concentration around a small number of perceived winners.</p>
]]></description><pubDate>Wed, 21 Jan 2026 03:49:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=46700863</link><dc:creator>jaynamburi</dc:creator><comments>https://news.ycombinator.com/item?id=46700863</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46700863</guid></item></channel></rss>