<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: datagobes</title><link>https://news.ycombinator.com/user?id=datagobes</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 15 May 2026 18:12:13 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=datagobes" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by datagobes in "LLMs work best when the user defines their acceptance criteria first"]]></title><description><![CDATA[
<p>Same pattern in data engineering generally. LLMs default to the obvious row-by-row or download-then-insert approach and you have to steer them toward the efficient path (COPY, bulk loaders, server-side imports). Once you name the right primitive, they execute it correctly, permissions and all, as you found.</p><p>The deeper issue is that "efficient ingest" depends heavily on context that's implicit in your setup: file sizes, partitioning, schema evolution expectations, downstream consumers. A Lambda doing direct S3-to-Postgres import is fine for small/occasional files, but if you're dealing with high-volume event-driven ingestion you'll hit connection pool pressure fast on RDS. At that point the conversation shifts to something like a queue buffer or moving toward a proper staging layer (S3 → Redshift/Snowflake/Databricks with native COPY or autoloader). The LLM won't surface that tradeoff unless you explicitly bring it up. It optimizes for the stated task, not for the unstated architectural constraints.</p>
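<p>To make the row-by-row vs. COPY distinction concrete, here's a minimal sketch with psycopg2. The table name, columns, and data are hypothetical; the actual COPY call is commented out since it needs a live Postgres connection, but the point is the single round trip:</p>

```python
# Bulk ingest via COPY instead of per-row INSERTs (psycopg2 sketch).
# Table "events" and its columns are hypothetical.
import csv
import io

rows = [(1, "signup"), (2, "click")]

# Slow path an LLM tends to default to: one network round trip per row.
# for row in rows:
#     cur.execute("INSERT INTO events (id, action) VALUES (%s, %s)", row)

# Fast path: serialize everything into one buffer and stream it via COPY.
buf = io.StringIO()
csv.writer(buf).writerows(rows)
buf.seek(0)
copy_sql = "COPY events (id, action) FROM STDIN WITH CSV"
# cur.copy_expert(copy_sql, buf)  # single round trip for the whole batch
```

<p>Same shape for the Lambda case: pull the S3 object, hand the stream to copy_expert, never materialize per-row statements.</p>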
]]></description><pubDate>Sat, 07 Mar 2026 10:32:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47286326</link><dc:creator>datagobes</dc:creator><comments>https://news.ycombinator.com/item?id=47286326</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47286326</guid></item><item><title><![CDATA[New comment by datagobes in "Show HN: Elia – Governed hybrid architecture (LLM is capability, not authority)"]]></title><description><![CDATA[
<p>I like that you’ve made the governance layer explicit instead of burying it in prompts. This could actually solve some of the "black-box" criticism I have to deal with every day. How hard would it be to plug in a domain-specific rule engine here for, say, medical or finance workflows?</p>
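<p>By "domain-specific rule engine" I mean something like this toy sketch: plain predicates evaluated over an LLM-proposed action before it's executed. All names (Rule, check_action) and the finance rules are made up for illustration, not a claim about Elia's internals:</p>

```python
# Hypothetical pluggable rule layer: deterministic checks gate the LLM's
# proposed actions, so governance decisions are auditable, not prompted.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    predicate: Callable[[dict], bool]  # True means the action is allowed

# Illustrative finance-domain rules.
rules = [
    Rule("no_trades_over_limit", lambda a: a.get("amount", 0) <= 10_000),
    Rule("approved_instruments_only", lambda a: a.get("symbol") in {"AAPL", "MSFT"}),
]

def check_action(action: dict) -> list[str]:
    """Return the names of rules the proposed action violates (empty = allowed)."""
    return [r.name for r in rules if not r.predicate(action)]
```

<p>The question is whether your governance layer exposes a seam where a table of rules like this could be swapped in per domain.</p>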
]]></description><pubDate>Sat, 07 Mar 2026 10:18:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47286258</link><dc:creator>datagobes</dc:creator><comments>https://news.ycombinator.com/item?id=47286258</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47286258</guid></item><item><title><![CDATA[New comment by datagobes in "Show HN: GoGogot – A lightweight AI agent written in Go using MiniMax 2.5"]]></title><description><![CDATA[
<p>Love the “small agent, explicit tools” approach. Most agent frameworks feel like Rails in 2008: great DX until you have to debug a production edge case. Curious how you’re handling JSON/tool-call validation with MiniMax 2.5; did you roll your own schema layer or just retry with a system prompt nudge?</p>
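<p>For context, the "roll your own schema layer" option I have in mind is roughly this (Python for brevity even though GoGogot is Go; call_model and the schema shape are hypothetical): shape-check the model's tool-call JSON and feed the validation error back on retry.</p>

```python
# Validate an LLM tool call against a declared shape, retrying with the
# concrete error on failure. Schema format and call_model are assumptions.
import json

TOOL_SCHEMA = {"name": str, "arguments": dict}  # required keys and their types

def validate_tool_call(raw: str):
    """Parse and shape-check a tool call; return (ok, parsed_or_error)."""
    try:
        call = json.loads(raw)
    except json.JSONDecodeError as e:
        return False, f"invalid JSON: {e}"
    for key, typ in TOOL_SCHEMA.items():
        if not isinstance(call.get(key), typ):
            return False, f"field {key!r} missing or not {typ.__name__}"
    return True, call

def run_with_retry(prompt: str, call_model, max_tries: int = 3):
    """Ask the model, re-prompting with the validation error on bad output."""
    for _ in range(max_tries):
        ok, result = validate_tool_call(call_model(prompt))
        if ok:
            return result
        prompt += f"\n(Your last tool call was rejected: {result}. Emit valid JSON.)"
    raise ValueError("model never produced a valid tool call")
```

<p>The nice property is the retry prompt carries the specific failure rather than a generic nudge, which in my experience converges in one retry most of the time.</p>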
]]></description><pubDate>Sat, 07 Mar 2026 10:12:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47286230</link><dc:creator>datagobes</dc:creator><comments>https://news.ycombinator.com/item?id=47286230</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47286230</guid></item></channel></rss>