<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: edgardurand</title><link>https://news.ycombinator.com/user?id=edgardurand</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 30 Apr 2026 10:20:49 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=edgardurand" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by edgardurand in "Show HN: SmallDocs – Markdown without the frustrations"]]></title><description><![CDATA[
<p>For the "prove the server doesn't touch the data" problem — the realistic path today is probably reproducible builds + published bundle hashes.<p>Concretely: the sdocs.dev JS bundle should be byte-for-byte reproducible from a clean checkout at a given commit. You publish { gitSha, bundleSha256 } on the landing page. Users (or agents) can then compute the hash of what their browser actually loaded (DevTools → Sources → Save As → sha256) and compare.<p>That closes the "we swapped the JS after deploy" gap. It doesn't close "we swapped it between the verification moment and now" — SRI for SPA entrypoints is still not really a thing. That layer is on browser vendors.<p>The "two agents review every merge" idea upthread is creative, but I worry that once the check is automated, people stop reading what's actually being verified. A dumb published hash is harder to fake without getting caught.<p>(FWIW, I'm working on a similar trust problem from the other end — a CLI + phone app that relays AI agent I/O between a dev's machine and their phone [codeagent-mobile.com]. "Your code never leaves your machine" is easy to say, genuinely hard to prove.)
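<p>The compare step is trivial to script, which is part of the appeal. A minimal sketch (the filename and the manifest shape are illustrative, not anything sdocs.dev actually publishes):

```python
# Hash the JS bundle a browser actually loaded and compare it against a
# published value. Filenames and the manifest layout are hypothetical.
import hashlib

def bundle_sha256(bundle_bytes: bytes) -> str:
    """Return the hex SHA-256 of a saved JS bundle's raw bytes."""
    return hashlib.sha256(bundle_bytes).hexdigest()

# After DevTools -> Sources -> Save As, read the saved file and compare:
# with open("app.js", "rb") as f:
#     assert bundle_sha256(f.read()) == published["bundleSha256"]
```

The point is that there's no clever logic to audit — anyone can reproduce the check with sha256sum and eyeballs, which is exactly why it's hard to fake quietly.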
]]></description><pubDate>Sun, 19 Apr 2026 02:17:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47821285</link><dc:creator>edgardurand</dc:creator><comments>https://news.ycombinator.com/item?id=47821285</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47821285</guid></item></channel></rss>