<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: e7h4nz</title><link>https://news.ycombinator.com/user?id=e7h4nz</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 22 Apr 2026 08:44:32 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=e7h4nz" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by e7h4nz in "What if database branching was easy?"]]></title><description><![CDATA[
<p>We had a similar journey with Neon's branching. Initially it was a huge win for our CI workflows — spinning up an isolated, production-shaped database per PR made migration testing and integration checks dramatically more realistic than seed fixtures ever were.<p>That said, we've since pulled back from branching production schemas, and the reason is data masking. In principle you can define masking rules for sensitive columns, but in practice it's very hard to build a process that guarantees every new column, table, or JSON field added by any engineer is covered before it ever touches a branch. The rules drift, reviews miss things, and nothing in the workflow hard-fails when a new sensitive field slips through.<p>Most of the time that's fine. But "most of the time" isn't the bar for customer data — a single oversight leaking PII into a developer environment is enough to do real damage to trust, and you can't un-leak it. Until masking can be enforced by construction rather than by convention, we'd rather pay the cost of synthetic data than accept that risk.</p>
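For what it's worth, the "enforced by construction" idea can be approximated today with a deny-by-default CI gate: diff the live schema against the masking config and hard-fail on any column without an explicit decision. A minimal TypeScript sketch (the shapes and names are invented for illustration; in practice the columns would come from information_schema and the rules from your masking config):

```typescript
interface Column { table: string; name: string }

// Deny-by-default: every column must appear in the masking config,
// even if its entry is an explicit "not sensitive" marker.
function uncoveredColumns(schema: Column[], masked: Set<string>): Column[] {
  return schema.filter((c) => !masked.has(`${c.table}.${c.name}`));
}

const schemaCols: Column[] = [
  { table: "users", name: "email" },
  { table: "users", name: "signup_date" },
];
const maskingRules = new Set(["users.email"]);

// In CI, a non-empty list here would fail the build before any branch
// is created, so a forgotten column blocks the pipeline instead of leaking.
const missing = uncoveredColumns(schemaCols, maskingRules);
console.log(missing.map((c) => `${c.table}.${c.name}`));
```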
]]></description><pubDate>Mon, 20 Apr 2026 13:06:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=47833744</link><dc:creator>e7h4nz</dc:creator><comments>https://news.ycombinator.com/item?id=47833744</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47833744</guid></item><item><title><![CDATA[New comment by e7h4nz in "Peter Steinberger – WhatsApp CLI: sync, search, send"]]></title><description><![CDATA[
<p>If AI agents can use WhatsApp proficiently, I would assume that two-thirds of the contacts chatting with me are actually just bots.</p>
]]></description><pubDate>Wed, 15 Apr 2026 07:49:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47775917</link><dc:creator>e7h4nz</dc:creator><comments>https://news.ycombinator.com/item?id=47775917</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47775917</guid></item><item><title><![CDATA[New comment by e7h4nz in "Agents of Chaos"]]></title><description><![CDATA[
<p>In this problem domain, I believe humanity is still at a very early stage. What we can do is treat the agent and its operating environment as a "black box" and audit all inbound and outbound network traffic.<p>This approach is similar to DLP (data loss prevention) strategies in enterprise security. Although we cannot guarantee that every single network request is safe, we can probabilistically improve safety by adjusting network defense rules and conducting post-event audits on traffic flow.</p>
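To make the "adjust rules + post-event audit" loop concrete, here is a hypothetical egress-policy sketch in TypeScript (the hostnames and rule shapes are invented for illustration). The key design choice is default-flag rather than default-allow, so unmatched traffic is at least recorded for later audit:

```typescript
type Verdict = "allow" | "deny" | "flag";

interface EgressRule { hostPattern: RegExp; verdict: Verdict }

// Illustrative rule set; a real deployment would load these from config
// and refine them as audits turn up surprises.
const egressRules: EgressRule[] = [
  { hostPattern: /^api\.internal\.example$/, verdict: "allow" },
  { hostPattern: /(^|\.)pastebin\.com$/, verdict: "deny" },
];

// First matching rule wins; anything unmatched is flagged for
// post-event audit instead of being silently allowed.
function judge(host: string): Verdict {
  for (const rule of egressRules) {
    if (rule.hostPattern.test(host)) return rule.verdict;
  }
  return "flag";
}
```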
]]></description><pubDate>Tue, 31 Mar 2026 03:02:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47582276</link><dc:creator>e7h4nz</dc:creator><comments>https://news.ycombinator.com/item?id=47582276</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47582276</guid></item><item><title><![CDATA[New comment by e7h4nz in "Bitwarden integrates with OneCLI agent vault"]]></title><description><![CDATA[
<p>Did you actually read this article or try to understand what OneCLI does?</p>
]]></description><pubDate>Mon, 30 Mar 2026 16:21:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=47576284</link><dc:creator>e7h4nz</dc:creator><comments>https://news.ycombinator.com/item?id=47576284</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47576284</guid></item><item><title><![CDATA[New comment by e7h4nz in "Spring Boot Done Right: Lessons from a 400-Module Codebase"]]></title><description><![CDATA[
<p>We removed microservices and RPC and keep all our TypeScript code in a single monorepo. Avoiding `any` and using ts-rest automatically keeps types synchronized between the backend and frontend applications.<p>This has made my life much easier.</p>
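The type-sync idea doesn't depend on ts-rest specifically. A minimal sketch of the mechanism (this is not the ts-rest API, just an illustration): a single contract type lives in a shared package of the monorepo, and both the server handler and the client call site are checked against it, so a contract change breaks the build on whichever side wasn't updated.

```typescript
// Shared package: the single source of truth for the API shape.
type UserContract = {
  getUser: { params: { id: string }; response: { id: string; name: string } };
};

// Server side: each handler's signature must match the contract.
const handlers: {
  [K in keyof UserContract]: (
    p: UserContract[K]["params"],
  ) => UserContract[K]["response"];
} = {
  getUser: ({ id }) => ({ id, name: "Ada" }),
};

// Client side: the call site gets the response type for free.
function call<K extends keyof UserContract>(
  key: K,
  params: UserContract[K]["params"],
): UserContract[K]["response"] {
  return handlers[key](params); // stand-in for the HTTP round trip
}

const user = call("getUser", { id: "42" });
console.log(user.name); // `user` is fully typed end to end
```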
]]></description><pubDate>Mon, 30 Mar 2026 15:23:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47575485</link><dc:creator>e7h4nz</dc:creator><comments>https://news.ycombinator.com/item?id=47575485</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47575485</guid></item><item><title><![CDATA[New comment by e7h4nz in "Spring Boot Done Right: Lessons from a 400-Module Codebase"]]></title><description><![CDATA[
<p>I worked on a core Spring Boot project for five or six years at a very large enterprise. In my opinion, the most dangerous thing about this framework is that it makes its core users feel far too self-assured.<p>When looking at problems, your mind becomes consumed with how to force everything into design patterns—like architectural separation, DI, or interface / implementation split. This causes developers to lose sight of the actual essence of the problem because they are obsessed with conforming to the framework.<p>Because the ecosystem and toolchain surrounding Spring Boot and Java are so mature and well-supported, it is very easy to find community tools that make you feel like you are doing things the "right way."<p>I only realized these issues after I left Spring Boot and Java development behind. Now, I much prefer using TypeScript or Python to write code (for example, web servers).<p>I also prefer using various SaaS solutions to handle authentication and user registration rather than rebuilding it all myself with Spring Boot Security. I honestly never want to go back to the days of writing Java again.</p>
]]></description><pubDate>Mon, 30 Mar 2026 14:25:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47574789</link><dc:creator>e7h4nz</dc:creator><comments>https://news.ycombinator.com/item?id=47574789</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47574789</guid></item><item><title><![CDATA[New comment by e7h4nz in "Hardware Image Compression"]]></title><description><![CDATA[
<p>Agreed.<p>We hit a weird case on the Adreno 530: bizarre GPU instruction-set issues with the compute shader compressor that only manifested on Adreno 53x. We ended up having to add a device-detection path and fall back to CPU compression, which defeated much of the point.</p>
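A hypothetical version of that device-detection path (the renderer strings and regex here are illustrative; real code would read the GL_RENDERER string from the driver at startup):

```typescript
// Decide once at startup whether the compute-shader compressor is safe
// to use, based on the renderer string the GL driver reports.
function useGpuCompressor(renderer: string): boolean {
  // Adreno 53x (530, 535, ...) miscompiled the shader, so force the
  // CPU fallback on those chips.
  return !/Adreno \(TM\) 53\d/.test(renderer);
}
```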
]]></description><pubDate>Mon, 30 Mar 2026 09:11:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47572113</link><dc:creator>e7h4nz</dc:creator><comments>https://news.ycombinator.com/item?id=47572113</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47572113</guid></item><item><title><![CDATA[New comment by e7h4nz in "HD Audio Driver for Windows 98SE / Me"]]></title><description><![CDATA[
<p>The end of the README really resonated with me.<p>I have a huge stack of embedded development boards at home: all kinds of SBCs, microcontrollers, FPGAs, and more. As a hobbyist I've bought them steadily over the years, and I've definitely bought far more than I've actually used.<p>Before LLMs came around, the friction was in wading through the assorted manuals and spec requirements. Just setting up the environment could consume a project's entire focus before I even started writing the actual logic.<p>Now I use cc to handle the inner layers that aren't really part of the creative design. The most interesting part of a project is the creative process, so I let cc absorb the environmental noise: linking errors, driver conflicts, register initialization failures, and so on, leaving me free to focus on the work itself.<p>The LLM researches and writes the code, while the human makes the architectural decisions. That seems like the right way to divide the work.</p>
]]></description><pubDate>Mon, 30 Mar 2026 08:26:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47571837</link><dc:creator>e7h4nz</dc:creator><comments>https://news.ycombinator.com/item?id=47571837</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47571837</guid></item><item><title><![CDATA[New comment by e7h4nz in "VHDL's Crown Jewel"]]></title><description><![CDATA[
<p>On a practical level, you're right, most of my team's work is done in Verilog.<p>That being said, I still have a preference for the VHDL simulation model. A design that builds correctness directly into the language structure is inherently more elegant than one that relies on coding conventions to constrain behavior.</p>
]]></description><pubDate>Mon, 30 Mar 2026 06:22:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47571026</link><dc:creator>e7h4nz</dc:creator><comments>https://news.ycombinator.com/item?id=47571026</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47571026</guid></item><item><title><![CDATA[New comment by e7h4nz in "Hardware Image Compression"]]></title><description><![CDATA[
<p>The irony of hardware image compression is that the devices that need it most are typically older, bandwidth-constrained SoCs. However, these are precisely the devices that do not support modern formats.<p>Technologies like ARM AFRC and PVRIC4 can only be used on modern flagship devices. Since flagship memory bandwidth isn't particularly strained to begin with, we end up spending a massive amount of effort on optimizations that only benefit a fraction of users. In most cases, teams are simply unwilling to pay that development cost.<p>The driver behavior of PVRIC4 perfectly encapsulates the current state of mobile GPU development:
1. The API promises support for flexible compression ratios.
2. The driver silently ignores your request and defaults to 1:2 regardless.
3. You only discover this because a PowerVR developer quietly confirmed it in a random comment section.<p>This is a microcosm of the "texture compression hell" we face. Beyond the mess of format fragmentation, even the driver layer is now fragmented. You can't trust the hardware, and you can't trust the software.<p>While the test results for ARM AFRC are genuinely impressive (it's not easy to outperform a software encoder on quality), it remains problematic: as long as you cannot guarantee consistent behavior for a single codebase across vendors, real-time CPU and GPU encoders remain the only pragmatic choice.<p>For now, hardware compression encoders are "nice-to-haves" rather than reliable infrastructure. Has anyone used AFRC in a production environment? If so, I'd love to know how your fallback strategy was designed.</p>
]]></description><pubDate>Mon, 30 Mar 2026 06:06:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=47570930</link><dc:creator>e7h4nz</dc:creator><comments>https://news.ycombinator.com/item?id=47570930</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47570930</guid></item><item><title><![CDATA[New comment by e7h4nz in "VHDL's Crown Jewel"]]></title><description><![CDATA[
<p>The Delta Cycle logic is actually quite similar to functional reactive programming. It separates how a value changes from when a process responds to that change.<p>VHDL had this figured out as early as 1987. I spent many years writing Verilog test benches and chasing numerous race conditions; those types of bugs simply don't exist in VHDL.<p>The Verilog rules—using non-blocking assignments for sequential logic and blocking assignments for combinational logic—fail as soon as the scenario becomes slightly complex. Verilog is suitable when you already have the circuit in your head and just need to write it down quickly. In contrast, VHDL forces you to think about concurrent processes in the correct way. While the former is faster to write, the latter is the correct approach.<p>Even though SystemVerilog added some patches, the underlying execution model still has inherent race conditions.</p>
]]></description><pubDate>Mon, 30 Mar 2026 05:54:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47570858</link><dc:creator>e7h4nz</dc:creator><comments>https://news.ycombinator.com/item?id=47570858</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47570858</guid></item><item><title><![CDATA[New comment by e7h4nz in "Vm0"]]></title><description><![CDATA[
<p>Hi, I'm VM0's developer. Happy to answer your questions.<p>I fully agree: without clear architecture docs, I wouldn't trust an infra service either. We're working on technical documentation now.<p>Here's a quick summary of our architecture: we use E2B's managed sandboxes (Firecracker microVMs), and we're also working on our own Firecracker runner implementation (independent of E2B) with experimental network firewall features.<p>We use E2B because it's easy to start with and needs no infrastructure of your own, while self-hosting gives developers full control: custom security policies, running on your own infra.<p>We're at an early stage and planning to release at the end of Jan. Detailed architecture docs are coming soon. Feedback welcome!</p>
]]></description><pubDate>Tue, 20 Jan 2026 02:48:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=46687332</link><dc:creator>e7h4nz</dc:creator><comments>https://news.ycombinator.com/item?id=46687332</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46687332</guid></item></channel></rss>