<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: concensure</title><link>https://news.ycombinator.com/user?id=concensure</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 11:23:24 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=concensure" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by concensure in "Show HN: Flight-Viz – 10K flights on a 3D globe in 3.5MB of Rust+WASM"]]></title><description><![CDATA[
<p>Did you pay for a flight API? Getting comprehensive real-time flight data is quite expensive</p>
]]></description><pubDate>Thu, 02 Apr 2026 00:58:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=47608747</link><dc:creator>concensure</dc:creator><comments>https://news.ycombinator.com/item?id=47608747</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47608747</guid></item><item><title><![CDATA[New comment by concensure in "[dead]"]]></title><description><![CDATA[
<p>The Problem: Most RAG-based coding tools treat code as unstructured text, relying on probabilistic vector search that often misses critical functional dependencies. This leads to an "Edit-Fail-Retry" loop in which the LLM consumes more time and money through repeated failures.<p>The Solution: Semantic uses a local AST (Abstract Syntax Tree) parser to build a Logical Node Graph of the codebase. Instead of guessing at relevance, it deterministically retrieves the specific function skeletons and call-site signatures required for a task.
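The node-graph idea can be sketched in a few lines of Python's stdlib `ast` module (a toy illustration, not Semantic's actual parser; the graph shape here, a function-to-callees map, is an assumption):

```python
import ast

SOURCE = """
def load(path):
    return open(path).read()

def process(path):
    data = load(path)
    return data.upper()
"""

def build_node_graph(source):
    """Map each top-level function to the plain-name calls inside it."""
    tree = ast.parse(source)
    graph = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            calls = {
                c.func.id
                for c in ast.walk(node)
                if isinstance(c, ast.Call) and isinstance(c.func, ast.Name)
            }
            graph[node.name] = sorted(calls)
    return graph

print(build_node_graph(SOURCE))  # {'load': ['open'], 'process': ['load']}
```

Because the graph is built from the syntax tree rather than embeddings, retrieval over it is deterministic: asking for the context of `process` always surfaces `load`.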
The Shift: From "Token Savings" to "Step Savings"<p>Earlier versions of this project focused on minimizing tokens per call. However, our latest benchmarks show that investing more tokens into high-precision context leads to significantly fewer developer intervention steps.
Latest A/B Benchmark (2026-03-27)<p><pre><code>    Provider: OpenAI (gpt-4o / o1)

    Suite: 11-task core suite (atomic coding tasks)

    Configuration: autoroute_first=true, single_file_fast_path=false
</code></pre>
<pre><code>    Run Variant            Token Delta (per call)  Step Savings (vs Baseline)  Task Success
    Baseline (2026-03-13)  -18.62%                 —                           11/11
    Hardened A             +8.07%                  —                           11/11
    Enhanced (2026-03-27)  -6.73%                  +27.78%                     11/11
</code></pre>
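For concreteness, here is the arithmetic behind a step-savings figure, assuming it is the relative reduction in intervention steps versus the baseline run (the step counts below are hypothetical, chosen only so the ratio lands on +27.78%):

```python
def step_savings(baseline_steps, run_steps):
    """Relative reduction in intervention steps vs the baseline run."""
    return (baseline_steps - run_steps) / baseline_steps

# Hypothetical totals: 18 baseline steps vs 13 with the enhanced context.
print(f"{step_savings(18, 13):+.2%}")  # +27.78%
```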
Key Takeaways:<p><pre><code>    The ROI of Precision: While the "Enhanced" run spent more tokens per request than the Baseline (a -6.73% token delta against the Baseline's -18.62%), it required 27.78% fewer steps to reach a successful solution.

    Deterministic Accuracy: By feeding the LLM a "Logical Skeleton" rather than fuzzy similarity-search chunks, we eliminate the "lost in the middle" effect. The agent understands the consequences of an edit before it writes a single line.

    Context Density: We are effectively spending cheap input tokens to save expensive developer time and agent compute cycles.
</code></pre>
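A "Logical Skeleton" of the kind described above can be rendered by walking the same syntax tree and eliding bodies (again a minimal sketch with Python's `ast`; the `def name(args): ...` output format is an assumption, not Semantic's actual wire format):

```python
import ast

SOURCE = "def load(path):\n    return open(path).read()\n"

def skeleton(source):
    """Render top-level function signatures with their bodies elided."""
    tree = ast.parse(source)
    lines = []
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            args = ", ".join(a.arg for a in node.args.args)
            lines.append(f"def {node.name}({args}): ...")
    return "\n".join(lines)

print(skeleton(SOURCE))  # def load(path): ...
```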
Detailed breakdowns of the task suite and methodology are available in docs/AB_TEST_DEV_RESULTS.md.</p>
]]></description><pubDate>Tue, 31 Mar 2026 03:50:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=47582546</link><dc:creator>concensure</dc:creator><comments>https://news.ycombinator.com/item?id=47582546</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47582546</guid></item></channel></rss>