<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: syntex</title><link>https://news.ycombinator.com/user?id=syntex</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 06 May 2026 08:26:39 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=syntex" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by syntex in "DeepClaude – Claude Code agent loop with DeepSeek V4 Pro"]]></title><description><![CDATA[
<p>Thanks! I wasn't aware of Jido or ReqLLM before. ReqLLM looks especially promising, and I will likely use it. At the moment, I've only integrated with OpenRouter.</p>
]]></description><pubDate>Mon, 04 May 2026 13:26:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=48008467</link><dc:creator>syntex</dc:creator><comments>https://news.ycombinator.com/item?id=48008467</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48008467</guid></item><item><title><![CDATA[New comment by syntex in "DeepClaude – Claude Code agent loop with DeepSeek V4 Pro"]]></title><description><![CDATA[
<p>It's semi-public, but I'll probably publish it soon, once it's less embarrassing.<p>It's an Elixir agent runtime with a thin Go TUI (Bubble Tea). I'm building it mostly to explore agent orchestration: planner/worker/finalizer flows, local file/code-edit tools, MCP tools, permission gates, run context, compaction, and eventually larger swarms. Erlang/Elixir is interesting for this because the actor/supervision model maps pretty naturally onto lots of isolated agents and long-running supervised tasks.<p>As I said, the main lesson so far is that everything around contracts is much more fragile than I expected unless you use a very strong model. Planners return Markdown instead of JSON, tools get called with subtly wrong args, subagents repeat broken tool calls, and finalizers lie about success after workers have failed. And agents may interpret the various permissions in unexpected ways.<p>I also started with too many modes too early, instead of first making the agentic path extremely solid. That made me understand better why these codebases become huge: there are endless corner cases if you want a harness to work across models, providers, tools...<p>Stronger models hide a lot of harness weakness, and weaker models expose it. Making weaker models good enough requires a surprising amount of contract hardening. But that hardening tends to make the system better for stronger models too.<p>Also, the Elixir HTTP stack was causing a lot of problems (I eventually needed to switch to gun).</p>
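A minimal Python sketch of one piece of that contract hardening: tolerating planner output that wraps the JSON tool call in Markdown instead of rejecting it outright. The `name`/`args` shape here is a hypothetical contract for illustration, not the actual runtime's format.

```python
import json
import re

def extract_tool_call(raw: str):
    """Try to recover a JSON tool call from model output that may be
    wrapped in Markdown fences or surrounded by prose.

    Assumed (hypothetical) contract: a tool call is a JSON object with
    a "name" string and an "args" object.
    """
    # Strip a Markdown code fence if the planner returned one.
    fenced = re.search(r"`{3}(?:json)?\s*(\{.*?\})\s*`{3}", raw, re.DOTALL)
    candidate = fenced.group(1) if fenced else raw
    try:
        call = json.loads(candidate)
    except json.JSONDecodeError:
        # Fall back to the first {...} block anywhere in the text.
        brace = re.search(r"\{.*\}", raw, re.DOTALL)
        if brace is None:
            return None
        try:
            call = json.loads(brace.group(0))
        except json.JSONDecodeError:
            return None
    # Minimal contract check before dispatching to a tool.
    if not isinstance(call, dict) or "name" not in call or not isinstance(call.get("args"), dict):
        return None
    return call
```

In practice a harness would layer retries and error feedback on top of this, but even a small normalization step like this removes a whole class of "planner returned Markdown" failures.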
]]></description><pubDate>Mon, 04 May 2026 12:36:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=48007922</link><dc:creator>syntex</dc:creator><comments>https://news.ycombinator.com/item?id=48007922</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48007922</guid></item><item><title><![CDATA[New comment by syntex in "DeepClaude – Claude Code agent loop with DeepSeek V4 Pro"]]></title><description><![CDATA[
<p>I'm not sure you can replace Claude with DeepSeek V4 that easily and get the same results.<p>From what I've seen while building my own agentic system in Elixir, the problem is training for your specific harness/contracts. Claude/GPT-style models seem to be trained around the very specific contracts used by the harness, like tool-call formats, planning structure, patching, reading files, recovering from errors, and knowing when to stop.<p>In practice, you either need a very strong general model that can infer and follow those contracts (expensive), or a weaker model that has been fine-tuned / trained specifically on your own agent contracts. Otherwise, the whole thing becomes flaky very quickly. And I suspect that with DeepSeek V4 you'd end up needing the latter.</p>
]]></description><pubDate>Mon, 04 May 2026 08:39:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=48006112</link><dc:creator>syntex</dc:creator><comments>https://news.ycombinator.com/item?id=48006112</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48006112</guid></item><item><title><![CDATA[New comment by syntex in "Kimi K2.6 just beat Claude, GPT-5.5, and Gemini in a coding challenge"]]></title><description><![CDATA[
<p>These benchmarks mean very little. The real test is model + harness, i.e. an agentic system that can fulfill given goals.</p>
]]></description><pubDate>Sun, 03 May 2026 10:51:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=47995630</link><dc:creator>syntex</dc:creator><comments>https://news.ycombinator.com/item?id=47995630</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47995630</guid></item><item><title><![CDATA[New comment by syntex in "Ternary Bonsai: Top Intelligence at 1.58 Bits"]]></title><description><![CDATA[
<p>Hallucinates in pretty much every answer.</p>
]]></description><pubDate>Tue, 21 Apr 2026 06:19:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47845234</link><dc:creator>syntex</dc:creator><comments>https://news.ycombinator.com/item?id=47845234</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47845234</guid></item><item><title><![CDATA[Will consciousness be the only thing humans have left?]]></title><description><![CDATA[
<p>I have been thinking about how we slowly give our mental tasks to technology. Each major invention takes a job away from our brains:<p>Writing -> We stopped needing to remember everything (outsourced memory)<p>Printing Press -> We stopped needing to copy knowledge by hand<p>Calculators -> We stopped doing math in our heads<p>The Internet -> We stopped needing to "know" facts. We just look them up<p>LLMs -> We are starting to give away reasoning and combining ideas<p>What is next?<p>Soon, AI will likely handle:<p>- Planning<p>- Ideas<p>- Execution<p>What remains for us?<p>If AI does the "thinking" and the "doing," maybe humans are left with only:<p>- Experiencing: Feeling what it is like to be alive<p>- Choosing: Deciding what we actually want<p>- Valuing: Deciding what is important or "good."<p>- Being aware: Just being a witness to the process<p>And I'm not sure these are "safe" human roles. Will AI eventually take over "choosing" and "valuing" too? If an algorithm knows what you want before you do, is our consciousness still in control?</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46968298">https://news.ycombinator.com/item?id=46968298</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Tue, 10 Feb 2026 23:05:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=46968298</link><dc:creator>syntex</dc:creator><comments>https://news.ycombinator.com/item?id=46968298</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46968298</guid></item><item><title><![CDATA[New comment by syntex in "Prism"]]></title><description><![CDATA[
<p>Yes, I did it as a joke inspired by the PRISM release. But unexpectedly, it makes a good point. And the funny part for me was that the paper lists only LLMs as authors.<p>Also, in a world where AI output is abundant, we humans become the scarce resource: the "tools" in the system that provide some connection to reality (grounding) for the LLM.</p>
]]></description><pubDate>Wed, 28 Jan 2026 08:48:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=46792703</link><dc:creator>syntex</dc:creator><comments>https://news.ycombinator.com/item?id=46792703</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46792703</guid></item><item><title><![CDATA[New comment by syntex in "Prism"]]></title><description><![CDATA[
<p>The Post-LLM World: Fighting Digital Garbage <a href="https://archive.org/details/paper_20260127/mode/2up" rel="nofollow">https://archive.org/details/paper_20260127/mode/2up</a><p>Mini paper: the future isn't AI replacing humans; it's humans drowning in cheap artifacts.
New unit of measurement proposed: verification debt.
Also introduces: Recursive Garbage → model collapse<p>A little joke on Prism :)</p>
]]></description><pubDate>Tue, 27 Jan 2026 23:08:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=46788478</link><dc:creator>syntex</dc:creator><comments>https://news.ycombinator.com/item?id=46788478</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46788478</guid></item><item><title><![CDATA[New comment by syntex in "Building a Minimal Viable Armv7 Emulator from Scratch"]]></title><description><![CDATA[
<p>I see the author is decorating the website for Christmas :)</p>
]]></description><pubDate>Fri, 21 Nov 2025 15:09:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=46005277</link><dc:creator>syntex</dc:creator><comments>https://news.ycombinator.com/item?id=46005277</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46005277</guid></item><item><title><![CDATA[New comment by syntex in "Magistral — the first reasoning model by Mistral AI"]]></title><description><![CDATA[
<p>I didn't downvote. The problem with the paper is that it asks the model to output all moves for, say, 15 disks: 2^15 - 1 = 32767.<p>32767 moves in a single prompt. That's not testing reasoning. That's testing whether the model can emit a huge structured output without error, under a context-window limit.<p>The authors then treat failure to reproduce this entire sequence as evidence that the model can't reason. But that's like saying a calculator is broken because its printer jammed halfway through printing all prime numbers under 10000.<p>For me, o3 returning Python code isn't a failure. It's a smart shortcut. The failure is in the benchmark design. This benchmark just smells.</p>
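The arithmetic above is easy to check; a short Python sketch of the move count and the optimal move sequence:

```python
# Tower of Hanoi: the optimal solution for n disks takes 2**n - 1 moves,
# so the required output grows exponentially with n.
def hanoi_moves(n: int) -> int:
    return 2 ** n - 1

def hanoi(n: int, src: str = "A", aux: str = "B", dst: str = "C") -> list:
    """Recursively generate the optimal move sequence as (from, to) pairs."""
    if n == 0:
        return []
    return hanoi(n - 1, src, dst, aux) + [(src, dst)] + hanoi(n - 1, aux, src, dst)

print(hanoi_moves(15))  # 32767 moves for the 15-disk case
```

The script itself fits in a few lines, which is exactly the point: emitting the program is cheap, emitting its 32767-line output verbatim is not.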
]]></description><pubDate>Tue, 10 Jun 2025 17:30:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=44239260</link><dc:creator>syntex</dc:creator><comments>https://news.ycombinator.com/item?id=44239260</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44239260</guid></item><item><title><![CDATA[New comment by syntex in "Magistral — the first reasoning model by Mistral AI"]]></title><description><![CDATA[
<p>The Illusion of Reasoning was a terrible paper. 2^n - 1 moves: how could that ever fit in the context size? I tried o3, and it gave me a Python script, explaining that writing out all the moves is too much for the context window. Completely different results.</p>
]]></description><pubDate>Tue, 10 Jun 2025 14:35:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=44237321</link><dc:creator>syntex</dc:creator><comments>https://news.ycombinator.com/item?id=44237321</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44237321</guid></item><item><title><![CDATA[New comment by syntex in "For $595, you get what nobody else can give you for twice the price (1982) [pdf]"]]></title><description><![CDATA[
<p>The same for me. I only knew how to assign variables, use for loops, IF->THEN, and the POKE command. And from that point on I started thinking of myself as a programmer, even though the only thing I ever wrote in C64 BASIC was a ball moving on the screen. :)</p>
]]></description><pubDate>Sun, 11 May 2025 12:56:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=43953492</link><dc:creator>syntex</dc:creator><comments>https://news.ycombinator.com/item?id=43953492</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43953492</guid></item><item><title><![CDATA[New comment by syntex in "For $595, you get what nobody else can give you for twice the price (1982) [pdf]"]]></title><description><![CDATA[
<p>I think there were alignment programs for the Datasette. They played a constant tone or signal that would show whether the head was properly aligned. I think it came on a cartridge that I didn't have. And actually, as a young kid, I didn't know about this alignment thing at all. I learned about it years later, after switching to an Amiga 500.</p>
]]></description><pubDate>Sun, 11 May 2025 12:54:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=43953471</link><dc:creator>syntex</dc:creator><comments>https://news.ycombinator.com/item?id=43953471</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43953471</guid></item><item><title><![CDATA[New comment by syntex in "For $595, you get what nobody else can give you for twice the price (1982) [pdf]"]]></title><description><![CDATA[
<p>I bought my C64 very late, around 1991/1992. It was in Poland, where I bought a used one from a friend. Back then, Eastern Europe was a decade behind Western Europe. Two years later, I purchased a used disk drive. So, for two years, I could only run cartridges like Boulder Dash (I managed to synchronize the tape drive properly only once and played "Winter Games"). But out of that boredom, I started programming in BASIC, always dreaming about creating the perfect text-based game ;p</p>
]]></description><pubDate>Sat, 10 May 2025 22:00:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=43949344</link><dc:creator>syntex</dc:creator><comments>https://news.ycombinator.com/item?id=43949344</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43949344</guid></item><item><title><![CDATA[New comment by syntex in "GPT 4.5 level for 1% of the price"]]></title><description><![CDATA[
<p>Cheaper hardware usually means more adoption of the software, and then even more demand for hardware.</p>
]]></description><pubDate>Sun, 16 Mar 2025 11:58:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=43378318</link><dc:creator>syntex</dc:creator><comments>https://news.ycombinator.com/item?id=43378318</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43378318</guid></item><item><title><![CDATA[New comment by syntex in "US Ends Support For Ukrainian F-16s"]]></title><description><![CDATA[
<p>I wonder why Poland and other European countries are still buying the F-35.</p>
]]></description><pubDate>Sun, 09 Mar 2025 15:32:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=43309947</link><dc:creator>syntex</dc:creator><comments>https://news.ycombinator.com/item?id=43309947</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43309947</guid></item><item><title><![CDATA[New comment by syntex in "DeepSeek Open Source FlashMLA – MLA Decoding Kernel for Hopper GPUs"]]></title><description><![CDATA[
<p>What can I do with that?</p>
]]></description><pubDate>Mon, 24 Feb 2025 16:31:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=43161449</link><dc:creator>syntex</dc:creator><comments>https://news.ycombinator.com/item?id=43161449</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43161449</guid></item><item><title><![CDATA[New comment by syntex in "Very Wrong Math"]]></title><description><![CDATA[
<p>It's just 2πR, and the extra h changes the result by only a tiny fraction. How is that counter-intuitive? :)</p>
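A quick numeric check of the point above, using an approximate Earth radius:

```python
import math

# C = 2*pi*R, so raising the radius by h adds 2*pi*h to the circumference,
# independent of R; for an Earth-sized R that is a tiny relative change.
R = 6_371_000.0  # approximate mean Earth radius in metres (an assumption)
h = 1.0          # raise the circle by one metre

extra = 2 * math.pi * (R + h) - 2 * math.pi * R
print(round(extra, 2))            # about 6.28 extra metres
print(extra / (2 * math.pi * R))  # roughly 1.6e-7 of the original circumference
```

The extra length is always 2πh no matter how big R is, which is the part usually sold as counter-intuitive.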
]]></description><pubDate>Sat, 11 Jan 2025 08:28:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=42664314</link><dc:creator>syntex</dc:creator><comments>https://news.ycombinator.com/item?id=42664314</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42664314</guid></item><item><title><![CDATA[New comment by syntex in "Agents Are Not Enough"]]></title><description><![CDATA[
<p>Why does this have so many upvotes? Is this the current state of research nowadays?</p>
]]></description><pubDate>Thu, 09 Jan 2025 22:54:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=42650696</link><dc:creator>syntex</dc:creator><comments>https://news.ycombinator.com/item?id=42650696</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42650696</guid></item><item><title><![CDATA[New comment by syntex in "Video Chess disassembled and commented"]]></title><description><![CDATA[
<p>You can play this game here: free80sarcade.com/atari2600_VideoChess.php</p>
]]></description><pubDate>Thu, 22 Jun 2023 16:15:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=36434454</link><dc:creator>syntex</dc:creator><comments>https://news.ycombinator.com/item?id=36434454</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36434454</guid></item></channel></rss>