<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: jasisz</title><link>https://news.ycombinator.com/user?id=jasisz</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 30 Apr 2026 17:53:44 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=jasisz" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by jasisz in "Show HN: Aver – a language designed for AI to write and humans to review"]]></title><description><![CDATA[
<p>Tooling tax is a real argument. But “just use macros + linting” usually gives you policy-flavored Rust, not a genuinely different language model. That works if enforcement is the goal; it works much less well if the artifact itself is meant to read differently.</p>
]]></description><pubDate>Thu, 12 Mar 2026 09:22:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47348300</link><dc:creator>jasisz</dc:creator><comments>https://news.ycombinator.com/item?id=47348300</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47348300</guid></item><item><title><![CDATA[New comment by jasisz in "Show HN: Aver – a language designed for AI to write and humans to review"]]></title><description><![CDATA[
<p>Author here.<p>Aver is an experimental statically typed language for AI-written, human-reviewed code.<p>What’s different is that intent (`?`), explicit effects (`!`), design decisions (`decision`), and behavior checks (`verify`) are part of the source itself rather than split across code, comments, docs, and tests.<p>If you want to evaluate it quickly, I’d suggest these places:<p>- medium-sized example:
<a href="https://github.com/jasisz/aver/tree/main/projects/workflow_engine" rel="nofollow">https://github.com/jasisz/aver/tree/main/projects/workflow_e...</a><p>- Lean proof export for the pure subset:
<a href="https://github.com/jasisz/aver/tree/main/docs/lean.md" rel="nofollow">https://github.com/jasisz/aver/tree/main/docs/lean.md</a><p>- examples of effectful programs / replay:
<a href="https://github.com/jasisz/aver/tree/main/examples/services" rel="nofollow">https://github.com/jasisz/aver/tree/main/examples/services</a><p>The main question I’m testing is whether this deserves to be a language, or whether the same idea should just be tooling and conventions on top of an existing language.</p>
]]></description><pubDate>Wed, 11 Mar 2026 10:54:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47333974</link><dc:creator>jasisz</dc:creator><comments>https://news.ycombinator.com/item?id=47333974</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47333974</guid></item><item><title><![CDATA[Show HN: Aver – a language designed for AI to write and humans to review]]></title><description><![CDATA[
<p>I’ve been building Aver around a simple question:<p>If AI is going to write more of the first draft, what should the source look like for the human reviewer?<p>Aver is an experimental statically typed language and toolchain for AI-written, human-reviewed code.<p>My bet is that source code should carry more than implementation. In most projects, implementation survives in the code, but intent lives in docs, decisions live in ADRs or tickets, and expected behavior lives in tests that may or may not stay close to what they describe.<p>Aver tries to make those parts first-class: explicit method-level effects in function signatures; ? strings for machine-readable function intent; decision blocks for design choices and tradeoffs; colocated verify blocks for pure functions; deterministic record/replay for effectful flows; aver context for compact contract-level module export; aver compile to Rust; and aver proof to Lean 4 for mechanical proof checking of the pure subset.<p>A small pure example:<p><pre><code>  fn charge(account: String, amount: Int) -> Result<String, String>
      ? "Pure charge validation and transaction-id creation."
      match amount
          0 -> Result.Err("Cannot charge zero")
          _ -> Result.Ok("txn-{account}-{amount}")

  verify charge
      charge("alice", 100) => Result.Ok("txn-alice-100")
      charge("bob", 0)     => Result.Err("Cannot charge zero")
</code></pre>
A reviewer can see at a glance what the function is for (`?`) and a machine-checkable example of expected behavior (`verify`).<p>An effectful wrapper looks like this:<p><pre><code>  fn chargeAndPrint(account: String, amount: Int) -> Result<Unit, String>
      ? "Effectful wrapper around charge."
        "Prints the transaction id on success."
      ! [Console.print]
      result = charge(account, amount)
      match result
          Result.Ok(txn) ->
              Console.print(txn)
              Result.Ok(Unit)
          Result.Err(err) ->
              Result.Err(err)
</code></pre>
The `!` makes side effects part of the signature rather than hidden inside the implementation.<p>Aver is intentionally opinionated: no exceptions, no null, no `if`/`else`, no loops, no closures. Branching goes through `match`, failure through `Result`, absence through `Option`, and side effects are explicit.<p>The repo includes small examples, but also `projects/workflow_engine`, which is my attempt at a medium-sized auditable application core with app/domain/infra split, replayable effectful flows, and verify-driven pure logic.<p>This is still early. I’m not claiming everyone should replace mainstream languages with Aver.<p>The narrower question I’m testing is whether making intent, decisions, checks, and effect boundaries machine-readable inside the source makes AI-produced code easier to review, constrain, and trust.<p>I’d especially like feedback on whether this feels like a language worth existing, or whether the same idea should just be conventions and tooling on top of an existing language.</p>
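For readers who want the pure example in more familiar terms, here is a rough Python sketch of `charge` and its `verify` block, with tuples standing in for `Result`. This is a paraphrase for illustration only, not Aver's actual compilation output:

```python
def charge(account: str, amount: int):
    """Pure charge validation and transaction-id creation (Python paraphrase)."""
    if amount == 0:  # Aver itself would use `match`, not `if`
        return ("Err", "Cannot charge zero")
    return ("Ok", f"txn-{account}-{amount}")

# The colocated verify block becomes two executable examples:
assert charge("alice", 100) == ("Ok", "txn-alice-100")
assert charge("bob", 0) == ("Err", "Cannot charge zero")
```

The point of colocating the checks is that they travel with the function, rather than living in a separate test file.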
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47333971">https://news.ycombinator.com/item?id=47333971</a></p>
<p>Points: 7</p>
<p># Comments: 4</p>
]]></description><pubDate>Wed, 11 Mar 2026 10:53:42 +0000</pubDate><link>https://github.com/jasisz/aver</link><dc:creator>jasisz</dc:creator><comments>https://news.ycombinator.com/item?id=47333971</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47333971</guid></item><item><title><![CDATA[New comment by jasisz in "Introduction to Monte Carlo Tree Search"]]></title><description><![CDATA[
<p>From the "An analysis of UCT in multi-player games", Nathan Sturtevant, 2008:
"Multi-player UCT is nearly identical to regular UCT. At the highest level of the algorithm, the tree is repeatedly sampled until it is time to make an action. The sampling process is illustrated in Figure 2. The only difference between this code and a two-player implementation is that in line 5 the average score for player p is used instead of a single average payoff for the state."<p>I think this is a clear statement of something the original paper (and much of the writing that followed it) may be lacking. Of course people used this simple generalization before, and it is pretty straightforward, but it is not that obvious at first glance.
And I've seen quite a lot of code examples, diagrams explaining UCT for games, and articles that did not say a word about this. Or even worse - that just did it wrong for multiplayer games.<p>Choice of action is a different topic; if I remember correctly, there was also a paper proving that the highest win rate and the most robust branch end up performing the same ;)<p>I hope you will continue this series, because it is really good and the code examples are really nice!</p>
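Sturtevant's per-player change is small enough to sketch. This is a minimal, illustrative Python version (the `Node` class and all names are mine, not from the paper): each node stores one cumulative score per player, and selection exploits the average score of the player who is to act at that node.

```python
import math

class Node:
    """Tree node for multi-player UCT: one cumulative score per player."""
    def __init__(self, player, n_players, parent=None):
        self.player = player                  # player to act at this node
        self.parent = parent
        self.children = []
        self.visits = 0
        self.score_sums = [0.0] * n_players   # cumulative payoff, per player

    def ucb(self, child, c=1.4):
        # The only multi-player change: exploit the average score of the
        # player acting at *this* node, not a single shared payoff.
        exploit = child.score_sums[self.player] / child.visits
        explore = c * math.sqrt(math.log(self.visits) / child.visits)
        return exploit + explore

    def best_child(self, c=1.4):
        return max(self.children, key=lambda ch: self.ucb(ch, c))

def backpropagate(node, payoffs):
    """Add the simulation's full payoff vector along the path to the root."""
    while node is not None:
        node.visits += 1
        for p, score in enumerate(payoffs):
            node.score_sums[p] += score
        node = node.parent
```

Note that backpropagation must carry the whole payoff vector, which is exactly the per-player bookkeeping most two-player examples omit.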
]]></description><pubDate>Sun, 13 Sep 2015 21:13:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=10212803</link><dc:creator>jasisz</dc:creator><comments>https://news.ycombinator.com/item?id=10212803</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=10212803</guid></item><item><title><![CDATA[New comment by jasisz in "Introduction to Monte Carlo Tree Search"]]></title><description><![CDATA[
<p>In your example script, yes, but basic UCT does not do that - simply because UCT was not originally meant for multi-player games. And this is an assumption we make about our opponents (namely, that they want to maximize their payout, not e.g. to win or to minimize our score). Of course this is a very straightforward "application" of UCT or MCTS to multiplayer games, which was done in the 2008 works of Sturtevant and Cazenave.<p>But it is not easy to learn about this; for example, this is a very recent change to the Wikipedia page on the topic
<a href="https://en.wikipedia.org/w/index.php?title=Monte_Carlo_tree_search&action=historysubmit&type=revision&diff=671256114&oldid=667621950" rel="nofollow">https://en.wikipedia.org/w/index.php?title=Monte_Carlo_tree_...</a> and people writing on the topic very often do not point this out at all, which I find strange and misleading.<p>Also, in classic MCTS you should select the move with the most visits, not the one with the highest percentage of wins.</p>
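On that last point, the two final-move rules are easy to contrast in a small sketch (the dict layout and numbers are just for illustration): the "robust child" rule picks the most-visited child, while picking by raw win rate can prefer a barely-sampled move.

```python
def select_move_robust(children):
    """Classic MCTS final choice: the most-visited child ("robust child")."""
    return max(children, key=lambda ch: ch["visits"])

def select_move_by_winrate(children):
    """The tempting alternative: highest observed win rate ("max child")."""
    return max(children, key=lambda ch: ch["wins"] / ch["visits"])

children = [
    {"move": "a", "visits": 900, "wins": 500},  # ~55% over many samples
    {"move": "b", "visits": 10,  "wins": 8},    # 80% over very few samples
]
# The robust rule picks "a" (a well-supported estimate); the win-rate rule
# picks "b" on the strength of only ten simulations.
```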
]]></description><pubDate>Sun, 13 Sep 2015 18:25:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=10212281</link><dc:creator>jasisz</dc:creator><comments>https://news.ycombinator.com/item?id=10212281</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=10212281</guid></item><item><title><![CDATA[New comment by jasisz in "Introduction to Monte Carlo Tree Search"]]></title><description><![CDATA[
<p>This is exactly the problem I've written about - AFAIK the basic UCT algorithm does not model your opponent in the selection phase (only in simulation, when logic heavier than random playouts is applied).</p>
]]></description><pubDate>Sun, 13 Sep 2015 15:33:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=10211700</link><dc:creator>jasisz</dc:creator><comments>https://news.ycombinator.com/item?id=10211700</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=10211700</guid></item><item><title><![CDATA[New comment by jasisz in "Introduction to Monte Carlo Tree Search"]]></title><description><![CDATA[
<p>But this requires storing and back-propagating this information for the other players - something I really haven't seen in any examples (nor in this article). We also cannot assume the game is always zero-sum, which would make this information unnecessary.</p>
]]></description><pubDate>Sun, 13 Sep 2015 13:10:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=10211266</link><dc:creator>jasisz</dc:creator><comments>https://news.ycombinator.com/item?id=10211266</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=10211266</guid></item><item><title><![CDATA[New comment by jasisz in "Introduction to Monte Carlo Tree Search"]]></title><description><![CDATA[
<p>This algorithm (in its regular form, often used in games and examples) has one interesting "downside" I was exploring some time ago: selection is performed using the UCB formula, so it basically tries to maximize the player's payout. But in most games this is an unrealistic assumption, because we end up tending to expand branches that will most likely not be chosen by our opponent. As in the example (I assume the gray moves are "our" moves), we are much more likely to expand the 5/6 branch instead of the 2/4 branch, which is in fact the one our opponent is more likely to choose.</p>
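A small numeric sketch of this with plain UCB1 (values chosen to mirror the 5/6 vs. 2/4 branches; the constant and numbers are illustrative): applying the formula from "our" perspective at the opponent's node ranks the branch the opponent would avoid above the one they would actually pick.

```python
import math

def ucb1(wins, visits, parent_visits, c=1.4):
    """UCB1 value as used in the selection phase of plain UCT."""
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

# Naive selection from "our" perspective at an opponent's node:
branch_good_for_us = ucb1(5, 6, parent_visits=10)     # opponent would avoid
branch_opponent_picks = ucb1(2, 4, parent_visits=10)  # opponent's likely move
# branch_good_for_us scores higher, so plain UCT keeps expanding a line
# of play that will rarely actually occur.
```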
]]></description><pubDate>Sun, 13 Sep 2015 09:40:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=10210810</link><dc:creator>jasisz</dc:creator><comments>https://news.ycombinator.com/item?id=10210810</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=10210810</guid></item></channel></rss>