<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: michalwarda</title><link>https://news.ycombinator.com/user?id=michalwarda</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 15 Apr 2026 00:12:21 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=michalwarda" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[Issues for OpenCode skyrocketed in the past month]]></title><description><![CDATA[
<p>Article URL: <a href="https://github-history.com/anomalyco/opencode&openclaw/openclaw&openai/codex?metrics=net">https://github-history.com/anomalyco/opencode&openclaw/openclaw&openai/codex?metrics=net</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46841500">https://news.ycombinator.com/item?id=46841500</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 31 Jan 2026 22:23:21 +0000</pubDate><link>https://github-history.com/anomalyco/opencode&amp;openclaw/openclaw&amp;openai/codex?metrics=net</link><dc:creator>michalwarda</dc:creator><comments>https://news.ycombinator.com/item?id=46841500</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46841500</guid></item><item><title><![CDATA[New comment by michalwarda in "Everyone is comparing GitHub stars. But what about issues?"]]></title><description><![CDATA[
<p>I noticed that the opencode repo gets around 100 new issues every day, and I wondered how that compares to other big repos in history. So I created <a href="https://github-history.com" rel="nofollow">https://github-history.com</a> to look into it.<p>Hope you'll like it!</p>
]]></description><pubDate>Sat, 31 Jan 2026 21:40:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=46841146</link><dc:creator>michalwarda</dc:creator><comments>https://news.ycombinator.com/item?id=46841146</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46841146</guid></item><item><title><![CDATA[Everyone is comparing GitHub stars. But what about issues?]]></title><description><![CDATA[
<p>Article URL: <a href="https://github-history.com/anomalyco/opencode&openclaw/openclaw&facebook/react&vercel/next.js?showClosed=true">https://github-history.com/anomalyco/opencode&openclaw/openclaw&facebook/react&vercel/next.js?showClosed=true</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46841145">https://news.ycombinator.com/item?id=46841145</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Sat, 31 Jan 2026 21:40:43 +0000</pubDate><link>https://github-history.com/anomalyco/opencode&amp;openclaw/openclaw&amp;facebook/react&amp;vercel/next.js?showClosed=true</link><dc:creator>michalwarda</dc:creator><comments>https://news.ycombinator.com/item?id=46841145</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46841145</guid></item><item><title><![CDATA[Show HN: The Doorstep – Voice RPG]]></title><description><![CDATA[
<p>Hey, we've created an interactive game with fully AI-generated sounds.
You can play it @ <a href="https://qforge.studio/the-doorstep" rel="nofollow">https://qforge.studio/the-doorstep</a><p>There's also a builder if you want to make games like that yourself @ <a href="https://qforge.studio" rel="nofollow">https://qforge.studio</a>. For now it's completely free with no login required, so you can just jump in and build. Would love some feedback!</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46168025">https://news.ycombinator.com/item?id=46168025</a></p>
<p>Points: 4</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 05 Dec 2025 22:08:08 +0000</pubDate><link>https://qforge.studio/the-doorstep</link><dc:creator>michalwarda</dc:creator><comments>https://news.ycombinator.com/item?id=46168025</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46168025</guid></item><item><title><![CDATA[New comment by michalwarda in "We built 270 realistic chats so agents don't freeze while tools run"]]></title><description><![CDATA[
<p>AI shouldn’t block the conversation just because a tool is busy. To evaluate that behavior properly, we needed good data—so we made it.<p>AsyncTool is a Hugging Face dataset of 270 high‑quality, multi‑turn (and I mean up to 60 turns) conversations where the assistant keeps talking while tools work in the background. Each case is different, grounded in real JSON‑Schema tool definitions, and the tool calls/results are consistent and make sense with no fabricated states or magical shortcuts.<p>What’s inside
- 18 scenario templates × 15 renders = 270 conversations.
- Conversations run 10–30 “in‑world” minutes with filler chat, retries, status checks, and out‑of‑order returns.
- Every row includes messages, tools, and meta so you can replay transcripts, inspect schemas, and trace provenance.
- Protocol features: &lt;tool_ack /&gt; placeholders, -FINAL handoffs, mixed sync/async chains, transient failures, and fatal‑error surfacing.
- License: Apache‑2.0.<p>We’re exploring how agents can ack now, answer later - waiting for the right signal (last relevant tool result vs. last user question) while staying natural and helpful. This dataset gives you supervised signals to:
- finetune assistants that acknowledge async work without hallucinating tool states,
- build guardrails/regression tests for routers juggling retries and reordered responses,
- evaluate “answered at the right time” behavior.<p>We’re also publishing the generator so you can reproduce or extend everything locally. If you’re building tool‑using agents - or just tired of UIs that freeze—this should help you train, test, and iterate faster.<p>Built with Torque → <a href="https://usetorque.dev/" rel="nofollow">https://usetorque.dev/</a></p>
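The "ack now, answer later" property described above can be sketched as a quick check over a transcript. This is an illustrative sketch only: the field names (`messages`, `toolCall`, `toolCallId`) are assumptions for the example, not the dataset's documented schema.

```typescript
// Hypothetical row shape: the real AsyncTool schema may differ.
// Each row carries messages (plus tools and meta, omitted here); we check
// that the assistant keeps talking between an async tool call and its result.

type Message =
  | { role: "user" | "assistant"; content: string }
  | { role: "assistant"; content: string; toolCall: { id: string; name: string } }
  | { role: "tool"; toolCallId: string; result: unknown };

// Returns true if, for every tool result, at least one plain assistant
// message (the "ack") appears between the originating call and the result.
function acksBeforeResults(messages: Message[]): boolean {
  return messages.every((m, i) => {
    if (m.role !== "tool") return true;
    const call = messages.findIndex(
      (c) => "toolCall" in c && c.toolCall.id === m.toolCallId
    );
    return messages
      .slice(call + 1, i)
      .some((a) => a.role === "assistant" && !("toolCall" in a));
  });
}

const row: Message[] = [
  { role: "user", content: "Book me a table for 7pm." },
  { role: "assistant", content: "On it!", toolCall: { id: "t1", name: "book_table" } },
  { role: "assistant", content: "While that runs: anything else?" }, // the ack
  { role: "tool", toolCallId: "t1", result: { status: "confirmed" } },
  { role: "assistant", content: "Done, table booked for 7pm." },
];

console.log(acksBeforeResults(row)); // → true
```

A check like this could serve as one of the regression guardrails mentioned above, flagging transcripts where the assistant goes silent while a tool runs.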
]]></description><pubDate>Mon, 10 Nov 2025 21:30:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=45881204</link><dc:creator>michalwarda</dc:creator><comments>https://news.ycombinator.com/item?id=45881204</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45881204</guid></item><item><title><![CDATA[We built 270 realistic chats so agents don't freeze while tools run]]></title><description><![CDATA[
<p>Article URL: <a href="https://huggingface.co/datasets/qforge/AsyncTool">https://huggingface.co/datasets/qforge/AsyncTool</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45881203">https://news.ycombinator.com/item?id=45881203</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 10 Nov 2025 21:30:17 +0000</pubDate><link>https://huggingface.co/datasets/qforge/AsyncTool</link><dc:creator>michalwarda</dc:creator><comments>https://news.ycombinator.com/item?id=45881203</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45881203</guid></item><item><title><![CDATA[New comment by michalwarda in "React for Datasets"]]></title><description><![CDATA[
<p>We’ve spent a while trying to generate a very specific, complex, multi‑turn conversation dataset. Most tools pushed us toward glue scripts and one‑off pipelines that were hard to review or reuse. Torque is our attempt to make this boring and predictable: a small, MIT‑licensed framework that treats dataset generation like building a UI. You define small, declarative “components” and compose them into pipelines. The goal is clear code and repeatable runs, not another heavy DSL.
It’s early and open source. We’d love feedback on the API design, examples you’d like to see, and rough edges we should fix first.
Docs and code: <a href="https://github.com/qforge-dev/torque" rel="nofollow">https://github.com/qforge-dev/torque</a></p>
]]></description><pubDate>Thu, 06 Nov 2025 17:24:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=45837670</link><dc:creator>michalwarda</dc:creator><comments>https://news.ycombinator.com/item?id=45837670</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45837670</guid></item><item><title><![CDATA[React for Datasets]]></title><description><![CDATA[
<p>Article URL: <a href="https://usetorque.dev">https://usetorque.dev</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45837642">https://news.ycombinator.com/item?id=45837642</a></p>
<p>Points: 1</p>
<p># Comments: 1</p>
]]></description><pubDate>Thu, 06 Nov 2025 17:21:13 +0000</pubDate><link>https://usetorque.dev</link><dc:creator>michalwarda</dc:creator><comments>https://news.ycombinator.com/item?id=45837642</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45837642</guid></item><item><title><![CDATA[Show HN: Torque – A declarative, typesafe DSL for LLM training datasets (MIT)]]></title><description><![CDATA[
<p>We were frustrated with dataset-generation DX: ad-hoc scripts and JSON templates break as flows branch, tool calls drift, and reproducibility is hard.<p>So we built Torque: a schema-first, declarative, fully typesafe DSL for generating conversational datasets.<p>What it is:
- Declarative DSL - compose conversations like React components
- Fully Typesafe - Zod schemas with complete type inference
- Provider Agnostic - generate with any AI SDK provider (OpenAI, Anthropic, DeepSeek, vLLM, llama.cpp, etc.)
- AI-Powered Content - generate realistic, varied datasets automatically without complicated scripts
- Faker Integration - built-in Faker.js with automatic seed synchronization for reproducible fake data
- Cache Optimized - reuses context across generations to reduce costs
- Prompt Optimized - concise structures, prompts, and generation workflow let you use smaller, cheaper models
- Concurrent Generation - beautiful async CLI with real-time progress tracking while generating concurrently<p>Quick example:<p>import { generateDataset, generatedUser, generatedAssistant, oneOf, assistant } from "@qforge/torque";
import { openai } from "@ai-sdk/openai";<p>await generateDataset(
  () => [
    generatedUser({ prompt: "Friendly greeting" }),
    oneOf([assistant({ content: "Hello!" }), generatedAssistant({ prompt: "Respond to greeting" })]),
  ],
  { count: 2, model: openai("gpt-5-mini"), seed: 42 }
);<p>Links
• GitHub: <a href="https://github.com/qforge-dev/torque" rel="nofollow">https://github.com/qforge-dev/torque</a>
• Try in browser: <a href="https://stackblitz.com/github/qforge-dev/torque/tree/main/stackblitz-templates/quick-start" rel="nofollow">https://stackblitz.com/github/qforge-dev/torque/tree/main/st...</a>
• npm: <a href="https://www.npmjs.com/package/@qforge/torque" rel="nofollow">https://www.npmjs.com/package/@qforge/torque</a><p>What feedback would help most:
• What dataset would you like us to create / recreate?
• Do you like the API? Any suggestions on how to change it?<p>License: MIT.<p>Happy to answer questions in the thread!</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45791822">https://news.ycombinator.com/item?id=45791822</a></p>
<p>Points: 3</p>
<p># Comments: 1</p>
]]></description><pubDate>Sun, 02 Nov 2025 17:16:27 +0000</pubDate><link>https://github.com/qforge-dev/torque</link><dc:creator>michalwarda</dc:creator><comments>https://news.ycombinator.com/item?id=45791822</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45791822</guid></item><item><title><![CDATA[New comment by michalwarda in "Show HN: Coyote – Wildly Real-Time AI"]]></title><description><![CDATA[
<p>It can also set reminders and do stuff when you're not looking and talks to you by itself rather than always being triggered by the user message.</p>
]]></description><pubDate>Thu, 23 Oct 2025 22:38:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=45688275</link><dc:creator>michalwarda</dc:creator><comments>https://news.ycombinator.com/item?id=45688275</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45688275</guid></item><item><title><![CDATA[New comment by michalwarda in "Show HN: Coyote – Wildly Real-Time AI"]]></title><description><![CDATA[
<p>Fully understand the WhatsApp part. Do you use any other messengers? Discord, Signal, or something else? We are looking for more "interfaces".<p>As for async, that's exactly what we are trying to "solve". Right now models are built with the expectation that tool results arrive right after tool calls.<p>We are building datasets and a model with an asynchronous interface at its core; you can read more about it here: <a href="https://huggingface.co/qforge/Qwen3-14B-AT" rel="nofollow">https://huggingface.co/qforge/Qwen3-14B-AT</a>.<p>For tooling, right now we are using Pipedream integrations, and we plan to move to an open-source, public solution configurable by users, so you can set up whatever you want.<p>So imagine you have a single chat interface that steers a fleet of coding agents in the background - not by handing off memory, but by navigating the tasks asynchronously.</p>
]]></description><pubDate>Thu, 23 Oct 2025 22:36:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=45688259</link><dc:creator>michalwarda</dc:creator><comments>https://news.ycombinator.com/item?id=45688259</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45688259</guid></item><item><title><![CDATA[New comment by michalwarda in "Show HN: Coyote – Wildly Real-Time AI"]]></title><description><![CDATA[
<p>Right now it can:
- handle real tasks in the background: emails, calendar stuff, research, finding info, organizing data
- chat naturally without feeling like you're talking to a bot
- remember context and keep conversations flowing
- work with integrations (Gmail, Calendar, Docs, Maps, etc.) so it can actually do stuff, not just talk about it
- multi-task: you can ask it multiple things and it handles them in parallel, and if you mispronounce anything, it can update the tasks already in progress.</p>
]]></description><pubDate>Thu, 23 Oct 2025 22:30:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=45688198</link><dc:creator>michalwarda</dc:creator><comments>https://news.ycombinator.com/item?id=45688198</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45688198</guid></item><item><title><![CDATA[New comment by michalwarda in "Show HN: Coyote – Wildly Real-Time AI"]]></title><description><![CDATA[
<p>It's like you're talking to a real person. No stop buttons, no waiting. Interrupt, add details, or change direction anytime. Just like a natural conversation.<p>Context here means any additional information.</p>
]]></description><pubDate>Thu, 23 Oct 2025 18:00:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=45684842</link><dc:creator>michalwarda</dc:creator><comments>https://news.ycombinator.com/item?id=45684842</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45684842</guid></item><item><title><![CDATA[Show HN: Coyote – Wildly Real-Time AI]]></title><description><![CDATA[
<p>Hey all, we just shipped Coyote. It's an AI assistant, but built differently: everything runs async and feels way more natural. You text it, it handles work in the background, and you can keep talking to it. No more stop button.
Instead of creating another app, we put it in WhatsApp (iMessage coming soon), so you can just text it for free and get the feel of it.
The core idea: most AI assistants make you sit there waiting for an answer. Coyote is like texting a friend: you ask them to grab something for you, they say "on it," and you keep chatting while they're out getting it. No awkward silence, no being stuck.
We built it to handle real tasks - emails, calendar stuff, research, whatever - all non-blocking. Everything happens concurrently, so you're never left hanging. We've also worked hard to make it snappy and friendly.
We're still early, but it's live and working. Try it out - happy to answer questions, and would love some feedback. Thanks!</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45683928">https://news.ycombinator.com/item?id=45683928</a></p>
<p>Points: 7</p>
<p># Comments: 11</p>
]]></description><pubDate>Thu, 23 Oct 2025 16:38:55 +0000</pubDate><link>https://getcoyote.app</link><dc:creator>michalwarda</dc:creator><comments>https://news.ycombinator.com/item?id=45683928</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45683928</guid></item><item><title><![CDATA[New comment by michalwarda in "I made Poke.com email me its system prompt lol"]]></title><description><![CDATA[
<p>I was wondering how bitchy the prompt was. Turns out there was other pretty weird stuff in there too.</p>
]]></description><pubDate>Mon, 15 Sep 2025 17:04:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=45252206</link><dc:creator>michalwarda</dc:creator><comments>https://news.ycombinator.com/item?id=45252206</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45252206</guid></item><item><title><![CDATA[The UX of Good Reading]]></title><description><![CDATA[
<p>Article URL: <a href="https://blog.michalprzadka.com/posts/good-reading/">https://blog.michalprzadka.com/posts/good-reading/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44347790">https://news.ycombinator.com/item?id=44347790</a></p>
<p>Points: 3</p>
<p># Comments: 4</p>
]]></description><pubDate>Sun, 22 Jun 2025 15:39:15 +0000</pubDate><link>https://blog.michalprzadka.com/posts/good-reading/</link><dc:creator>michalwarda</dc:creator><comments>https://news.ycombinator.com/item?id=44347790</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44347790</guid></item><item><title><![CDATA[New comment by michalwarda in "Show HN: MCP powered cross platform Superwhisper"]]></title><description><![CDATA[
<p>It supports local and cloud models for transcription, and we're super close to releasing LLM integration with multiple providers, including local ones.</p>
]]></description><pubDate>Fri, 30 May 2025 20:27:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=44139589</link><dc:creator>michalwarda</dc:creator><comments>https://news.ycombinator.com/item?id=44139589</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44139589</guid></item><item><title><![CDATA[New comment by michalwarda in "MCP powered cross platform Siri"]]></title><description><![CDATA[
<p>We've released a new version of qSpeak, v0.1.47.<p>It's our vision of the future of computer use through voice assistance: available everywhere in your system to do whatever you'd like, including advanced agent-based tool usage.<p>We're running a completely free beta right now, so we hope you'll test it out and like it!</p>
]]></description><pubDate>Fri, 30 May 2025 20:12:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=44139513</link><dc:creator>michalwarda</dc:creator><comments>https://news.ycombinator.com/item?id=44139513</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44139513</guid></item><item><title><![CDATA[Show HN: MCP powered cross platform Superwhisper]]></title><description><![CDATA[
<p>Article URL: <a href="https://qspeak.app">https://qspeak.app</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44139512">https://news.ycombinator.com/item?id=44139512</a></p>
<p>Points: 5</p>
<p># Comments: 3</p>
]]></description><pubDate>Fri, 30 May 2025 20:12:04 +0000</pubDate><link>https://qspeak.app</link><dc:creator>michalwarda</dc:creator><comments>https://news.ycombinator.com/item?id=44139512</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44139512</guid></item><item><title><![CDATA[New comment by michalwarda in "Show HN: QSpeak – An alternative for WisprFlow supporting local LLMs and Linux"]]></title><description><![CDATA[
<p>Yea, it's not been easy. We're working on supporting more and more distros.</p>
]]></description><pubDate>Thu, 15 May 2025 15:34:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=43996133</link><dc:creator>michalwarda</dc:creator><comments>https://news.ycombinator.com/item?id=43996133</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43996133</guid></item></channel></rss>