<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: siscia</title><link>https://news.ycombinator.com/user?id=siscia</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 20:17:20 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=siscia" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by siscia in "Launch HN: Freestyle – Sandboxes for Coding Agents"]]></title><description><![CDATA[
<p>It is not clear to me how much CPU I get.<p>"Unlimited" as in 8 vCPUs, which I am then billed for on consumption?</p>
]]></description><pubDate>Tue, 07 Apr 2026 00:35:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47669242</link><dc:creator>siscia</dc:creator><comments>https://news.ycombinator.com/item?id=47669242</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47669242</guid></item><item><title><![CDATA[New comment by siscia in "Show HN: Revise – An AI Editor for Documents"]]></title><description><![CDATA[
<p>There are so many positive comments in this section that I don't mind being a bit rough.<p>I think we can do much better.<p>The workflow of copying into ChatGPT and getting feedback is just the first step, and honestly not that useful.<p>What I would love to see is a tool that makes my writing and thinking clearer.<p>Does this sentence make sense? Does the conclusion I am reaching follow from what I am saying? Is this passage useful, or am I just repeating something I already said? Can I rearrange my wording to make my point clear? Is my wording actually clear? Or am I not making sense?<p>Can I rearrange my essay so that it is simpler to follow?</p>
]]></description><pubDate>Sun, 22 Mar 2026 21:13:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=47482211</link><dc:creator>siscia</dc:creator><comments>https://news.ycombinator.com/item?id=47482211</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47482211</guid></item><item><title><![CDATA[New comment by siscia in "Kotlin creator's new language: talk to LLMs in specs, not English"]]></title><description><![CDATA[
<p>I think you guys are doing pretty much everything right.</p>
]]></description><pubDate>Thu, 12 Mar 2026 21:11:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47357161</link><dc:creator>siscia</dc:creator><comments>https://news.ycombinator.com/item?id=47357161</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47357161</guid></item><item><title><![CDATA[New comment by siscia in "Kotlin creator's new language: talk to LLMs in specs, not English"]]></title><description><![CDATA[
<p>Another trick that I use.<p>I force the code to be almost 100% dependency-injectable.<p>It greatly simplifies writing tests and getting coverage. And I see the LLM handling it very, very well.</p>
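<p>A hypothetical sketch of what I mean by dependency-injectable (not my actual code): the clock is passed in instead of hard-coded, so a test can drive time with a plain fake instead of monkey-patching the time module.

```python
import time

class RateLimiter:
    def __init__(self, max_per_second, clock=time.monotonic):
        self.max_per_second = max_per_second
        self.clock = clock        # injected dependency: zero-arg callable
        self.calls = []           # timestamps of recent calls

    def allow(self):
        now = self.clock()
        # keep only calls from the last second
        self.calls = [t for t in self.calls if now - t < 1.0]
        if len(self.calls) < self.max_per_second:
            self.calls.append(now)
            return True
        return False

# In a test, inject a controllable clock:
fake_now = [0.0]
rl = RateLimiter(2, clock=lambda: fake_now[0])
assert rl.allow() and rl.allow() and not rl.allow()
fake_now[0] = 2.0
assert rl.allow()   # window expired, allowed again
```

The same pattern applies to databases, HTTP clients, filesystems: anything the LLM would otherwise have to mock awkwardly.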
]]></description><pubDate>Thu, 12 Mar 2026 21:09:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47357144</link><dc:creator>siscia</dc:creator><comments>https://news.ycombinator.com/item?id=47357144</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47357144</guid></item><item><title><![CDATA[New comment by siscia in "Kotlin creator's new language: talk to LLMs in specs, not English"]]></title><description><![CDATA[
<p>Yes, it is passable.<p>Good enough that I don't review it.<p>Granted, it is a personal project that I care about only to the point that I want it to work. There is no money on the line. Nothing professional.<p>I believe part of the secret is that I force CC to run the whole test suite after it changes ANY file, using hooks.<p>It makes iteration slower because it kinda forces it to go from green to green. Or better, from red to less red (since we start in red).<p>But overall I am definitely happy with the results.<p>Again, personal projects. Not really professional code.</p>
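<p>For reference, the kind of hook I mean can be sketched like this in .claude/settings.json - treat the exact event name, matcher, and test command as assumptions to check against the Claude Code hooks docs for your setup:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npm test" }
        ]
      }
    ]
  }
}
```

Any file edit then triggers the full suite, which is what enforces the green-to-green (or red-to-less-red) iteration.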
]]></description><pubDate>Thu, 12 Mar 2026 21:05:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=47357105</link><dc:creator>siscia</dc:creator><comments>https://news.ycombinator.com/item?id=47357105</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47357105</guid></item><item><title><![CDATA[New comment by siscia in "Kotlin creator's new language: talk to LLMs in specs, not English"]]></title><description><![CDATA[
<p>What I found most useful is an extra step. Spec to tests, and then red tests to code and green tests.<p>LLMs work on both translation steps. But you end up with a healthy amount of tests.<p>I tagged each test with the ID of the spec, so I get spec-to-test coverage as well.<p>Besides the standard code coverage given by the tests.</p>
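<p>A hypothetical sketch of the tagging scheme (the spec IDs and decorator are illustrative, not my actual setup): each test carries the ID of the spec item it covers, so spec-to-test coverage can be computed alongside ordinary code coverage.

```python
SPECS = {"SPEC-001", "SPEC-002", "SPEC-003"}   # assumed spec IDs

def spec(spec_id):
    """Decorator attaching a spec ID to a test function."""
    def wrap(fn):
        fn.spec_id = spec_id
        return fn
    return wrap

@spec("SPEC-001")
def test_addition():
    assert 1 + 1 == 2

@spec("SPEC-002")
def test_subtraction():
    assert 3 - 1 == 2

def uncovered_specs(tests):
    """Return spec IDs that no test claims to cover."""
    covered = {getattr(t, "spec_id", None) for t in tests}
    return SPECS - covered

print(uncovered_specs([test_addition, test_subtraction]))  # → {'SPEC-003'}
```

With pytest you would more likely use a custom marker for this, but the idea is the same: the spec ID travels with the test.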
]]></description><pubDate>Thu, 12 Mar 2026 20:28:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=47356596</link><dc:creator>siscia</dc:creator><comments>https://news.ycombinator.com/item?id=47356596</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47356596</guid></item><item><title><![CDATA[The Model, the Chat and the Application]]></title><description><![CDATA[
<p>Article URL: <a href="https://slowtechred.substack.com/p/the-model-the-chat-and-the-application">https://slowtechred.substack.com/p/the-model-the-chat-and-the-application</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47168563">https://news.ycombinator.com/item?id=47168563</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 26 Feb 2026 16:49:21 +0000</pubDate><link>https://slowtechred.substack.com/p/the-model-the-chat-and-the-application</link><dc:creator>siscia</dc:creator><comments>https://news.ycombinator.com/item?id=47168563</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47168563</guid></item><item><title><![CDATA[New comment by siscia in "Show HN: Emdash – Open-source agentic development environment"]]></title><description><![CDATA[
<p>I just made an app that reads GitHub issues. If they have a specific tag, an agent in the background creates a plan.<p>If they have another tag, the agent on the server creates a PR, considering the whole issue conversation as context (with the idea that you used the plan above - but technically you don't have to).<p>If you comment on the PR, the agent starts again, loading your comment as context and trying to address it.<p>Everything is already in git and GitHub, so it automatically picks up your CI.<p>It seems simpler, but I am sure I missed something.</p>
]]></description><pubDate>Wed, 25 Feb 2026 05:26:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=47147666</link><dc:creator>siscia</dc:creator><comments>https://news.ycombinator.com/item?id=47147666</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47147666</guid></item><item><title><![CDATA[New comment by siscia in "GitHub Agentic Workflows"]]></title><description><![CDATA[
<p>I am somewhat close to what MSFT and GitHub are doing here, mostly because I believe it is a great idea, and I am experimenting with it myself.<p>Especially from the angle of automatic/continuous improvement (<a href="https://github.github.io/gh-aw/blog/2026-01-13-meet-the-workflows-continuous-simplicity/" rel="nofollow">https://github.github.io/gh-aw/blog/2026-01-13-meet-the-work...</a>)<p>Often code is seen as an artifact that is valuable by itself. This was an incomplete view before, and it is now a completely wrong view.<p>What is valuable is how code encodes the knowledge of the organization building it.<p>But what is even more valuable is that knowledge itself, embedded in the people of the organization.<p>Which is why continuous and automatic improvement of a codebase is so important. We all know that code rots with time/feature requests.<p>But at the same time, abruptly changing the whole codebase architecture destroys the mental model of the people in the organization.<p>What I believe will work is a slow stream of small improvements - a stream that can be digested by the people in the organization.<p>In this context I find it more useful to mix and control deterministic execution with a sprinkle of intelligence on top.
So a deterministic system that figures out what is wrong - with whatever definition of wrong makes sense.
And then LLMs to actually fix the problem, when necessary.</p>
]]></description><pubDate>Sun, 08 Feb 2026 18:28:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=46937052</link><dc:creator>siscia</dc:creator><comments>https://news.ycombinator.com/item?id=46937052</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46937052</guid></item><item><title><![CDATA[New comment by siscia in "Photos capture the breathtaking scale of China's wind and solar buildout"]]></title><description><![CDATA[
<p>What's the hard part?</p>
]]></description><pubDate>Thu, 15 Jan 2026 13:35:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=46632309</link><dc:creator>siscia</dc:creator><comments>https://news.ycombinator.com/item?id=46632309</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46632309</guid></item><item><title><![CDATA[New comment by siscia in "Let's be honest, Generative AI isn't going all that well"]]></title><description><![CDATA[
<p>I think that the wider industry is living right now what coding and software engineering lived through around a year or so ago.<p>Yeah, you could ask ChatGPT or Claude to write code, but it wasn't really there.<p>It takes a while to adopt both the model AND the UI. Software engineers are the first because we are both makers and users.</p>
]]></description><pubDate>Wed, 14 Jan 2026 03:19:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=46611891</link><dc:creator>siscia</dc:creator><comments>https://news.ycombinator.com/item?id=46611891</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46611891</guid></item><item><title><![CDATA[DevX to BotX]]></title><description><![CDATA[
<p>Article URL: <a href="https://slowtechred.substack.com/p/devx-vs-botx">https://slowtechred.substack.com/p/devx-vs-botx</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46547017">https://news.ycombinator.com/item?id=46547017</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 08 Jan 2026 21:52:37 +0000</pubDate><link>https://slowtechred.substack.com/p/devx-vs-botx</link><dc:creator>siscia</dc:creator><comments>https://news.ycombinator.com/item?id=46547017</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46547017</guid></item><item><title><![CDATA[New comment by siscia in "Fighting Fire with Fire: Scalable Oral Exams"]]></title><description><![CDATA[
<p>In general, when you try a new tool or methodology, you tend to start with a small class to see the results first.</p>
]]></description><pubDate>Sat, 03 Jan 2026 16:22:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=46478412</link><dc:creator>siscia</dc:creator><comments>https://news.ycombinator.com/item?id=46478412</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46478412</guid></item><item><title><![CDATA[New comment by siscia in "Tally – A tool to help agents classify your bank transactions"]]></title><description><![CDATA[
<p>This is not an AI tool; this is a CLI that has very verbose output and documentation.<p>It can be used by humans or by AI agents.<p>I have experimented the same way with other mechanisms, and CLIs are as effective - if not more effective - than MCP.<p>Granted, having access to AI, I would use AI to run it. But nothing is stopping a manual, human-centric use.<p>I believe more tools should be written like that.</p>
]]></description><pubDate>Sat, 03 Jan 2026 15:38:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=46477861</link><dc:creator>siscia</dc:creator><comments>https://news.ycombinator.com/item?id=46477861</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46477861</guid></item><item><title><![CDATA[New comment by siscia in "Tally – A tool to help agents classify your bank transactions"]]></title><description><![CDATA[
<p>You don't need AI for it.<p>You can just install the tool and use it. It is a CLI with very verbose output. Verbose output is good for both humans and AI.</p>
]]></description><pubDate>Sat, 03 Jan 2026 15:34:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=46477824</link><dc:creator>siscia</dc:creator><comments>https://news.ycombinator.com/item?id=46477824</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46477824</guid></item><item><title><![CDATA[New comment by siscia in "Fighting Fire with Fire: Scalable Oral Exams"]]></title><description><![CDATA[
<p>The perspective from an educator is quite concerning indeed.<p>Students are very simply NOT doing the work that is required to learn.<p>Before LLMs, homework was a great way to force students to approach the material. Students did not have any other way to get an answer, so they were forced to study and come up with an answer themselves. They could always copy from classmates, but that was viewed quite negatively.<p>LLMs change this completely. Any kind of homework you could assign to an undergraduate class is now completed in less than a second, for free, by LLMs.<p>We are starting to see PERFECT homework submitted by students who could not get a 50% grade in class. Overall grades went down.<p>This is a common pattern with all the educators I have been talking with. Not a single one has a different experience.<p>And I do understand students. They are busy, they may not feel engaged by all their classes, and LLMs are an all too fast solution for getting homework done and freeing up some time.<p>But it is not helping them.<p>Solutions like this are there to force students to put the correct amount of work into their education.<p>And I would love it if all of this were not necessary. But it is.<p>I come from an engineering school in Europe - we simply did not have homework. We had lectures and one big final exam. Courses in which only 10% of the class would pass were not uncommon.<p>But education today, especially in the US, is different.<p>This is not forcing students to use LLMs. We are trying to force students to think and do the right thing for themselves.<p>And I know it sounds very paternalistic - but if you have better ideas, I am open.</p>
]]></description><pubDate>Sat, 03 Jan 2026 04:24:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=46472783</link><dc:creator>siscia</dc:creator><comments>https://news.ycombinator.com/item?id=46472783</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46472783</guid></item><item><title><![CDATA[New comment by siscia in "Fighting Fire with Fire: Scalable Oral Exams"]]></title><description><![CDATA[
<p>I created something similar, but instead of a final oral examination, we do homework.<p>The student is supposed to submit a whole conversation with an LLM.<p>The student is prompted to answer a question or solve a problem, and the LLM is there to assist.
The LLM is instructed to never reveal the answer.<p>More interesting is the concept that the whole conversation is available to the instructor for grading.
So if the LLM makes a mistake, or gives away the solution, or if the student prompt-engineers around it, it is all there and the instructor can take the necessary corrective measures.<p>87% of the students quite liked it, and we are looking forward to doubling the number of students using it next quarter.<p>Overall, we are looking for more instructors to use it. So if you are interested, please get in touch.<p>More info at: <a href="https://llteacher.blogspot.com/" rel="nofollow">https://llteacher.blogspot.com/</a></p>
]]></description><pubDate>Sat, 03 Jan 2026 03:21:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=46472492</link><dc:creator>siscia</dc:creator><comments>https://news.ycombinator.com/item?id=46472492</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46472492</guid></item><item><title><![CDATA[New comment by siscia in "Ask HN: How can I get better at using AI for programming?"]]></title><description><![CDATA[
<p>I will be crucified for this, but I think you are doing it wrong.<p>I would split it into 2 steps.<p>First, just move it to Svelte, maintain the same functionality, and ideally wrap it in some tests. As mentioned, you want something that can be used as a pass/no-pass filter. As in: yes, the code did not change the functionality.<p>Then, apply another pass from bad-quality Svelte to good-quality Svelte.
Here the trick is that "good quality" is quite subjective. I found the models not quite able to grasp what "good quality" means in a codebase.<p>For the second pass, ideally you would feed in an example of good modules from your codebase to follow, plus a description of what you think is important.</p>
]]></description><pubDate>Sat, 13 Dec 2025 20:49:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=46257880</link><dc:creator>siscia</dc:creator><comments>https://news.ycombinator.com/item?id=46257880</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46257880</guid></item><item><title><![CDATA[New comment by siscia in "Show HN: Burner-Query S3 logs without cold starts/egress fees(Rust+WASM)"]]></title><description><![CDATA[
<p>I am toying with something similar.<p>However, my approach would be to use DuckDB and S3 over Lambda.<p>That leaves many of the concerns to the infrastructure: basically no OOM, and no servers to manage.</p>
]]></description><pubDate>Sun, 30 Nov 2025 14:09:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=46096784</link><dc:creator>siscia</dc:creator><comments>https://news.ycombinator.com/item?id=46096784</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46096784</guid></item><item><title><![CDATA[New comment by siscia in "Implications of AI to schools"]]></title><description><![CDATA[
<p>With my partner, we have been working to invert the overall model.<p>She started grading the conversations that the students have with LLMs.<p>From the questions that the students ask, it is obvious who knows the material and who is struggling.<p>We do have a custom setup, in which she creates a homework assignment. There is a custom prompt to stop the LLM from answering the homework question. But that's pretty much it.<p>The results seem promising, with students spending 30 minutes or so going back and forth with the LLM.<p>If any educator wants to try it or is interested in more information, let me know and we can see how we can collaborate.</p>
]]></description><pubDate>Tue, 25 Nov 2025 06:47:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=46043012</link><dc:creator>siscia</dc:creator><comments>https://news.ycombinator.com/item?id=46043012</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46043012</guid></item></channel></rss>