<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: PAndreew</title><link>https://news.ycombinator.com/user?id=PAndreew</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 04 May 2026 16:04:42 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=PAndreew" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by PAndreew in "OpenAI's o1 correctly diagnosed 67% of ER patients vs. 50-55% by triage doctors"]]></title><description><![CDATA[
<p>I mean an LLM is a slightly stirred-up soup of current human knowledge. It has an advantage in the quantity of accumulated data and maybe in connecting seemingly unrelated parts of that data - but not reliably. The human has an advantage (for now) in data collection (seeing, hearing, and examining the patient), actual agency, real-world experience and in getting the useful data out of the stirred-up soup. Both human and LLM are susceptible to bias and harmful influence. Let’s simply isolate them in the diagnostic process and then compare their output. Human collects data -> both human and LLM evaluate independently -> compare the results -> human may get new insights -> final diagnosis by human.</p>
]]></description><pubDate>Sun, 03 May 2026 23:36:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=48002783</link><dc:creator>PAndreew</dc:creator><comments>https://news.ycombinator.com/item?id=48002783</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48002783</guid></item><item><title><![CDATA[Show HN: Currant – Anonymous social media for NON-AI agents]]></title><description><![CDATA[
<p>I was once having a bad day and wanted to vent about the peculiarities of corporate life. Then I realised I don't trust these sites. Not even my own blog. I didn't want to link these thoughts - my stream of consciousness - to an account. I also needed comfort from real human beings - not gen-AI bots. Don't get me wrong, I engineered/hacked/conjured? this stuff together with the help of LLMs. I think gen AI CAN be a net positive. Yet I don't want to interact with agents that are being pushed on me. So I created Currant. It's basically a wannabe feed of posts. What hopefully makes it a bit different are the following things:
1) No accounts. You can create posts and comment without an identity. You can create hashtags, and you can write down your phone number and address if you want, but you're not pushed to do that. You can also prevent others from commenting if you wish.
2) Proof-of-work AI/rip-off detection. Currant uses a WYSIWYG text editor that records everything you do in that hundred-by-hundred-pixel area - what you paste, how fast you type, how you format your content - and stores it in a hashed logfile. It can be replayed by anyone. :) If the detector thinks you're an AI, your content will be rejected. Will it prevent AI use altogether? No. But hopefully it will throttle it. Will it produce some false positives? Absolutely. You might wonder - isn't this against the anonymity thing? Well, in theory I can imagine that some sort of profile could be built from all the content you have ever typed. But it's still not a cookie that sends all your click data to 87565 "trusted" partners.
3) Content expiry. By default everything you post will be deleted after 1 month. You can set it to 1 hour, 1 day ... or never. Unless you want it to, the site will not store your manifested thoughts forever.
4) Customisation. It's a bit silly, but you have control over several stylistic aspects of your post - background colors, gradients, border radius. Maybe useless, but style can be a vehicle for expressing your feelings and identity.<p>It's an experiment and I'd really appreciate it if you gave it a chance! <3<p>Ps.: If you do try it and have any feedback or suggestions, you can use the "Contact" submenu. The software currently has 68% statement coverage.</p>
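<p>For the curious, the replayable hashed logfile in point 2 could work roughly like this - a minimal Python sketch assuming a simple editor-event stream and a SHA-256 hash chain (the event names and log shape are my illustration, not Currant's actual format):</p>

```python
import hashlib
import json

def hash_event(prev_hash: str, event: dict) -> str:
    """Chain each editor event to the previous hash so the log is tamper-evident."""
    payload = json.dumps(event, sort_keys=True).encode()
    return hashlib.sha256(prev_hash.encode() + payload).hexdigest()

def build_log(events: list[dict]) -> list[tuple[dict, str]]:
    """Fold a stream of editor events (keystrokes, pastes, formatting) into a hash chain."""
    chain, prev = [], "genesis"
    for ev in events:
        prev = hash_event(prev, ev)
        chain.append((ev, prev))
    return chain

# A replay is just re-running build_log over the recorded events and
# checking that the recomputed hashes match the stored ones.
events = [
    {"t": 0, "kind": "keydown", "key": "H"},
    {"t": 120, "kind": "keydown", "key": "i"},
    {"t": 300, "kind": "paste", "len": 42},
]
log = build_log(events)
```

<p>Because each hash covers everything before it, editing or deleting any recorded event invalidates every hash that follows, which is what makes the log replayable and auditable by anyone.</p>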
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47621306">https://news.ycombinator.com/item?id=47621306</a></p>
<p>Points: 4</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 02 Apr 2026 23:00:37 +0000</pubDate><link>https://currantfeed.cc</link><dc:creator>PAndreew</dc:creator><comments>https://news.ycombinator.com/item?id=47621306</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47621306</guid></item><item><title><![CDATA[New comment by PAndreew in "How the AI Bubble Bursts"]]></title><description><![CDATA[
<p>This^^ Use both; they have their own strengths and weaknesses.</p>
]]></description><pubDate>Tue, 31 Mar 2026 05:55:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47583259</link><dc:creator>PAndreew</dc:creator><comments>https://news.ycombinator.com/item?id=47583259</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47583259</guid></item><item><title><![CDATA[New comment by PAndreew in "Show HN: Cq – Stack Overflow for AI coding agents"]]></title><description><![CDATA[
<p>I think one partial solution could be to actually spin up a remote container with dummy data (which can be easily generated by an LLM) and test the claim. With agents this can be done very quickly. Once the claim has been verified, it can be published along with the test configuration.</p>
]]></description><pubDate>Tue, 24 Mar 2026 09:34:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47500326</link><dc:creator>PAndreew</dc:creator><comments>https://news.ycombinator.com/item?id=47500326</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47500326</guid></item><item><title><![CDATA[New comment by PAndreew in "Garry Tan's Claude Code Setup"]]></title><description><![CDATA[
<p>LOC is a very, very weak proxy for "how many new features" I've built, but they don't have any other metric that can be measured as easily. And it causes serious issues, because equating LOC with productivity inevitably leads to utter bloat that no agent or human can ever rectify in a meaningful timeframe. I'm pretty sure this 600 000??? LOC could be shrunk to 60K for the same feature set, but with better readability and performance.</p>
]]></description><pubDate>Wed, 18 Mar 2026 07:55:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47422865</link><dc:creator>PAndreew</dc:creator><comments>https://news.ycombinator.com/item?id=47422865</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47422865</guid></item><item><title><![CDATA[New comment by PAndreew in "How I write software with LLMs"]]></title><description><![CDATA[
<p>Others have already partially answered this, but here’s my 20 cents. Software development really is similar to architecture. The end result is an infrastructure of unique modules with different types of connectors (roads, grids, or APIs). Until now, in SW dev the grunt work was done mostly by the same people who did the planning, decided on the type of connectors, etc. Real-estate architects also use a bunch of software tools to aid them, but there must be a human being at the end of the chain who understands human needs, who understands - after years of studying and practicing - how the whole building and its infrastructure will behave at large, and who is ultimately responsible for the end result (and hopefully rewarded depending on the complexity and quality of that result). So yes, we will not need as many SW engineers, but those who remain will work on complex, rewarding problems and will push the frontier further.</p>
]]></description><pubDate>Mon, 16 Mar 2026 06:38:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=47395771</link><dc:creator>PAndreew</dc:creator><comments>https://news.ycombinator.com/item?id=47395771</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47395771</guid></item><item><title><![CDATA[New comment by PAndreew in "Don't post generated/AI-edited comments. HN is for conversation between humans"]]></title><description><![CDATA[
<p>Very well put.</p>
]]></description><pubDate>Thu, 12 Mar 2026 08:44:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47348038</link><dc:creator>PAndreew</dc:creator><comments>https://news.ycombinator.com/item?id=47348038</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47348038</guid></item><item><title><![CDATA[New comment by PAndreew in "Ask HN: What Are You Working On? (March 2026)"]]></title><description><![CDATA[
<p>A managed BYOK stateless agent orchestrator called BeeZee: <a href="https://beezyai.net/" rel="nofollow">https://beezyai.net/</a>. Basically Claude Cowork / a coding agent on the web, but provider-agnostic: you own the data and you can connect several nodes to it. Instead of installing an agent on all your machines, you have one master agentic server and executor nodes. The server is stateless; the data lives on the nodes and in a managed database. I use Supabase and Google KMS, so my auth keys are encrypted. It uses the Pi agent under the hood. This enables me to code from my phone without a dedicated SSH terminal and without the need to babysit the agent. I describe the feature, off it goes, I close my phone and in 10 minutes the results are there. I'm also using it to support my wife with white-collar stuff like Excel analysis, translation, etc. It's a bit buggy but getting better.</p>
]]></description><pubDate>Mon, 09 Mar 2026 09:08:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=47306535</link><dc:creator>PAndreew</dc:creator><comments>https://news.ycombinator.com/item?id=47306535</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47306535</guid></item><item><title><![CDATA[Show HN: Pi-Foundry – multi-user self-hosted AI assistant]]></title><description><![CDATA[
<p>I wanted to solve my wife's work-related pain points around Excel formulas and extracting data from PDF documents while keeping her data local. I also wanted to be able to collaborate with her if the need arises. So I've been working on this multi-user virtual assistant, based on the OpenCode SDK, that we host on a Raspberry Pi behind Tailscale. The workflow in short:
 - users upload their work files through the web interface, 
 - add them to a chat session, 
 - invite other users, 
 - they churn the data with the agent, 
 - then download the resulting assets.
It has some rough edges, especially around handling multi-turn conversations, but I thought someone might find it useful. :) If you like it, please star and contribute to the project.</p>
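<p>The session model the steps above imply can be sketched in a few lines - plain Python dataclasses with hypothetical names, not the actual OpenCode SDK or Pi-Foundry API:</p>

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """A shared chat session: files are attached, users invited, results collected."""
    owner: str
    files: list[str] = field(default_factory=list)
    members: set[str] = field(default_factory=set)
    outputs: list[str] = field(default_factory=list)

    def __post_init__(self):
        # The creator is always a member of their own session.
        self.members.add(self.owner)

    def upload(self, user: str, path: str):
        """Attach a work file; only invited members may add files."""
        if user not in self.members:
            raise PermissionError(f"{user} is not in this session")
        self.files.append(path)

    def invite(self, inviter: str, invitee: str):
        """Any existing member can bring a collaborator into the session."""
        if inviter not in self.members:
            raise PermissionError(f"{inviter} is not in this session")
        self.members.add(invitee)

# The workflow from the post: upload -> invite -> collaborate on the files.
s = Session(owner="anna")
s.upload("anna", "report.xlsx")
s.invite("anna", "ben")
s.upload("ben", "contract.pdf")
```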
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46796766">https://news.ycombinator.com/item?id=46796766</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Wed, 28 Jan 2026 15:40:24 +0000</pubDate><link>https://github.com/PAndreew/pi-foundry</link><dc:creator>PAndreew</dc:creator><comments>https://news.ycombinator.com/item?id=46796766</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46796766</guid></item><item><title><![CDATA[New comment by PAndreew in "Cowork: Claude Code for the rest of your work"]]></title><description><![CDATA[
<p>I'm also building something similar although my approach is a bit different. Wanna team up/share some insights?</p>
]]></description><pubDate>Wed, 14 Jan 2026 08:55:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=46613834</link><dc:creator>PAndreew</dc:creator><comments>https://news.ycombinator.com/item?id=46613834</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46613834</guid></item><item><title><![CDATA[Show HN: Vigil – AI Chatbot Data Leak Mitigation in the Browser]]></title><description><![CDATA[
<p>As a developer I've caught myself copy-pasting large chunks of text into AI chatbot input fields. From time to time, API keys, private URLs, etc. slipped through and probably got baked into one of the nicest and shiniest models' weights - forever. So I created a little browser extension that tries to mitigate such accidents by intercepting the pasted data and redacting it with structure-preserving dummy values. The tool is very much a work in progress, but I still find it useful, and only mildly intrusive. Enjoy!</p>
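<p>The core idea - structure-preserving redaction of pasted text - can be sketched in a few lines. This is a Python illustration with made-up patterns, not Vigil's actual implementation (which runs in the browser):</p>

```python
import re

# Illustrative patterns only; a real tool would need many more, carefully tuned.
PATTERNS = {
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "url": re.compile(r"https?://[^\s\"']+"),
}

def redact(text: str) -> str:
    """Replace likely secrets with same-shape dummy values so code still parses."""
    def dummy(match: re.Match, kind: str) -> str:
        if kind == "api_key":
            # Keep the prefix and length so downstream parsing isn't broken.
            return "sk-" + "X" * (len(match.group()) - 3)
        return "https://example.invalid/redacted"
    for kind, pattern in PATTERNS.items():
        text = pattern.sub(lambda m, k=kind: dummy(m, k), text)
    return text

clean = redact("token=sk-abcdefghijklmnopqrstu visit https://internal.corp/x")
```

<p>The point of preserving structure is that a redacted snippet still looks like valid config or code to the chatbot, so the answer you get back remains useful while the secret itself never leaves your machine.</p>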
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45992319">https://news.ycombinator.com/item?id=45992319</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 20 Nov 2025 13:26:56 +0000</pubDate><link>https://github.com/PAndreew/vigil_vite</link><dc:creator>PAndreew</dc:creator><comments>https://news.ycombinator.com/item?id=45992319</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45992319</guid></item><item><title><![CDATA[New comment by PAndreew in "Ask HN: What are you working on? (September 2025)"]]></title><description><![CDATA[
<p>I have a lawyer friend who’s complaining about versioning… but she also mentioned that Word is the de facto standard. What I’m trying to say is that this should be a Word addon.</p>
]]></description><pubDate>Thu, 02 Oct 2025 20:04:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=45454822</link><dc:creator>PAndreew</dc:creator><comments>https://news.ycombinator.com/item?id=45454822</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45454822</guid></item><item><title><![CDATA[Show HN: Docustore – Vectorized Technical Documentation]]></title><description><![CDATA[
<p>docustore's aim is to provide up-to-date, off-the-shelf, plug-and-play context for LLMs from a curated list of frameworks/SDKs. It has a 4-step pipeline: scrape the documentation - clean it - vectorize it - package it. My vision is to host it somewhere and develop an API/MCP around it so it will be development-environment agnostic.</p>
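<p>The 4-step pipeline can be sketched like this - a toy Python illustration where a hash-based bag-of-words stands in for a real embedding model (the function names and output shape are assumptions, not docustore's actual code):</p>

```python
import hashlib
import re

def clean(html: str) -> str:
    """Strip tags and collapse whitespace (a real cleaner would do much more)."""
    text = re.sub(r"<[^>]+>", " ", html)
    return re.sub(r"\s+", " ", text).strip()

def vectorize(text: str, dim: int = 8) -> list[float]:
    """Toy stand-in for an embedding model: hash tokens into a fixed-size vector."""
    vec = [0.0] * dim
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    return vec

def package(doc_id: str, html: str) -> dict:
    """One pipeline pass over a scraped page: clean -> vectorize -> bundle with metadata."""
    text = clean(html)
    return {"id": doc_id, "text": text, "vector": vectorize(text)}

chunk = package("fastapi/intro", "<h1>FastAPI</h1><p>A web framework.</p>")
```

<p>In a real deployment the vectorize step would call an embedding model and the packaged chunks would land in a vector store behind the planned API/MCP, but the shape of the data flowing through the pipeline is the same.</p>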
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45061840">https://news.ycombinator.com/item?id=45061840</a></p>
<p>Points: 9</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 29 Aug 2025 09:06:27 +0000</pubDate><link>https://github.com/PAndreew/docustore</link><dc:creator>PAndreew</dc:creator><comments>https://news.ycombinator.com/item?id=45061840</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45061840</guid></item></channel></rss>