<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: BrandiATMuhkuh</title><link>https://news.ycombinator.com/user?id=BrandiATMuhkuh</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 20 Apr 2026 05:39:47 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=BrandiATMuhkuh" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by BrandiATMuhkuh in "Show HN: Baton – A desktop app for developing with AI agents"]]></title><description><![CDATA[
<p>Very cool. And congrats on the launch.<p>I started using Superset two days ago, which seems similar. It's pretty nice: <a href="https://superset.sh">https://superset.sh</a><p>FYI, here are some things I'd like such a tool to have:
- notification when an agent is done
- each tab/space has its own terminal, browser, and agent
- each tab/space runs in a sandbox (e.g. Docker)
- each tab/space can run my dev server, but it must not conflict with the other dev servers running
- each tab/space has an MCP server for the built-in browser<p>Nice to have:
- remote access to my machine/tabs
- being able to take screenshots</p>
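To sketch the no-port-conflict point: each tab/space could derive a stable dev-server port from its own name, so parallel workspaces never fight over the same port. All names and numbers below are made up, just to illustrate the idea:

```python
import hashlib

BASE_PORT = 3000   # assumed start of a free port range
PORT_SPAN = 1000   # how many workspaces the range can separate

def dev_server_port(workspace: str) -> int:
    """Derive a stable, per-workspace dev-server port so parallel
    tabs/spaces don't collide (hash collisions remain possible)."""
    digest = int(hashlib.sha256(workspace.encode()).hexdigest(), 16)
    return BASE_PORT + (digest % PORT_SPAN)
```

The sandbox (e.g. Docker) would then map that port out, so each tab's dev server stays reachable without manual bookkeeping.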
]]></description><pubDate>Wed, 01 Apr 2026 14:26:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=47601398</link><dc:creator>BrandiATMuhkuh</dc:creator><comments>https://news.ycombinator.com/item?id=47601398</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47601398</guid></item><item><title><![CDATA[New comment by BrandiATMuhkuh in "NanoClaw Adopts OneCLI Agent Vault"]]></title><description><![CDATA[
<p>I'm curious how you manage HTTPS. If OneCLI intercepts all traffic from the agent (harness/tools/...) and then replaces parts with other data, it should break HTTPS.<p>Or is that a man-in-the-middle "attack", where users have to install a certificate?</p>
]]></description><pubDate>Tue, 24 Mar 2026 21:09:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=47509360</link><dc:creator>BrandiATMuhkuh</dc:creator><comments>https://news.ycombinator.com/item?id=47509360</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47509360</guid></item><item><title><![CDATA[New comment by BrandiATMuhkuh in "Show HN: Local-First Linux MicroVMs for macOS"]]></title><description><![CDATA[
<p>Very cool. I was looking for something like this for a new project of mine. (I'm working on a project that is like a marriage of Retool and OpenClaw. It's used by SMEs to quickly build in-house apps.)</p>
]]></description><pubDate>Sun, 22 Feb 2026 22:58:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=47115724</link><dc:creator>BrandiATMuhkuh</dc:creator><comments>https://news.ycombinator.com/item?id=47115724</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47115724</guid></item><item><title><![CDATA[New comment by BrandiATMuhkuh in "Rari – Rust-powered React framework"]]></title><description><![CDATA[
<p>Pretty cool work!
Question: what's the difference compared to using Bun? I'm currently using Bun's React frontend/backend system, and AFAIK it's written in Zig.<p>The part that I don't see covered is the 'use server' directive.</p>
]]></description><pubDate>Fri, 13 Feb 2026 08:35:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47000405</link><dc:creator>BrandiATMuhkuh</dc:creator><comments>https://news.ycombinator.com/item?id=47000405</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47000405</guid></item><item><title><![CDATA[New comment by BrandiATMuhkuh in "Ask HN: Who wants to be hired? (February 2026)"]]></title><description><![CDATA[
<p><pre><code>   Location: Linz/Austria/EU
   Remote: OK (Any timezone OK)
   Willing to relocate: NO 
   Technologies: AI (LLM, RAG, Agents, Voice, AI-SDK, Mastra), Typescript, React, Next.js, Supabase, Firebase, Postgres, GCP
   Résumé: https://brandstetter.io/Resume_Brandstetter_Jurgen.pdf
   Email: j@brandstetter.io
   LinkedIn: https://www.linkedin.com/in/j-brandstetter
   Salary: USD 150k / year
</code></pre>
I'm an AI engineer and senior full-stack developer aspiring to a team/product lead or CTO-like position. For ~7 years, I was co-founder and CTO of an ed-tech company called amy.app; my co-founder and I scaled the company to about 25 employees. Before that, I earned a PhD in Human-Robot Interaction in NZ and at Oxford. After winding down my company, I've spent the past year contracting to learn and apply as much as possible about AI solutions: agentic enterprise RAG search, LLM-based email automation, AI call agents, invoice processing.
Currently, I'm designing and building an agentic enterprise search engine for the AEC industry, and working with a YC company on IMS automation.<p>I'm very product/customer-focused and pragmatic. I like to move super fast (not only because of Cursor :D).<p>My dream environment is an early-stage, fully remote startup.<p>PS: I have a family, which means compensation mainly via shares isn't an option for me.</p>
]]></description><pubDate>Mon, 09 Feb 2026 16:16:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=46946892</link><dc:creator>BrandiATMuhkuh</dc:creator><comments>https://news.ycombinator.com/item?id=46946892</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46946892</guid></item><item><title><![CDATA[New comment by BrandiATMuhkuh in "Ask HN: Share your personal website"]]></title><description><![CDATA[
<p><a href="https://brandstetter.io/" rel="nofollow">https://brandstetter.io/</a>
Super outdated (last updated ~10 years ago). Still counts.</p>
]]></description><pubDate>Wed, 14 Jan 2026 22:10:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=46624431</link><dc:creator>BrandiATMuhkuh</dc:creator><comments>https://news.ycombinator.com/item?id=46624431</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46624431</guid></item><item><title><![CDATA[New comment by BrandiATMuhkuh in "Launch HN: Poly (YC S22) – Cursor for Files"]]></title><description><![CDATA[
<p>Congratulations on the launch.<p>One customer type that will absolutely love this is architecture studios. Basically all the data they generate (drawings, plans, presentations) lives either on SharePoint or NFS (or some other file system like ACC).<p>If you can provide a solution they can host in their private cloud (air-gapped), you have an enterprise deal.<p>Why air-gapped?
- Many documents they generate are secret (military, or competitive: think the next tallest tower in the world).
- Copyright: for example, the German standards body DIN does not allow you to take their documents and feed them into an LLM (I'm not talking about training, just RAG) unless you keep them in a private cloud like Azure.<p>How do I know? Because I worked on that problem @howie.systems</p>
]]></description><pubDate>Fri, 21 Nov 2025 07:50:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=46002187</link><dc:creator>BrandiATMuhkuh</dc:creator><comments>https://news.ycombinator.com/item?id=46002187</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46002187</guid></item><item><title><![CDATA[New comment by BrandiATMuhkuh in "Should LLMs just treat text content as an image?"]]></title><description><![CDATA[
<p>I'm using this approach quite often. I don't know of any documents created by humans for humans that have no formatting.
The formatting, position, etc. are usually an important part of the document.<p>Since the first multimodal LLMs came out, I've been using this approach whenever I deal with documents. It makes the code much simpler, because everything is an image, and it's surprisingly robust.<p>It also works for embeddings (Cohere Embed v4).</p>
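A minimal sketch of the bookkeeping side of "everything is an image", assuming pages are already rendered to image bytes (the actual multimodal embedding call is omitted, and all names here are made up):

```python
import hashlib
from dataclasses import dataclass

@dataclass
class PageRecord:
    doc_id: str
    page: int        # 1-based page number in the source document
    sha256: str      # hash of the rendered page image, used as a dedupe key

def index_pages(doc_id: str, page_images: list[bytes]) -> list[PageRecord]:
    """One record (and, in a real pipeline, one embedding call) per unique
    page image; identical renders are deduplicated via their hash."""
    seen: set[str] = set()
    records: list[PageRecord] = []
    for page_no, image in enumerate(page_images, start=1):
        digest = hashlib.sha256(image).hexdigest()
        if digest in seen:
            continue  # same page image already indexed
        seen.add(digest)
        records.append(PageRecord(doc_id, page_no, digest))
    return records
```

Because every input is reduced to "a list of page images", the same loop handles PDFs, scans, and screenshots alike — that's where the simplicity comes from.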
]]></description><pubDate>Mon, 27 Oct 2025 19:01:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=45724958</link><dc:creator>BrandiATMuhkuh</dc:creator><comments>https://news.ycombinator.com/item?id=45724958</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45724958</guid></item><item><title><![CDATA[New comment by BrandiATMuhkuh in "Launch HN: Grapevine (YC S19) – A company GPT that actually works"]]></title><description><![CDATA[
<p>Congratulations on the launch.<p>I was recently trying to tackle the same problem (@howie.systems). The two hardest problems we faced were ACLs and large files (and large volumes).<p>How did you solve the ACL part? I worked with a customer that had 200k pdf/image/dwg files on SharePoint and another 1M on Samba. It took about a week to sync it all and keep tabs on each employee's access rights.<p>How do you handle unpredictably large files: a 2,000-page PDF, maybe some A0 sheets in the mix, or 4 GB PowerPoint presentations?<p>PS: great fan of Gather.
PPS: say hi to Clinton from me (amy.app) if he's still around. He was our mentor back in New Zealand at the Flux accelerator (2016).</p>
]]></description><pubDate>Mon, 06 Oct 2025 21:36:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=45496594</link><dc:creator>BrandiATMuhkuh</dc:creator><comments>https://news.ycombinator.com/item?id=45496594</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45496594</guid></item><item><title><![CDATA[New comment by BrandiATMuhkuh in "Launch HN: Slashy (YC S25) – AI that connects to apps and does tasks"]]></title><description><![CDATA[
<p>Congratulations on the launch.
I think it's a smart move not to use MCP here, because your LLM really needs to understand how the different integrations work together.<p>Question: you say you do semantic search. If I understand correctly, that means you must somehow index all the data (Gmail, GDrive, ...); otherwise the AI would have to "download/scan" thousands of files each time you ask a question.
So how do you do the indexing?<p>For some background: I'm working on something similar. My clients are architects. They have about 300k files for just one building, plus 50k issues and a couple of thousand emails. And don't forget all the subcontractors.<p>Would Slashy be able to handle that?</p>
]]></description><pubDate>Thu, 04 Sep 2025 20:40:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=45131976</link><dc:creator>BrandiATMuhkuh</dc:creator><comments>https://news.ycombinator.com/item?id=45131976</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45131976</guid></item><item><title><![CDATA[New comment by BrandiATMuhkuh in "Ask HN: Who is hiring? (September 2025)"]]></title><description><![CDATA[
<p>Howie.systems (EU VC funded) | Full-Stack (AI) Engineer | ONSITE Vienna, Austria | Full-time | Start date: ASAP<p>We're building Howie.systems, an AI platform for the architecture/engineering/construction industry. Our focus: automating knowledge extraction and retrieval from large document sets (100k+ files, multi-tenant, multi-user, with Supabase/Postgres + pgvector under the hood). We're fully funded by international investors and are expanding our team in Vienna.<p>Role: We're looking for a Full-Stack Engineer with a love for strong typing and AI.<p>- Must: Next.js, Supabase, a very TypeScript-safe mindset<p>- Super plus: experience with AI frameworks (Vercel AI SDK, Mastra, LangChain, etc.)<p>- Should have good experience with AI coding tools like Cursor and Claude Code<p>You'll help us scale our ingestion, RAG, and AI interaction layers into production-grade tools for enterprise customers.<p>What we offer:<p>- Competitive salary + equity possible<p>- Onsite in Vienna (no remote option)<p>- Small, hands-on team with big international backing<p>- Work on a fully modern stack (TS, Supabase, Vercel, AI frameworks)<p>If this sounds like you, send your CV and a cover letter to <i>contact@howie.systems</i>, detailing your experience and why you think you'd be a fit for the role.</p>
]]></description><pubDate>Mon, 01 Sep 2025 17:05:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=45094499</link><dc:creator>BrandiATMuhkuh</dc:creator><comments>https://news.ycombinator.com/item?id=45094499</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45094499</guid></item><item><title><![CDATA[GeoAI for the Modern JavaScript Developer]]></title><description><![CDATA[
<p>Article URL: <a href="https://docs.geobase.app/geoai-live">https://docs.geobase.app/geoai-live</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45019712">https://news.ycombinator.com/item?id=45019712</a></p>
<p>Points: 4</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 25 Aug 2025 22:10:02 +0000</pubDate><link>https://docs.geobase.app/geoai-live</link><dc:creator>BrandiATMuhkuh</dc:creator><comments>https://news.ycombinator.com/item?id=45019712</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45019712</guid></item><item><title><![CDATA[New comment by BrandiATMuhkuh in "Git-Annex"]]></title><description><![CDATA[
<p>Does this also work if I have data on SharePoint, Dropbox, etc. and want to pull it (sync it with my local machine)?<p>My use case is mostly ETL-related: I want to pull all of a customer's data (enterprise customers) so I can process it, but also keep the data updated, hence the pull.</p>
]]></description><pubDate>Mon, 25 Aug 2025 12:38:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=45013193</link><dc:creator>BrandiATMuhkuh</dc:creator><comments>https://news.ycombinator.com/item?id=45013193</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45013193</guid></item><item><title><![CDATA[New comment by BrandiATMuhkuh in "Show HN: Chroma Cloud – serverless search database for AI"]]></title><description><![CDATA[
<p>I'll for sure take a deeper look.
Ingestion has been by far the biggest pain and the least fun.
Those infra parts hold us back from the cool things: building agents/search.</p>
]]></description><pubDate>Wed, 20 Aug 2025 06:32:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=44959237</link><dc:creator>BrandiATMuhkuh</dc:creator><comments>https://news.ycombinator.com/item?id=44959237</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44959237</guid></item><item><title><![CDATA[New comment by BrandiATMuhkuh in "Show HN: Chroma Cloud – serverless search database for AI"]]></title><description><![CDATA[
<p>Chroma looks cool. Congratulations on the Cloud version.<p>For my client I've "built" a similar setup with Supabase + pgvector, and I give the AI direct SQL access.<p>Here is the hard part:
Just last week I indexed 1.2 million documents for one project of one customer.
They have PDFs with 1,600 pages or PPTX files >4 GB, plus lots of 3D/2D architecture drawings in proprietary formats.<p>The difficulties I see are:
- getting the data in (ETL); this takes days and is fragile
- keeping RBAC intact
- Supabase/pgvector needs lots of resources when adding new rows to the index; I wish resources scaled up/down automatically instead of my having to monitor and switch to the next plan<p>How could Chroma help me here?</p>
]]></description><pubDate>Wed, 20 Aug 2025 05:22:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=44958868</link><dc:creator>BrandiATMuhkuh</dc:creator><comments>https://news.ycombinator.com/item?id=44958868</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44958868</guid></item><item><title><![CDATA[The first conference for TypeScript AI developers]]></title><description><![CDATA[
<p>Article URL: <a href="https://mastra.ai/conf">https://mastra.ai/conf</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44848227">https://news.ycombinator.com/item?id=44848227</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 09 Aug 2025 17:08:35 +0000</pubDate><link>https://mastra.ai/conf</link><dc:creator>BrandiATMuhkuh</dc:creator><comments>https://news.ycombinator.com/item?id=44848227</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44848227</guid></item><item><title><![CDATA[New comment by BrandiATMuhkuh in "So you want to parse a PDF?"]]></title><description><![CDATA[
<p>I started treating everything as images when multimodal LLMs appeared. Even emails. It's so much more robust.
Emails especially are often just a container for a PDF (e.g. a contract) that itself contains a scanned image of a printed contract. Very, very common.<p>I've just moved my company's RAG indexing to images and multimodal embeddings. Works pretty well.</p>
]]></description><pubDate>Mon, 04 Aug 2025 19:16:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=44790227</link><dc:creator>BrandiATMuhkuh</dc:creator><comments>https://news.ycombinator.com/item?id=44790227</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44790227</guid></item><item><title><![CDATA[New comment by BrandiATMuhkuh in "Ask HN: Who wants to be hired? (August 2025)"]]></title><description><![CDATA[
<p><pre><code>   Location: Linz/Austria/EU
   Remote: OK (Any timezone OK)
   Willing to relocate: NO 
   Technologies: AI (LLM, RAG, Agents, Voice, AI-SDK), Typescript, React, Next.js, Supabase, Firebase, Postgres, GCP
   Résumé: https://brandstetter.io/Resume_Brandstetter_Jurgen.pdf
   Email: j@brandstetter.io
   LinkedIn: https://www.linkedin.com/in/j-brandstetter
   Salary: USD 150k / year</code></pre>
I'm an AI engineer and senior full-stack developer aspiring to a team/product lead or CTO-like position. For ~7 years, I was co-founder and CTO of an ed-tech company called amy.app; my co-founder and I scaled the fully-remote company to about 25 employees. Before that, I earned a PhD in Human-Robot Interaction in NZ and at Oxford. After winding down my company, I've spent the past year contracting to learn and apply as much as possible about AI solutions: RAG (multiple TB), agentic search, deep research, LLM-based email automation, AI tools, MCP, vibe coding, ...
Currently, I'm designing and building an agentic enterprise search engine for the AEC industry for a customer (Supabase + Vercel AI SDK, Mastra, multi-modal embeddings, etc.).<p>I'm very product/customer-focused and pragmatic. I like to move super fast (not only because of Cursor :D).<p>My dream environment is an early-stage, fully remote startup.<p>PS: I have a family, which means compensation mainly via shares isn't an option for me.</p>
]]></description><pubDate>Fri, 01 Aug 2025 15:33:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=44758344</link><dc:creator>BrandiATMuhkuh</dc:creator><comments>https://news.ycombinator.com/item?id=44758344</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44758344</guid></item><item><title><![CDATA[New comment by BrandiATMuhkuh in "Ask HN: What is so good about MCP servers?"]]></title><description><![CDATA[
<p>Those things are not mutually exclusive. We use RAG and vector stores to index terabytes of data,
then use tool calls (MCP) to let the AI write SQL that queries the data (vector store) directly.</p>
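For illustration: the SQL the AI ends up writing against a pgvector store is usually a plain k-NN query. A hypothetical helper that builds one (table and column names made up) could look like:

```python
def knn_query(table: str, embedding: list[float], k: int = 5) -> tuple[str, tuple]:
    """Build a parameterized pgvector similarity search.
    `<=>` is pgvector's cosine-distance operator."""
    vec_literal = "[" + ",".join(str(x) for x in embedding) + "]"
    sql = (
        f"SELECT id, content, embedding <=> %s::vector AS dist "
        f"FROM {table} ORDER BY dist LIMIT %s"
    )
    return sql, (vec_literal, k)
```

The tool then just executes the statement with the parameters, so the model composes WHERE clauses and joins around it while the vector part stays fixed.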
]]></description><pubDate>Fri, 25 Jul 2025 08:39:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=44681018</link><dc:creator>BrandiATMuhkuh</dc:creator><comments>https://news.ycombinator.com/item?id=44681018</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44681018</guid></item><item><title><![CDATA[New comment by BrandiATMuhkuh in "Global hack on Microsoft Sharepoint hits U.S., state agencies, researchers say"]]></title><description><![CDATA[
<p>I was just building a SharePoint integration for some enterprise customers (I do RAG on their data), and I find it brutal that I now have access to all their SharePoint data across all SharePoint sites, even the ones I don't want to index. And that's even using user login rather than an admin/service-key login.<p>AFAIK, SharePoint's OAuth scopes don't let you restrict access to particular projects only.
(BTW: the same goes for platforms like ACC/BIM360)</p>
]]></description><pubDate>Tue, 22 Jul 2025 06:06:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=44643668</link><dc:creator>BrandiATMuhkuh</dc:creator><comments>https://news.ycombinator.com/item?id=44643668</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44643668</guid></item></channel></rss>