<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: ewild</title><link>https://news.ycombinator.com/user?id=ewild</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sun, 12 Apr 2026 09:56:19 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=ewild" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by ewild in "Apideck CLI – An AI-agent interface with much lower context consumption than MCP"]]></title><description><![CDATA[
<p>Ok, so with regular orchestration you would essentially lay out all the possible steps the LLM can take in a big orchestration layer in your code, and if it hits the sensitive endpoint, the orchestration paths past that point block off web search. By design, that is. But for something like a Manus-style agent, where you're outsourcing all the work but allowing it to hit your MCP, the MCP just becomes a regular API the LLM can call</p>
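<p>One minimal way to sketch that gating is a tool-filtering step in the orchestration layer: once the agent has called a sensitive endpoint, web search is simply not offered on any later step. All tool names below are hypothetical, not from any real MCP server:</p>

```python
# Hypothetical sketch: gate tools based on the run's call history.
# Once a sensitive endpoint has been hit, web search is removed from
# the tool list handed to the LLM on every subsequent step.

SENSITIVE_TOOLS = {"get_customer_pii"}
ALL_TOOLS = ["web_search", "get_customer_pii", "summarize"]

def allowed_tools(history):
    """Return the tool list for the next LLM step, given prior tool calls."""
    if any(call in SENSITIVE_TOOLS for call in history):
        # Block web search for the rest of the run.
        return [t for t in ALL_TOOLS if t != "web_search"]
    return list(ALL_TOOLS)

# Before any sensitive call, everything is available.
print(allowed_tools([]))
# After the sensitive endpoint has been hit, web search is gone.
print(allowed_tools(["get_customer_pii"]))
```

<p>The point is that the restriction lives in your code, not in the model: the LLM can only pick from whatever list this function returns.</p>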
]]></description><pubDate>Mon, 16 Mar 2026 23:02:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=47406239</link><dc:creator>ewild</dc:creator><comments>https://news.ycombinator.com/item?id=47406239</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47406239</guid></item><item><title><![CDATA[New comment by ewild in "Apideck CLI – An AI-agent interface with much lower context consumption than MCP"]]></title><description><![CDATA[
<p>I feel like I don't fully understand MCP. I've done research on it, but I definitely couldn't explain it. To my knowledge it's a server whose API endpoints are well defined in a JSON schema that's sent to the LLM; the LLM parses that and decides which endpoints to hit (I'm aware some LLMs use smart calling now, so they load only the tool name and description until it's called). How exactly do you stop the LLM from using web search after it hits a certain endpoint in your MCP server? Or is this strictly for when you own the whole workflow and can deny web-search capabilities on the next LLM step?<p>Are there any good docs you've liked for learning about it, or good open-source projects you used to get familiar? I would like to learn more</p>
]]></description><pubDate>Mon, 16 Mar 2026 18:23:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=47402779</link><dc:creator>ewild</dc:creator><comments>https://news.ycombinator.com/item?id=47402779</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47402779</guid></item><item><title><![CDATA[New comment by ewild in "Warren Buffett dumps $1.7B of Amazon stock"]]></title><description><![CDATA[
<p>He owns GEICO</p>
]]></description><pubDate>Wed, 18 Feb 2026 18:32:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47064415</link><dc:creator>ewild</dc:creator><comments>https://news.ycombinator.com/item?id=47064415</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47064415</guid></item><item><title><![CDATA[New comment by ewild in "Brown/MIT shooting suspect found dead, officials say"]]></title><description><![CDATA[
<p>I was at the CVS right next to the extra storage when the helicopters and all the police showed up. It was kinda nuts to be so close to an event like this.</p>
]]></description><pubDate>Sat, 20 Dec 2025 05:57:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=46333949</link><dc:creator>ewild</dc:creator><comments>https://news.ycombinator.com/item?id=46333949</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46333949</guid></item><item><title><![CDATA[New comment by ewild in "ChatGPT Pulse"]]></title><description><![CDATA[
<p>So, probabilistic biases.</p>
]]></description><pubDate>Thu, 25 Sep 2025 19:52:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=45378067</link><dc:creator>ewild</dc:creator><comments>https://news.ycombinator.com/item?id=45378067</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45378067</guid></item><item><title><![CDATA[New comment by ewild in "My AI skeptic friends are all nuts"]]></title><description><![CDATA[
<p>I have not written about it because I'm a silent partner in the company and don't want my name publicly attached to it, but with the code and prompts it's more like talking to a buddy: I ask it to build specific things, then I look through and make changes. For instance, there is a lot of graph traversal in the data layer I built. I'm not an expert on graph traversal, so I researched what would be a good algorithm for my type of data, and then used Claude to implement the paper's algorithm against my code and data structures. I don't have the LLM in any steps the customer interacts with (there is some fuzzy stuff, but nothing consistently run), but I would say an LLM has touched over 90% of the code I wrote. It's just an upgraded rubber ducky to me.<p>If I weren't experienced in computer science this would all fall apart; I do have to fix almost all the code. But spending 10 minutes fixing something is better than 3 days figuring it out in the first place (again, this might be unique to my coding and learning style)</p>
]]></description><pubDate>Tue, 03 Jun 2025 03:01:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=44165859</link><dc:creator>ewild</dc:creator><comments>https://news.ycombinator.com/item?id=44165859</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44165859</guid></item><item><title><![CDATA[New comment by ewild in "My AI skeptic friends are all nuts"]]></title><description><![CDATA[
<p>I mean, I can state that I built a company within the last year where I'd say 95% of my code involved using an LLM. I am an experienced dev, so yes, it makes mistakes, and it requires my expertise to be sure the code works and to fix subtle bugs; however, the three of us built this company in about 7 months, for what would have easily taken me 3 years without the aid of LLMs. Is that an indictment of my ability? Maybe, but we are doing quite well for ourselves at $3M ARR already on only $200K in expenses.</p>
]]></description><pubDate>Mon, 02 Jun 2025 22:37:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=44163976</link><dc:creator>ewild</dc:creator><comments>https://news.ycombinator.com/item?id=44163976</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44163976</guid></item><item><title><![CDATA[New comment by ewild in "TikTok goes dark in the US"]]></title><description><![CDATA[
<p>To me this is the only important one. Not only can they subtly influence the entire US culture; if they were to get in trouble for it, then what? The US doesn't have any influence over them. We would just ban them, and at that point it's too late. Realistically it already is too late. A huge point, imo, is that we AREN'T at war right now, but if we were, the amount of information China could both push and obtain through TikTok would be large enough to change the tides of a war</p>
]]></description><pubDate>Sun, 19 Jan 2025 05:16:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=42753910</link><dc:creator>ewild</dc:creator><comments>https://news.ycombinator.com/item?id=42753910</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42753910</guid></item><item><title><![CDATA[New comment by ewild in "Don't use cosine similarity carelessly"]]></title><description><![CDATA[
<p>The original chunk is most likely stored with it in referential form, such as an id in the metadata used to pull it from a DB, or something along those lines. I do exactly what he does as well: I have an id metadata value that points to a row in a DB holding the text chunks and their respective metadata</p>
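<p>The id-in-metadata pattern described above can be sketched like this, with a plain dict standing in for the real database and all names being illustrative, not any particular vector store's API:</p>

```python
# Hypothetical sketch: the vector index stores only an id in each hit's
# metadata, and the full chunk text lives in a separate store keyed by it.

# Stand-in for a DB table of chunks and their metadata.
chunk_db = {
    "doc1-0": {"text": "Cosine similarity compares vector directions.",
               "source": "blog.html"},
    "doc1-1": {"text": "Normalize embeddings before comparing them.",
               "source": "blog.html"},
}

# A similarity-search hit typically carries metadata, not the chunk itself.
hit = {"score": 0.91, "metadata": {"id": "doc1-0"}}

def resolve_chunk(hit, db):
    """Follow the id in the hit's metadata back to the stored chunk."""
    return db[hit["metadata"]["id"]]

print(resolve_chunk(hit, chunk_db)["text"])
```

<p>This keeps the index small (vectors plus a short id) while the bulkier text and per-chunk metadata stay in a store that's better suited to holding them.</p>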
]]></description><pubDate>Wed, 15 Jan 2025 05:32:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=42707651</link><dc:creator>ewild</dc:creator><comments>https://news.ycombinator.com/item?id=42707651</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42707651</guid></item><item><title><![CDATA[New comment by ewild in "Nvidia announces next-gen RTX 5090 and RTX 5080 GPUs"]]></title><description><![CDATA[
<p>It is absurdly easy to get a 5090 on launch. I've gotten their flagship FE from their website every single launch without fail, from the 2080 to the 3090 to the 4090.</p>
]]></description><pubDate>Tue, 07 Jan 2025 20:56:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=42627362</link><dc:creator>ewild</dc:creator><comments>https://news.ycombinator.com/item?id=42627362</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42627362</guid></item><item><title><![CDATA[New comment by ewild in "Bend: a high-level language that runs on GPUs (via HVM2)"]]></title><description><![CDATA[
<p>The irony in you blasting all over this thread is that you don't know how it even works. You have zero idea whether their claims of scaling linearly are causing bottlenecks in other places, as you state. If you read the actual docs on this, it's clear that the "compiler" part of the compiler was put on the back burner while the parallelization was figured out, and now that that's done, a bunch of optimizations will come in the next year</p>
]]></description><pubDate>Sat, 18 May 2024 00:27:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=40395492</link><dc:creator>ewild</dc:creator><comments>https://news.ycombinator.com/item?id=40395492</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40395492</guid></item><item><title><![CDATA[New comment by ewild in "GPT-4o"]]></title><description><![CDATA[
<p>People like you are the problem: the people who join a website, cause it to be shitty, then leave and start the process over at a new website. Reddit didn't become shit because of Reddit; it became shit because of people going on there commenting as if they themselves were an LLM, repeating "enshittification" over and over and trying to say the big buzzword first so they get to the top, denying any real conversation.</p>
]]></description><pubDate>Mon, 13 May 2024 20:59:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=40348347</link><dc:creator>ewild</dc:creator><comments>https://news.ycombinator.com/item?id=40348347</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40348347</guid></item><item><title><![CDATA[New comment by ewild in "ScrapeGraphAI: Web scraping using LLM and direct graph logic"]]></title><description><![CDATA[
<p>In this case we had 1.5 million ground truths for our testing purposes. We have now run it over 10 million, but I didn't want to claim it had zero hallucinations on those, as technically we can't say for sure; but considering the hallucination rate was 0% for the 1.5 million compared against ground truths, I'm fairly confident.</p>
]]></description><pubDate>Wed, 08 May 2024 14:36:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=40298591</link><dc:creator>ewild</dc:creator><comments>https://news.ycombinator.com/item?id=40298591</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40298591</guid></item><item><title><![CDATA[New comment by ewild in "ScrapeGraphAI: Web scraping using LLM and direct graph logic"]]></title><description><![CDATA[
<p>The 1.5 million was our test set. We had 1.5 million ground truths, and it didn't make up fake data for a single one.</p>
]]></description><pubDate>Wed, 08 May 2024 14:35:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=40298569</link><dc:creator>ewild</dc:creator><comments>https://news.ycombinator.com/item?id=40298569</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40298569</guid></item><item><title><![CDATA[New comment by ewild in "ScrapeGraphAI: Web scraping using LLM and direct graph logic"]]></title><description><![CDATA[
<p>At my job we are scraping using LLMs, for a 10M sector of the company. GPT-4 Turbo has not once out of 1.5 million API requests hallucinated. We use it to parse and interpret data from webpages, though, which is something you wouldn't be able to do with a regular scraper. Not well, at least.</p>
]]></description><pubDate>Wed, 08 May 2024 03:37:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=40294093</link><dc:creator>ewild</dc:creator><comments>https://news.ycombinator.com/item?id=40294093</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40294093</guid></item><item><title><![CDATA[New comment by ewild in "Show HN: CTRL-F for YouTube Videos"]]></title><description><![CDATA[
<p>The model.pth is a custom LSTM for detecting phonetic similarity. As long as you're running it from the pythons folder (I didn't manage file locations very well), it should work.</p>
]]></description><pubDate>Sun, 14 Apr 2024 12:31:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=40030666</link><dc:creator>ewild</dc:creator><comments>https://news.ycombinator.com/item?id=40030666</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40030666</guid></item><item><title><![CDATA[New comment by ewild in "Show HN: CTRL-F for YouTube Videos"]]></title><description><![CDATA[
<p>Damn, I see this after I'm 90% done and just have to make a fancy button lol</p>
]]></description><pubDate>Sat, 13 Apr 2024 22:38:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=40026823</link><dc:creator>ewild</dc:creator><comments>https://news.ycombinator.com/item?id=40026823</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40026823</guid></item><item><title><![CDATA[New comment by ewild in "Show HN: CTRL-F for YouTube Videos"]]></title><description><![CDATA[
<p>if you have any questions feel free to ask!</p>
]]></description><pubDate>Sat, 13 Apr 2024 22:04:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=40026579</link><dc:creator>ewild</dc:creator><comments>https://news.ycombinator.com/item?id=40026579</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40026579</guid></item><item><title><![CDATA[New comment by ewild in "Show HN: CTRL-F for YouTube Videos"]]></title><description><![CDATA[
<p>Would you prefer the timestamp to be hidden entirely, since it takes up a bigass portion of the screen, or for hiding it to be an option in the extension settings?</p>
]]></description><pubDate>Sat, 13 Apr 2024 21:52:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=40026497</link><dc:creator>ewild</dc:creator><comments>https://news.ycombinator.com/item?id=40026497</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40026497</guid></item><item><title><![CDATA[New comment by ewild in "Show HN: CTRL-F for YouTube Videos"]]></title><description><![CDATA[
<p>I guess I might as well do it so I don't need to run a model every time myself too lol. I'll have it done in a day or two.</p>
]]></description><pubDate>Sat, 13 Apr 2024 21:13:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=40026212</link><dc:creator>ewild</dc:creator><comments>https://news.ycombinator.com/item?id=40026212</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40026212</guid></item></channel></rss>