<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: appenz</title><link>https://news.ycombinator.com/user?id=appenz</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 06 May 2026 08:25:10 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=appenz" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by appenz in "Gemini 2.5 Flash Image"]]></title><description><![CDATA[
<p>I tested it against Flux Pro Kontext (also image editing), and while it's a very different style and approach, I overall like Flux better: more focus on image consistency, it adjusts the lighting correctly, and it fixes contradictions in the image.</p>
]]></description><pubDate>Tue, 26 Aug 2025 16:12:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=45028535</link><dc:creator>appenz</dc:creator><comments>https://news.ycombinator.com/item?id=45028535</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45028535</guid></item><item><title><![CDATA[New comment by appenz in "Long Live the 'GPU Poor' – Open-Source AI Grants"]]></title><description><![CDATA[
<p>No deeper reason. I think there's just a lot happening in LLMs right now, which skewed it towards them. We would love to do something in the SD ecosystem.</p>
]]></description><pubDate>Wed, 30 Aug 2023 18:07:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=37326337</link><dc:creator>appenz</dc:creator><comments>https://news.ycombinator.com/item?id=37326337</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37326337</guid></item><item><title><![CDATA[New comment by appenz in "Long Live the 'GPU Poor' – Open-Source AI Grants"]]></title><description><![CDATA[
<p>We 100% agree!</p>
]]></description><pubDate>Wed, 30 Aug 2023 18:05:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=37326306</link><dc:creator>appenz</dc:creator><comments>https://news.ycombinator.com/item?id=37326306</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37326306</guid></item><item><title><![CDATA[New comment by appenz in "Pinecone raises $100M Series B"]]></title><description><![CDATA[
<p>Chat history may work; it depends on how long it is and the business model.<p>I don't quite understand how general summarization would work. If you use an LLM simply to summarize in order to feed it into a prompt, the summarization needs to be specific to the query, i.e. "summarize what this text says about topic X". You can't summarize long text in a generic way without losing information. Or do I misunderstand the comment?<p>If you have a perfect table of contents (or better, an index by topic) you may not need semantic search. But for the typical use case we are seeing, you have unstructured data without an index (e.g. tech support knowledge db entries, company reports, emails). For that, semantic search works quite well.<p>For the sizes, the observation is that the data people want to search over (e.g. your email, a wiki, JIRA, a knowledge base) is far larger than the context length. You are correct that we assume inference cost and speed won't decrease sufficiently quickly in the near future. The why is a longer topic, but in a nutshell, GPU speed increases are ~2.5x gen over gen, and other than overtraining vs. Chinchilla we don't see immediate model gains. But that is speculative; we don't know what's in store.<p>To some degree we are just reacting to user adoption in the market. We don't build these systems, but if we see enough of them we eventually recognize the pattern. And while I am optimistic, we could be wrong. AI is a major revolution and we are all students.<p>edit: disclaimer, I work for a16z.</p>
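<p>As a rough sketch of what semantic search over unstructured data without an index means in practice (a minimal illustration only; the embed() helper is a stand-in for any text-embedding model, not a specific API, and the ranking is plain cosine similarity):<p><pre><code># Minimal sketch of semantic search over unstructured documents (illustrative only).
# embed() stands in for a real text-embedding model; here it is a deterministic placeholder.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: a real system would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

def top_k(query: str, docs: list[str], k: int = 3) -> list[str]:
    # Rank documents by cosine similarity between the query and each document embedding.
    q = embed(query)
    scores = [float(q @ embed(d)) for d in docs]
    order = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in order]

docs = [
    "reset your password via the admin console",
    "Q3 revenue grew 12 percent year over year",
    "JIRA ticket triage guide for support engineers",
]
print(top_k("how do I change my password?", docs, k=1))
</code></pre>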
]]></description><pubDate>Fri, 28 Apr 2023 00:23:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=35736152</link><dc:creator>appenz</dc:creator><comments>https://news.ycombinator.com/item?id=35736152</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35736152</guid></item><item><title><![CDATA[New comment by appenz in "Pinecone raises $100M Series B"]]></title><description><![CDATA[
<p>Summarization is <i>much</i> more expensive than vector DBs. Assume you have 1M tokens of context. You could run it all through GPT-4 and summarize the information, but it would cost about $60 (based on current prices) and take tens of minutes of GPU time to do the inference.<p>Disclaimer: I work for a16z on the infra team, so consider me biased.</p>
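<p>As a rough back-of-envelope behind that $60 figure (assuming the April 2023 GPT-4-32k input price of $0.06 per 1K tokens; the exact price tier is an assumption):<p><pre><code># Back-of-envelope cost of pushing 1M tokens of context through GPT-4.
# Assumed price: $0.06 per 1K input tokens (GPT-4-32k tier, April 2023 pricing).
tokens = 1_000_000
price_per_1k_input = 0.06
cost = tokens / 1000 * price_per_1k_input
print(f"~${cost:.0f} just to read the context")  # ~$60
</code></pre>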
]]></description><pubDate>Thu, 27 Apr 2023 22:54:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=35735539</link><dc:creator>appenz</dc:creator><comments>https://news.ycombinator.com/item?id=35735539</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35735539</guid></item><item><title><![CDATA[New comment by appenz in "School vs. Wikipedia"]]></title><description><![CDATA[
<p>Wrong school? Our local schools (Silicon Valley, CA) encourage kids to use Wikipedia. Your school may just be a little behind the times.<p>The discussion has now moved to NLP models. GPT-3 models at this point can generate extremely high-quality answers to complex questions. Is there still a point in asking a student to write a few paragraphs on the definition and effects of acid rain if you can get that from OpenAI within seconds?</p>
]]></description><pubDate>Fri, 07 Oct 2022 16:30:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=33123544</link><dc:creator>appenz</dc:creator><comments>https://news.ycombinator.com/item?id=33123544</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33123544</guid></item></channel></rss>