<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: hooande</title><link>https://news.ycombinator.com/user?id=hooande</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 07 Apr 2026 08:11:46 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=hooande" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by hooande in "Paying off people's medical debt has little impact on their lives, study finds"]]></title><description><![CDATA[
<p>Medical debt is different. The legal system frowns on people running up credit card debt to pay for PS5s or nice vacations with no intention of ever paying it back. That's tantamount to theft. Most medical debt is involuntary and necessary to survive. It doesn't make sense for it to have the same penalties as other forms of credit.<p>In general in the US, life-saving or emergency medical care is administered without regard for the patient's ability to pay. Hospitals are already subsidized or compensated in various ways for this. The real issue is preventative or precautionary care. If Americans had that for free, like with the NHS, there would be fewer $XXX,XXX debts later in life.</p>
]]></description><pubDate>Mon, 08 Apr 2024 14:59:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=39970457</link><dc:creator>hooande</dc:creator><comments>https://news.ycombinator.com/item?id=39970457</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39970457</guid></item><item><title><![CDATA[New comment by hooande in "More Agents Is All You Need: LLMs performance scales with the number of agents"]]></title><description><![CDATA[
<p>LLMs were specifically trained to emulate human interaction patterns. Of course we sound like them at times. It's the things we can do that they can't that are relevant.<p>If I study Einstein and learn to do a really good impression, the statement "Einstein often sounds like karmacondon" will be true. That does not make me Einstein.</p>
]]></description><pubDate>Sun, 07 Apr 2024 03:20:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=39957903</link><dc:creator>hooande</dc:creator><comments>https://news.ycombinator.com/item?id=39957903</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39957903</guid></item><item><title><![CDATA[New comment by hooande in "You won't find a technical co-founder"]]></title><description><![CDATA[
<p>In my experience a non technical founder should have one of the following:<p>1) Access to capital, normally through family or a family friend. I've worked at several companies where the main thing the CEO brought to the table was that someone trusted him or her enough to invest millions of dollars.<p>2) At least 5 years working a 9-5 job in the target industry and the associated social connections and experience. This eliminates most college students, sadly.<p>3) Something <i>unique</i> that enables the execution of the idea. This is normally a relationship or insider knowledge. The answer to "Why you?" can't be "Because I had the idea".<p>The most common exception I see to this list is when both founders have the same level of passion for solving a given problem. If you have to explain the opportunity and get someone else interested in it, it could be a tough road.<p>That said, don't lose hope. It's a big world. People meet and things happen.</p>
]]></description><pubDate>Tue, 02 Apr 2024 13:11:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=39905353</link><dc:creator>hooande</dc:creator><comments>https://news.ycombinator.com/item?id=39905353</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39905353</guid></item><item><title><![CDATA[New comment by hooande in "Sora: Creating video from text"]]></title><description><![CDATA[
<p>This really seems like "DALL-E", but for videos. I can make cool/funny videos for my friends, but after a while the novelty wears off.<p>All of the AI-generated media has this quality where I can immediately tell that it's AI, and that becomes my dominant thought. I see these things on social media and think "oh, another AI pic" and keep scrolling. I've yet to be confused about whether something is AI-generated or real for more than several seconds.<p>Consistency and continuity still seem to be major issues. It would be very difficult to tell a story using Sora because details and the overall style would change from scene to scene. This is also true of the newest image models.<p>Many people think that Sora is the second coming, and I hope it turns out to have a major impact on all of our lives. But right now it's looking to have about the same impact that DALL-E has had so far.</p>
]]></description><pubDate>Thu, 15 Feb 2024 20:19:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=39388183</link><dc:creator>hooande</dc:creator><comments>https://news.ycombinator.com/item?id=39388183</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39388183</guid></item><item><title><![CDATA[New comment by hooande in "Why is machine learning 'hard'? (2016)"]]></title><description><![CDATA[
<p>Debugging is a problem. But the real problem I'm seeing is our expectations as software developers. We're used to being able to fix any problem that we see. If a div is misaligned or a column of numbers is wrong we can open the file, find the offending lines of code and FIX it.<p>Machine learning is different because every implementation has a known error rate. If your application has a measured 80% accuracy then 20% of cases WILL have an error. You don't know which 20% and you don't get to choose. There's no way to notice a problem and immediately fix it, like you can with almost every other kind of engineering. At best you can expand your dataset, incorporate new models, or fix actual bugs in the code. Doing those things could increase the accuracy to, say, 85%. This means there will be fewer errors overall, but the one that you happened to notice may or may not still be there. There's no way to directly intervene.<p>I see a lot of people who are new to the field struggle with this. There are many ways to improve models and handle edge cases. But not being able to fix a problem that's in front of you takes some getting used to.</p>
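<p>A minimal sketch of that dynamic, with toy numbers and a coin-flip stand-in for a real model (nothing here is from a real system):<p><pre><code>import random

random.seed(0)

N = 1000
labels = [random.randint(0, 1) for _ in range(N)]

def fake_model(accuracy):
    # Stand-in for a trained model: each prediction is
    # independently wrong with probability 1 - accuracy.
    preds = []
    for y in labels:
        wrong = random.random() > accuracy
        preds.append(1 - y if wrong else y)
    return preds

v1 = fake_model(0.80)  # the model you shipped, ~80% accurate
v2 = fake_model(0.85)  # after more data and fixes, ~85% accurate

def acc(preds):
    return sum(p == y for p, y in zip(preds, labels)) / N

print(f"v1: {acc(v1):.1%}  v2: {acc(v2):.1%}")

# The specific failing case you noticed in v1...
bug = next(i for i in range(N) if v1[i] != labels[i])
# ...may or may not be fixed in v2. There was no way to patch it.
print(f"case {bug} still wrong in v2: {v2[bug] != labels[bug]}")</code></pre>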
]]></description><pubDate>Wed, 24 Jan 2024 05:43:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=39114036</link><dc:creator>hooande</dc:creator><comments>https://news.ycombinator.com/item?id=39114036</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39114036</guid></item><item><title><![CDATA[New comment by hooande in "OpenAI's employees were given two explanations for why Sam Altman was fired"]]></title><description><![CDATA[
<p>Alternative theory: ChatGPT was a runaway hit product that sucked up a lot of the organization's resources and energy. Sam and Greg wanted to roll with it and others on the board did not. They voted on it and one side won.<p>There isn't a bigger, more interesting story here. This is in fact a very common story that plays out at many software companies. The board of OpenAI ended up making a decision that destroyed billions of dollars' worth of brand value and goodwill. That's all there is to it.</p>
]]></description><pubDate>Tue, 21 Nov 2023 06:58:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=38360307</link><dc:creator>hooande</dc:creator><comments>https://news.ycombinator.com/item?id=38360307</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38360307</guid></item><item><title><![CDATA[New comment by hooande in "OpenAI's employees were given two explanations for why Sam Altman was fired"]]></title><description><![CDATA[
<p>Why wouldn't Ilya come out and say this? Why wouldn't any of the other people who witnessed the software behave in an unexpected way say something?<p>I get that this is a "just for fun" hypothesis, which is why I have just-for-fun questions, like: what incentive does anyone have to keep clearly observed AI risk a secret during such a public situation?</p>
]]></description><pubDate>Tue, 21 Nov 2023 04:24:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=38359210</link><dc:creator>hooande</dc:creator><comments>https://news.ycombinator.com/item?id=38359210</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38359210</guid></item><item><title><![CDATA[New comment by hooande in "OpenAI staff threaten to quit unless board resigns"]]></title><description><![CDATA[
<p>For real. It's like, did you see <i>Oppenheimer</i>? There's a reason they put the military in charge of that.</p>
]]></description><pubDate>Mon, 20 Nov 2023 21:48:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=38355378</link><dc:creator>hooande</dc:creator><comments>https://news.ycombinator.com/item?id=38355378</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38355378</guid></item><item><title><![CDATA[New comment by hooande in "OpenAI staff threaten to quit unless board resigns"]]></title><description><![CDATA[
<p>This is one of the most insightful comments I've seen on this whole situation.</p>
]]></description><pubDate>Mon, 20 Nov 2023 21:40:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=38355252</link><dc:creator>hooande</dc:creator><comments>https://news.ycombinator.com/item?id=38355252</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38355252</guid></item><item><title><![CDATA[New comment by hooande in "OpenAI staff threaten to quit unless board resigns"]]></title><description><![CDATA[
<p>The people working there would know if they were getting close to AGI. They wouldn't be so willing to quit, or to jeopardize civilization-altering technology, for the sake of one person. This looks like normal people working on normal things, who really like their CEO.</p>
]]></description><pubDate>Mon, 20 Nov 2023 21:08:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=38354792</link><dc:creator>hooande</dc:creator><comments>https://news.ycombinator.com/item?id=38354792</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38354792</guid></item><item><title><![CDATA[New comment by hooande in "Sam Altman, Greg Brockman and others to join Microsoft"]]></title><description><![CDATA[
<p>OpenAI existed for years before ChatGPT. Granted, at a much smaller size and with hundreds fewer employees.<p>I imagine that the board wants to go back to that or something like it.</p>
]]></description><pubDate>Mon, 20 Nov 2023 09:43:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=38345529</link><dc:creator>hooande</dc:creator><comments>https://news.ycombinator.com/item?id=38345529</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38345529</guid></item><item><title><![CDATA[New comment by hooande in "Emmett Shear becomes interim OpenAI CEO as Altman talks break down"]]></title><description><![CDATA[
<p>If this were true they never would have had talks to bring him back. That's the opposite of steadfast commitment to principles. If Sam wronged them or the company in a significant way they never should have let him back in the building.<p>The board's decisions may or may not turn out to be correct in hindsight. But it's very difficult to say that this was a good example of leadership or decision making.</p>
]]></description><pubDate>Mon, 20 Nov 2023 07:04:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=38343704</link><dc:creator>hooande</dc:creator><comments>https://news.ycombinator.com/item?id=38343704</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38343704</guid></item><item><title><![CDATA[New comment by hooande in "On the 10th anniversary of the Snowden revelations"]]></title><description><![CDATA[
<p>In all these years I'd never seen this. Ironic, but not surprising, that according to this account Snowden did exactly what he accused the US government of doing: mass-collecting data with no authorization or purpose and then using it to accuse someone he disagreed with of crimes.</p>
]]></description><pubDate>Wed, 06 Sep 2023 05:26:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=37401600</link><dc:creator>hooande</dc:creator><comments>https://news.ycombinator.com/item?id=37401600</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37401600</guid></item><item><title><![CDATA[New comment by hooande in "Frequent and infrequent users of social media respond differently to rewards"]]></title><description><![CDATA[
<p>> These companies have hired focus groups, marketing experts, psychologists, and countless design teams to get people hooked on their platform. Are we really surprised?<p>I don't think that this is the problem. From what I know, hackernews doesn't hire any marketing experts, psychologists or focus groups. It doesn't even support images. I was more addicted to this site than any other, despite the lack of psychological tricks.<p>And there are many sites that DO employ full psychological warfare teams that I completely ignore. Tinder seemed fully committed to forcing repetitive user engagement. And I dropped that site after about two days, psychologists or not. If all it took to force engagement was a certain list of UI tricks, every funded social site would be able to do it.<p>I don't think there's a single root cause or an off switch for social media. This phenomenon is here to stay, for better or worse. I think that we will adapt as a species but there's no going back.</p>
]]></description><pubDate>Mon, 28 Aug 2023 19:20:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=37299136</link><dc:creator>hooande</dc:creator><comments>https://news.ycombinator.com/item?id=37299136</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37299136</guid></item><item><title><![CDATA[New comment by hooande in "AI Business Toolkit"]]></title><description><![CDATA[
<p>"You do not provide an API key. Please enter your openai key"<p>I'm not entering my openai key to a random website. My billing is tied to that.<p>This really makes it difficult to demo the product.</p>
]]></description><pubDate>Thu, 24 Aug 2023 18:11:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=37252520</link><dc:creator>hooande</dc:creator><comments>https://news.ycombinator.com/item?id=37252520</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37252520</guid></item><item><title><![CDATA[New comment by hooande in "Ask HN: Tell us about your project that's not done yet but you want feedback on"]]></title><description><![CDATA[
<p>This idea seems like it's 100% about distribution. If I owned an extreme sports rental shop, I would have an incentive to get more people to go out and participate in the activities. If the app was well made, reputable and secure I would consider putting up a sign in my store or whatnot.<p>People that run equipment rental stores probably have a facebook group or professional association. If you can befriend someone influential in one of those, it might be a good place to get started.</p>
]]></description><pubDate>Thu, 17 Aug 2023 02:04:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=37156269</link><dc:creator>hooande</dc:creator><comments>https://news.ycombinator.com/item?id=37156269</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37156269</guid></item><item><title><![CDATA[New comment by hooande in "Why host your own LLM?"]]></title><description><![CDATA[
<p>It's gotten better for everyone in the last few months. It used to be a nightmare, but I haven't seen a timeout or rate limit error in a long time.</p>
]]></description><pubDate>Tue, 15 Aug 2023 19:01:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=37137867</link><dc:creator>hooande</dc:creator><comments>https://news.ycombinator.com/item?id=37137867</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37137867</guid></item><item><title><![CDATA[New comment by hooande in "Azure ChatGPT: Private and secure ChatGPT for internal enterprise use"]]></title><description><![CDATA[
<p>There is no clear answer. It's debatable among experts.<p>The grandparent post seems to believe that the issue is algorithmic complexity and programming aptitude. Personally, I think that all the major LLMs are using the same basic transformer architecture with relatively minor differences in code.<p>GPT is trained on more data with more parameters than any open source model. The size does matter, far more than the software does. In my experience with data science, the best programmers in the world can only do so much if they are operating with 1/10th the scale of data. That applies to any problem.</p>
]]></description><pubDate>Mon, 14 Aug 2023 02:35:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=37116843</link><dc:creator>hooande</dc:creator><comments>https://news.ycombinator.com/item?id=37116843</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37116843</guid></item><item><title><![CDATA[New comment by hooande in "MetaGPT: Meta Programming for Multi-Agent Collaborative Framework"]]></title><description><![CDATA[
<p>The difference is that an LLM isn't 1,000 different intelligences. It's one intelligence, being asked to pretend to be 1,000 different people. Every instance is essentially the same weights trained on essentially the same data. The difference in perspective doesn't resemble the difference between any two humans.<p>Humans love to think of multi-agent systems as being like a team of people. It's much more like a writer imagining different characters and how they would respond. When George RR Martin imagines all 500 characters in Game of Thrones, there is a lot of diversity of perspective and thought there. But all of that is coming from one intelligence and doesn't represent a collaboration in any traditional sense.</p>
]]></description><pubDate>Thu, 10 Aug 2023 18:17:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=37079748</link><dc:creator>hooande</dc:creator><comments>https://news.ycombinator.com/item?id=37079748</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37079748</guid></item><item><title><![CDATA[New comment by hooande in "MetaGPT: Meta Programming for Multi-Agent Collaborative Framework"]]></title><description><![CDATA[
<p>The problem with this is that it has no memory across the different contexts. An analogy would be giving one page of a five-page document to five different people, then taking it away and asking them to collaborate. While they can each give more attention to their individual page, none of them can see the whole picture, and a lot of information will be lost when trying to communicate.<p>You can use multiple agents, or split a lot of information across multiple requests to one agent. The result is the same. Some problems require a full understanding of the whole picture.</p>
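<p>A toy version of the five-page analogy (the document, parties, and amounts are all invented; a real multi-agent setup would route actual LLM calls, which are omitted here):<p><pre><code># Each "agent" sees exactly one page of a contract.
pages = [
    "Page 1: The Client is Acme Corp. The Provider is Zenith LLC.",
    "Page 2: Definitions of terms used throughout.",
    "Page 3: Obligations of each party.",
    "Page 4: Termination and liability.",
    "Page 5: The Client pays the Provider 10,000 per month.",
]

# To answer "how much does Acme pay?" you need the mapping
# Client = Acme (page 1) AND the amount (page 5).
needed = ["Acme", "10,000"]

for i, page in enumerate(pages, start=1):
    hits = [fact for fact in needed if fact in page]
    print(f"agent {i} sees: {hits}")

# No single agent sees both facts. Each can only pass along a
# summary of its own page, and that is where detail gets lost.</code></pre>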
]]></description><pubDate>Thu, 10 Aug 2023 18:08:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=37079645</link><dc:creator>hooande</dc:creator><comments>https://news.ycombinator.com/item?id=37079645</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37079645</guid></item></channel></rss>