<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: WhiteNoiz3</title><link>https://news.ycombinator.com/user?id=WhiteNoiz3</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 28 Apr 2026 18:10:12 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=WhiteNoiz3" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by WhiteNoiz3 in "All AI Videos Are Harmful (2025)"]]></title><description><![CDATA[
<p>AI (as it currently exists) will never be "creative" in the sense that it can only imitate or interpolate between past works. At some point we'll recognize that AI-generated works are boring and predictable. AI will probably never invent a new musical genre or art form, since it can only reproduce or recombine works from the past. I wonder what happens when the internet is so full of AI-generated slop that the only things worth "training" on were made before AI generation became a thing. Will AI generations be full of dated references to a time gone by?<p>People may say it increases creativity, but I see it more as lowering the bar to produce things. The same could be said of a lot of inventions: photography made producing images easier, and I'm sure a lot of portrait painters lost their jobs to photographers. I think the danger is that we may see a very rapid erosion of jobs in the creative space, and those workers won't find it easy to transition into new fields, which I feel will have a detrimental effect on our society.</p>
]]></description><pubDate>Mon, 05 Jan 2026 20:19:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=46504295</link><dc:creator>WhiteNoiz3</dc:creator><comments>https://news.ycombinator.com/item?id=46504295</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46504295</guid></item><item><title><![CDATA[New comment by WhiteNoiz3 in "Building an internal agent: Code-driven vs. LLM-driven workflows"]]></title><description><![CDATA[
<p>I'm struggling to understand why an LLM even needs to be involved in this at all. Can't you write a script that takes the last 10 Slack messages, checks the GitHub status for any URLs, and adds an emoji? It could be a script or a Slack bot, and it would work far more reliably and cost nothing in LLM calls. IMO it seems far more efficient to have an LLM write a repeatable workflow once than to call an LLM every time.</p>
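For illustration, a rough sketch of the kind of script meant here, using only the Python standard library. The URL pattern, emoji names, and overall flow are my own assumptions, and the Slack-side calls (fetching channel history, posting a reaction) are left as comments since they depend on your bot setup:

```python
import re
import urllib.error
import urllib.request

# Hypothetical pattern for GitHub repo links in message text.
GITHUB_URL = re.compile(r"https://github\.com/[\w.-]+/[\w.-]+")

def extract_github_urls(texts):
    """Pull GitHub repo URLs out of a list of message texts."""
    urls = []
    for text in texts:
        urls.extend(GITHUB_URL.findall(text))
    return urls

def repo_status_emoji(url, timeout=5):
    """Return an emoji name based on whether the repo URL resolves."""
    owner_repo = url.removeprefix("https://github.com/")
    req = urllib.request.Request(
        f"https://api.github.com/repos/{owner_repo}",
        headers={"User-Agent": "status-bot"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            ok = resp.status == 200
    except urllib.error.URLError:
        ok = False
    return "white_check_mark" if ok else "x"

# A Slack bot would wrap this in something like:
#   messages = <last 10 messages via conversations.history>
#   for each message with a GitHub URL:
#       post repo_status_emoji(url) via reactions.add
```

The point of the sketch is that every step is deterministic, so it runs the same way every time with no per-run LLM cost.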
]]></description><pubDate>Fri, 02 Jan 2026 01:23:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=46460306</link><dc:creator>WhiteNoiz3</dc:creator><comments>https://news.ycombinator.com/item?id=46460306</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46460306</guid></item><item><title><![CDATA[New comment by WhiteNoiz3 in "Meta Segment Anything Model 3"]]></title><description><![CDATA[
<p>The models it creates are Gaussian splats, so if you are looking for traditional meshes you'd need a tool that can create meshes from splats.</p>
]]></description><pubDate>Wed, 19 Nov 2025 23:10:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=45986587</link><dc:creator>WhiteNoiz3</dc:creator><comments>https://news.ycombinator.com/item?id=45986587</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45986587</guid></item><item><title><![CDATA[New comment by WhiteNoiz3 in "Meta Ray-Ban Display"]]></title><description><![CDATA[
<p>> Valve actually tried with the first Half-Life game in a decade, and even that didn't work.<p>Half-Life: Alyx is still considered one of the best VR games ever made, and one that is still consistently recommended to new users even years after release. IMO people buy hardware because of the exclusive content. If a standard game console came out and it only had one AAA game on it, I probably wouldn't bother buying it. But if there were 3-4 games that looked really interesting, it starts to look more worth the investment. Playing VR games takes a lot of commitment (time / physical space / $$$), so the payoff has to be worth it or you'll lose people. With the huge amount of money spent on R&D for new hardware, I think it's a valid argument that funding content might have been a better investment in terms of ensuring platform growth.<p>Also, side note, but not every game requires free motion. Plenty of hits had no movement, or used teleportation, etc. A lot of these were completely new (sub-)genres that didn't exist, or wouldn't hit the same, in a traditional pancake game. Plus, lots of kids seem unaffected by free movement (maybe as high as 50% of users, by my rough estimate).</p>
]]></description><pubDate>Thu, 18 Sep 2025 11:28:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=45288312</link><dc:creator>WhiteNoiz3</dc:creator><comments>https://news.ycombinator.com/item?id=45288312</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45288312</guid></item><item><title><![CDATA[New comment by WhiteNoiz3 in "You Have to Feel It"]]></title><description><![CDATA[
<p>Another way to look at it is parallel processing vs. sequential processing: our brains can make a judgement call about a thousand subtle variables and data points that we can't exactly put our fingers on unless we really dig into it, which we usually label as 'feelings', using the parallel part of our brain. The sequential (logical) part can only consider a limited number of variables at a time. I don't think either mode of thinking is inherently worse (we need both), but in our society the feelings part has traditionally been discounted as 'illogical' by academics. I think AI has shown us that parallel processing is actually incredibly important to thinking.<p>But back to the original post: I think 'having good taste' and knowing when something feels like the right solution is one of those hard-to-define qualities that can make the difference between average and great products (and has far-reaching effects in any business).</p>
]]></description><pubDate>Sun, 31 Aug 2025 13:37:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=45083072</link><dc:creator>WhiteNoiz3</dc:creator><comments>https://news.ycombinator.com/item?id=45083072</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45083072</guid></item><item><title><![CDATA[New comment by WhiteNoiz3 in "No elephants: Breakthroughs in image generation"]]></title><description><![CDATA[
<p>I haven't seen any details on how OpenAI's model works, but the tokens it generates aren't directly translated into pixels - those tokens are probably fed into a diffusion process which generates the actual image. The tokens are the latent space or conditioning for the actual image-generation process.</p>
]]></description><pubDate>Tue, 08 Apr 2025 11:19:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=43620390</link><dc:creator>WhiteNoiz3</dc:creator><comments>https://news.ycombinator.com/item?id=43620390</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43620390</guid></item><item><title><![CDATA[New comment by WhiteNoiz3 in "How I program with LLMs"]]></title><description><![CDATA[
<p>A better way to phrase it might be don't use it for something that you aren't able to verify or validate.</p>
]]></description><pubDate>Tue, 07 Jan 2025 03:27:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=42618845</link><dc:creator>WhiteNoiz3</dc:creator><comments>https://news.ycombinator.com/item?id=42618845</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42618845</guid></item><item><title><![CDATA[New comment by WhiteNoiz3 in "Show HN: Convolution Solver and Visualizer"]]></title><description><![CDATA[
<p>Thanks, this is useful</p>
]]></description><pubDate>Thu, 21 Nov 2024 15:56:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=42205581</link><dc:creator>WhiteNoiz3</dc:creator><comments>https://news.ycombinator.com/item?id=42205581</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42205581</guid></item><item><title><![CDATA[New comment by WhiteNoiz3 in "Gen AI will increase demand for software engineers"]]></title><description><![CDATA[
<p>He's not arguing that no jobs will be displaced; he's arguing that jobs will change: engineering may become more reliable, and new types of software jobs may be created.</p>
]]></description><pubDate>Fri, 14 Jun 2024 12:31:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=40680267</link><dc:creator>WhiteNoiz3</dc:creator><comments>https://news.ycombinator.com/item?id=40680267</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40680267</guid></item><item><title><![CDATA[New comment by WhiteNoiz3 in "Gen AI will increase demand for software engineers"]]></title><description><![CDATA[
<p>It literally says "my personal blog" at the top</p>
]]></description><pubDate>Fri, 14 Jun 2024 12:29:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=40680259</link><dc:creator>WhiteNoiz3</dc:creator><comments>https://news.ycombinator.com/item?id=40680259</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40680259</guid></item><item><title><![CDATA[New comment by WhiteNoiz3 in "Slack AI Training with Customer Data"]]></title><description><![CDATA[
<p>Agreed, and that is my concern as well: if people get too comfortable with it, then companies will keep pushing the bounds of what is acceptable. We will need companies to be transparent about ALL the things they are using our data for.</p>
]]></description><pubDate>Fri, 17 May 2024 14:36:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=40390421</link><dc:creator>WhiteNoiz3</dc:creator><comments>https://news.ycombinator.com/item?id=40390421</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40390421</guid></item><item><title><![CDATA[New comment by WhiteNoiz3 in "Slack AI Training with Customer Data"]]></title><description><![CDATA[
<p>From the wording, it sounds like they are conscious of the potential for data leakage and have taken steps to avoid it. It really depends on how they are applying AI/ML. It can be done in a private way if you are thoughtful about how you do it. For example:<p>Their channel recommendations:
"We use external models (not trained on Slack messages) to evaluate topic similarity, outputting numerical scores. Our global model only makes recommendations based on these numerical scores and non-Customer Data"<p>Meaning they use a non-Slack-trained model to generate embeddings for search. Then they apply a recommender system (which is mostly ML, not an LLM). This sounds like it can be kept private.<p>Search results:
"We do this based on historical search results and previous engagements without learning from the underlying text of the search query, result, or proxy"
Again, this is probably a combination of non-Slack-trained embeddings with machine learning algos based on engagement. This sounds like it can be kept private and team specific.<p>Autocomplete:
"These suggestions are local and sourced from common public message phrases in the user’s workspace."
I would be concerned about private messages being leaked via autocomplete, but if it's based on public messages specific to your team, that should be ok?<p>Emoji suggestions:
"using the content and sentiment of the message, the historic usage of the emoji [in your team]"
Again, it sounds like they are using models for sentiment analysis (which they probably didn't train themselves, and which, even if they did, don't really leak any training data) and some ML or other algos to pick common emojis specific to your team.<p>To me these are all standard applications of NLP / ML that have been around for a long time.</p>
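To make the "numerical scores" pattern concrete, here is a tiny cosine-similarity sketch of the kind of scoring an external embedding model would feed into a recommender. The vectors are made up, standing in for embeddings from a model not trained on Slack data:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical topic embeddings from an external (non-Slack-trained) model.
channel_vec = [0.9, 0.1, 0.3]
user_interest_vec = [0.8, 0.2, 0.4]

# The downstream recommender sees only this score, never the message text.
score = cosine_similarity(channel_vec, user_interest_vec)
```

The privacy claim rests on exactly this separation: the global model consumes the score, not the underlying content.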
]]></description><pubDate>Fri, 17 May 2024 14:35:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=40390409</link><dc:creator>WhiteNoiz3</dc:creator><comments>https://news.ycombinator.com/item?id=40390409</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40390409</guid></item><item><title><![CDATA[New comment by WhiteNoiz3 in "Slack AI Training with Customer Data"]]></title><description><![CDATA[
<p>To add some nuance to this conversation: what they are using this for is Channel recommendations, Search results, Autocomplete, and Emoji suggestions, and the model(s) they train are specific to your workspace (not shared between workspaces). All of which seems like it could be handled fairly privately using some sort of vector (embeddings) search.<p>I am not defending Slack, and I can think of a number of cases where training on Slack messages could go very badly (e.g., exposing private conversations, data leakage between workspaces, etc.), but I think it helps to understand the context before reacting. Personally, I do think we need better controls over how our data is used, and Slack should be able to do better than "Email us to opt out".</p>
]]></description><pubDate>Fri, 17 May 2024 11:00:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=40388530</link><dc:creator>WhiteNoiz3</dc:creator><comments>https://news.ycombinator.com/item?id=40388530</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40388530</guid></item><item><title><![CDATA[New comment by WhiteNoiz3 in "Player-Driven Emergence in LLM-Driven Game Narrative"]]></title><description><![CDATA[
<p>A lot of the dialogue that OpenAI's models write is incredibly bland. I really think we'll need less-censored models trained to act in roles other than just 'super safe and friendly assistant'.</p>
]]></description><pubDate>Fri, 10 May 2024 16:59:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=40321229</link><dc:creator>WhiteNoiz3</dc:creator><comments>https://news.ycombinator.com/item?id=40321229</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40321229</guid></item><item><title><![CDATA[New comment by WhiteNoiz3 in "Ask HN: Any advice on navigating this job market or pivoting out of tech? (US)"]]></title><description><![CDATA[
<p>I wouldn't count yourself out just because you are a junior. Companies love hiring junior devs because their salaries are the lowest. It's senior devs that should be worried.<p>I've yet to see any programming jobs replaced due to AI. Writing code is only half the job; understanding a company's existing tech stack, APIs, internal struggles, business needs, etc., and figuring out how to implement what the business wants are all important. AI will still have a hard time with these things for a while yet.</p>
]]></description><pubDate>Sun, 03 Mar 2024 02:05:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=39577751</link><dc:creator>WhiteNoiz3</dc:creator><comments>https://news.ycombinator.com/item?id=39577751</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39577751</guid></item><item><title><![CDATA[New comment by WhiteNoiz3 in "Mixed Reality gone in Windows 11 Insider Preview Build 26052"]]></title><description><![CDATA[
<p>OP was arguing that it was decreasing. Steam is a bad source of info for a lot of reasons. Meta is the largest platform, and they don't release numbers. My personal feeling (Apple Vision Pro hype notwithstanding), as a VR developer, is that the industry is in the slow-growth phase and will continue to grow as real use cases are found and hardware continues to improve year over year.</p>
]]></description><pubDate>Sat, 10 Feb 2024 16:45:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=39327674</link><dc:creator>WhiteNoiz3</dc:creator><comments>https://news.ycombinator.com/item?id=39327674</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39327674</guid></item><item><title><![CDATA[New comment by WhiteNoiz3 in "Mixed Reality gone in Windows 11 Insider Preview Build 26052"]]></title><description><![CDATA[
<p>Your link is from 2023. Current numbers:<p>Steam users with VR headsets: 2.24% (vs. 2.07% last Feb)<p>See <a href="https://store.steampowered.com/hwsurvey" rel="nofollow">https://store.steampowered.com/hwsurvey</a> (VR section)</p>
]]></description><pubDate>Sat, 10 Feb 2024 12:53:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=39325837</link><dc:creator>WhiteNoiz3</dc:creator><comments>https://news.ycombinator.com/item?id=39325837</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=39325837</guid></item><item><title><![CDATA[New comment by WhiteNoiz3 in "Things are about to get worse for generative AI"]]></title><description><![CDATA[
<p>I agree with this too. AI is only going to exacerbate the signal-to-noise problem on the web.</p>
]]></description><pubDate>Sat, 30 Dec 2023 21:09:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=38819178</link><dc:creator>WhiteNoiz3</dc:creator><comments>https://news.ycombinator.com/item?id=38819178</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38819178</guid></item><item><title><![CDATA[New comment by WhiteNoiz3 in "Things are about to get worse for generative AI"]]></title><description><![CDATA[
<p>> Except these aren't databases, so that's generally not possible<p>Not directly, and not in every case, but it IS possible to use embeddings to link to similar material. People are doing it pretty commonly using the RAG approach, and Bard is already providing sources, etc. It may not be perfect, but the onus is on the AI companies to figure out how to do it right, not just claim helplessness.<p>> Okay... but then, if I write a book should I be able to opt out of you being allowed to read it? What conditions should I be able to put on who can read my work?<p>Sites that don't want to appear in search results, or have sensitive info they don't want to get into search engines, can use robots.txt, which has been around since the early days of the web. There are many valid reasons to have mechanisms to prevent something from being included in training data, and I would also argue this is a core feature necessary to spur adoption by businesses, as we've already seen. Otherwise, I am not sure I understand your reasoning: people can publish websites and opt to have them excluded from search, and the same should apply to AI.</p>
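For illustration, an AI-training opt-out in the robots.txt style the comment points to might look like this (GPTBot is OpenAI's published crawler token; treat the exact tokens as an assumption, since they vary by vendor and change over time):

```
# Block OpenAI's crawler from the whole site
User-agent: GPTBot
Disallow: /

# Everyone else may crawl as usual
User-agent: *
Allow: /
```

Compliance is voluntary on the crawler's side, which is exactly why the comment argues for stronger, enforceable mechanisms.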
]]></description><pubDate>Sat, 30 Dec 2023 21:06:20 +0000</pubDate><link>https://news.ycombinator.com/item?id=38819135</link><dc:creator>WhiteNoiz3</dc:creator><comments>https://news.ycombinator.com/item?id=38819135</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38819135</guid></item><item><title><![CDATA[New comment by WhiteNoiz3 in "Things are about to get worse for generative AI"]]></title><description><![CDATA[
<p>IMO, this is probably the goal of the NYTimes lawsuits as well</p>
]]></description><pubDate>Sat, 30 Dec 2023 15:12:09 +0000</pubDate><link>https://news.ycombinator.com/item?id=38815806</link><dc:creator>WhiteNoiz3</dc:creator><comments>https://news.ycombinator.com/item?id=38815806</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38815806</guid></item></channel></rss>