<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: MathYouF</title><link>https://news.ycombinator.com/user?id=MathYouF</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 17 Apr 2026 11:05:31 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=MathYouF" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by MathYouF in "This is a teenager"]]></title><description><![CDATA[
<p>"Bachelor's degrees have become essential for well-paid jobs in the US."<p>The lies we continue to allow ourselves to tell as a society.</p>
]]></description><pubDate>Wed, 17 Apr 2024 02:12:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=40059688</link><dc:creator>MathYouF</dc:creator><comments>https://news.ycombinator.com/item?id=40059688</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40059688</guid></item><item><title><![CDATA[I wish there was a way to filter people here]]></title><description><![CDATA[
<p>I've noticed my Twitter feed is better on average than my HN feed these days. For a brief moment I considered that the quality here is going down, or that the quality there is going up (imagine! lol).<p>Then I realized it's all about my ability to filter there.<p>There is a somewhat small but vocal and obvious contingent of HN posters who repeatedly argue in bad faith or make low-effort comments.<p>They are also often found discussing almost exclusively political topics and have very little to offer in terms of technical expertise.<p>I think my experience would be massively improved by being able to filter them out.<p>I see filtering as a kind of distributed community vote: if the majority of the community eventually filters out certain members, the net effect is like a distributed shadow ban. Those people won't be engaged with and will probably leave for somewhere they can get reactions.<p>I'm guessing this goes against HN culture and won't be accepted, but I thought it was worth discussing. My ability to filter has left me with a pretty delightful experience on an app otherwise not known for delightful experiences, so imagine what it could do here on HN.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=34229448">https://news.ycombinator.com/item?id=34229448</a></p>
<p>Points: 4</p>
<p># Comments: 2</p>
]]></description><pubDate>Tue, 03 Jan 2023 09:07:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=34229448</link><dc:creator>MathYouF</dc:creator><comments>https://news.ycombinator.com/item?id=34229448</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34229448</guid></item><item><title><![CDATA[New comment by MathYouF in "Ask HN: What are your predictions for 2023?"]]></title><description><![CDATA[
<p>The real moment of truth will be if any models start to assist massively in research in the hard sciences.<p>Based on the quality of outputs I get when asking for help with somewhat complex AI research problems, I think it'll likely help accelerate the pace of other research as well, and discovery will be limited by people's speed of running the tests it suggests and feeding it back the results.</p>
]]></description><pubDate>Sun, 25 Dec 2022 16:09:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=34128358</link><dc:creator>MathYouF</dc:creator><comments>https://news.ycombinator.com/item?id=34128358</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34128358</guid></item><item><title><![CDATA[New comment by MathYouF in "Womp 3D – The New Way to 3D"]]></title><description><![CDATA[
<p>The main value of using SDF shapes for 3D modeling workflows is that you don't have to worry about topology (the vertex-edge-face graph structure which has to be formed over the surface of all 3D models), which makes a lot of modifiers (like boolean combinations of intersecting objects) vastly less tedious (Womp calls this feature "goop").<p>Right now, Blender work still involves a lot of tedium, mostly related to topology. A lot of upcoming 3D ML applications also work considerably better with SDF rather than mesh representations. I wouldn't be surprised to see this form of 3D modeling take off to a significant degree because of those two factors.</p>
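<p>A minimal sketch of why SDF booleans are so much less tedious than mesh booleans (illustrative only; Womp's internals are not public, and the sphere/operation names here are made up for the example): combining two signed distance fields is just a min/max over distances, with no surface topology to stitch or repair.</p>

```python
import math

def sphere_sdf(p, center, radius):
    """Signed distance from point p to the surface of a sphere:
    negative inside, zero on the surface, positive outside."""
    return math.dist(p, center) - radius

def sdf_union(d1, d2):         # boolean union of two shapes
    return min(d1, d2)

def sdf_intersection(d1, d2):  # boolean intersection
    return max(d1, d2)

def sdf_subtract(d1, d2):      # carve shape 2 out of shape 1
    return max(d1, -d2)

# The origin lies inside two overlapping unit spheres, so the
# combined fields report "inside" (negative) with no mesh repair.
d1 = sphere_sdf((0, 0, 0), (0.5, 0, 0), 1.0)
d2 = sphere_sdf((0, 0, 0), (-0.5, 0, 0), 1.0)
print(sdf_union(d1, d2), sdf_intersection(d1, d2))  # both -0.5
```

<p>A mesh boolean has to compute surface intersections and rebuild a valid vertex-edge-face graph; here the "goop" is a one-line pointwise min.</p>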
]]></description><pubDate>Sat, 05 Nov 2022 09:13:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=33479507</link><dc:creator>MathYouF</dc:creator><comments>https://news.ycombinator.com/item?id=33479507</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33479507</guid></item><item><title><![CDATA[New comment by MathYouF in "An unwilling illustrator found herself turned into an AI model"]]></title><description><![CDATA[
<p>Materials, transportation, and time on an expensive CNC machine will be the major costs of sculpture. Generating 3D models of the same quality is at the very furthest 18 months away. And animating and rigging the models and giving them auto-generated RL policies will surely come very quickly after that.</p>
]]></description><pubDate>Tue, 01 Nov 2022 18:41:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=33425575</link><dc:creator>MathYouF</dc:creator><comments>https://news.ycombinator.com/item?id=33425575</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33425575</guid></item><item><title><![CDATA[New comment by MathYouF in "An unwilling illustrator found herself turned into an AI model"]]></title><description><![CDATA[
<p>> I don't think AI will be able to replace human creativity for discovering new paradigms as fast as it will replace human application of existing paradigms. And by doing the latter really well with AI, we're killing our ability to do the former. We'll end up with a sterile art trajectory.<p>This may actually end up making the few artists creative enough to create bold new art styles even more valuable, if they can basically not release their art and hide it behind a model.<p>Though I guess anyone with access to that model's output could then just generate a few samples and train on those, so maybe not.</p>
]]></description><pubDate>Tue, 01 Nov 2022 18:38:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=33425511</link><dc:creator>MathYouF</dc:creator><comments>https://news.ycombinator.com/item?id=33425511</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33425511</guid></item><item><title><![CDATA[New comment by MathYouF in "Ask HN: Navigating Startup Growing Pains Situation"]]></title><description><![CDATA[
<p>With your expertise, and given that they're so early, you should be able to build a competing business with your superior product, right?<p>Then you can keep all the rewards (and find out everything they had to do to be in a position to hire you).<p>Sounds like a win-win: you should be talking to startup accelerators, not startups that are hiring.</p>
]]></description><pubDate>Wed, 26 Oct 2022 17:37:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=33346887</link><dc:creator>MathYouF</dc:creator><comments>https://news.ycombinator.com/item?id=33346887</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33346887</guid></item><item><title><![CDATA[New comment by MathYouF in "Against Algebra"]]></title><description><![CDATA[
<p>> Numeracy is just as valuable as literacy.<p>I actually think this is likely not true.<p>If you were to take an 18-year-old who can't read, and another who can't do addition, which do you think is less employable?<p>There are a lot of jobs one can do without any math; almost none one can do without any reading.<p>There's more to it than all this, of course, but I think literacy is the clear winner compared to numeracy.</p>
]]></description><pubDate>Mon, 17 Oct 2022 17:05:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=33236517</link><dc:creator>MathYouF</dc:creator><comments>https://news.ycombinator.com/item?id=33236517</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33236517</guid></item><item><title><![CDATA[New comment by MathYouF in "The AI Scaling Hypothesis"]]></title><description><![CDATA[
<p>The tone of this suggests a more argumentative than collaborative conversation style than I may want to engage with further (which I've noticed seems common amongst anti-connectionists), but I did find one point interesting for discussion.<p>> Parameters are just the database of the system.<p>Would any equation's parameters be considered just the database, then? c in E=mc^2, or 2 in a^2+b^2=c^2?<p>I suppose those numbers are basically a database, but the relationships (connections) they have to the other variables (inputs) represent a demonstrable truth about the universe.<p>To some degree, every parameter in a neural network also represents some truth about the universe. How general and compact that representation currently is, we don't know (likely less than we'd like of both traits).</p>
]]></description><pubDate>Fri, 07 Oct 2022 17:30:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=33124428</link><dc:creator>MathYouF</dc:creator><comments>https://news.ycombinator.com/item?id=33124428</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33124428</guid></item><item><title><![CDATA[New comment by MathYouF in "The AI Scaling Hypothesis"]]></title><description><![CDATA[
<p>If greater parameterization leads to memorization rather than generalization, that's likely a failure of our current architectures and loss formulations rather than evidence of an inherent benefit of fewer parameters for generalization. Other animals do not generalize better than humans despite having fewer neurons (or their generalizations betray a misunderstanding of the number and depth of subcategories things have, like when a dog barks at everything that passes by the window).</p>
]]></description><pubDate>Fri, 07 Oct 2022 17:20:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=33124279</link><dc:creator>MathYouF</dc:creator><comments>https://news.ycombinator.com/item?id=33124279</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33124279</guid></item><item><title><![CDATA[New comment by MathYouF in "AI wins state fair art contest, annoys humans"]]></title><description><![CDATA[
<p>The team I work on is building tooling with exactly that in mind: making this a part of an artist's workflow rather than any sort of replacement.</p>
]]></description><pubDate>Thu, 01 Sep 2022 08:08:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=32673443</link><dc:creator>MathYouF</dc:creator><comments>https://news.ycombinator.com/item?id=32673443</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32673443</guid></item><item><title><![CDATA[New comment by MathYouF in "Imagen: An AI system that creates photorealistic images from input text"]]></title><description><![CDATA[
<p>I know as much about how to get the best image outputs from text inputs as the person who designed an airport knows about the best place to eat in it. The emergent properties of the system are a result of the data put into it, so I can only discuss the system itself, not what it ended up doing with the data in that system.<p>The models are a product of their datasets, specifically the relationship of the images and prompts via CLIP. CLIP puts both images and text into a coordinate space; imagine just a 2D graph. It tries to ensure that for any real image and its caption, each will be the other's closest neighbor in that coordinate space.<p>So if you want a certain image, you have to ask: "what caption would most likely and most uniquely be given to the image I'm imagining?"<p>I'm sure this advice is way less helpful than what you'd find in the prompt-engineering Discord channels and guides I've seen.</p>
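<p>A toy sketch of that shared coordinate space (not CLIP's real architecture, which uses learned transformer encoders trained with a contrastive loss; the embeddings and filenames below are invented for illustration): once images and captions live in one space, matching is just a nearest-neighbor search by cosine similarity.</p>

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical 3-D embeddings standing in for CLIP's high-dimensional space.
image_embeddings = {
    "photo_of_a_corgi.png": (0.9, 0.1, 0.0),
    "bowl_of_ramen.png":    (0.1, 0.9, 0.1),
}
caption_embedding = (0.8, 0.2, 0.1)  # pretend encoder output for "a corgi"

# A well-trained pairing makes the true image the caption's closest neighbor.
best = max(image_embeddings,
           key=lambda k: cosine(image_embeddings[k], caption_embedding))
print(best)  # → photo_of_a_corgi.png
```

<p>Generation runs this relationship in reverse: the prompt's embedding steers the generator toward images whose embeddings sit nearby, which is why "what caption would this image most uniquely get?" is the right question to ask.</p>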
]]></description><pubDate>Thu, 25 Aug 2022 09:06:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=32591327</link><dc:creator>MathYouF</dc:creator><comments>https://news.ycombinator.com/item?id=32591327</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32591327</guid></item><item><title><![CDATA[New comment by MathYouF in "Imagen: An AI system that creates photorealistic images from input text"]]></title><description><![CDATA[
<p>Pixel values are discrete (length x width x r256 x g256 x b256) and vertex values are continuous, so that is one major difference.<p>Secondly, there's vastly more labeled image data in the world than 3D data, so creating a CLMP (contrastive language and mesh pairing) model is harder.<p>It's very late, but I may be able to give a much better answer on more of the nuances of 3D generation tomorrow.</p>
]]></description><pubDate>Thu, 25 Aug 2022 08:55:26 +0000</pubDate><link>https://news.ycombinator.com/item?id=32591267</link><dc:creator>MathYouF</dc:creator><comments>https://news.ycombinator.com/item?id=32591267</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32591267</guid></item><item><title><![CDATA[New comment by MathYouF in "Imagen: An AI system that creates photorealistic images from input text"]]></title><description><![CDATA[
<p>Like anyone deep in a field, I know maybe several thousand people who could probably give a better answer, but I figure I'll make an effort to provide one since I don't see any good ones posted yet.<p>The moment everyone knew this was going to be big was in 2019, when StyleGAN came out. It used a lot of tricks, like aligning face features (such as eyes), and kept all its pictures to a single domain (the most famous being faces), but nonetheless, that was the moment everyone in the AI field knew this was going to be big, and so three years ago a lot of big people shifted to this line of research.<p>The four main innovations since then have been:<p>1. Transformers<p>Generalized computation kernels which allow models to consider non-localized relationships between the pixels of an image. Released in 2017, and originally used for language.<p>2. Pixel Patch Encodings<p>Different-resolution semantic and geometric image encodings which represent relationships between image areas better than raw pixels can for the same compute. These allow using Transformers on high-resolution images.<p>3. CLIP<p>Contrastive Language and Image Pairing. Before, the only way we knew to classify an image was as a "face" or "cat" or "ramen". When the genius idea of labeling images with semantically meaningful vectors rather than one-hot encoded classes was revealed, it changed everything in computer vision very quickly, and problems that used to be hard became trivial. Released in 2021.<p>4. Diffusion Models<p>GANs penalize you for making an image which does not seem to be part of an existing dataset, which encourages producing the worst-quality image that still looks like a member of that dataset. Diffusion instead learns to denoise an image; removing noise is perceptually similar to increasing resolution, and people like images that look that way. People with better intuition about diffusion models may be able to add more on why they're superior. I've read all the papers leading up to the latest unCLIP (Dalle2), but it's complicated. Released in 2020, with major improvements to the training process continuously made since then.<p>Hope this was helpful. All of the above were only implemented for images in any real way in the last three years. Putting them all together is something many people only did this year, resulting in DallE, Stable Diffusion, and Imagen.<p>I'm working on doing this for 3D, and later for use cases in AR. 3D generation still hasn't been cracked the way image generation has, but the above will likely contribute to that solution as well. Anyone who's interested in working on that, feel free to message me.</p>
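<p>The diffusion idea in point 4 can be sketched in a few lines (a heavy simplification; real models like unCLIP use U-Net/transformer networks, learned noise schedules, and many timesteps, and the variable names here are illustrative): corrupt a clean sample by blending in Gaussian noise, then train a network to predict that noise so the corruption can be undone.</p>

```python
import math
import random

random.seed(0)

def forward_noise(x0, alpha_bar):
    """One jump of the forward process: blend clean signal with Gaussian noise.
    alpha_bar controls how much of the original signal survives."""
    eps = [random.gauss(0.0, 1.0) for _ in x0]
    xt = [math.sqrt(alpha_bar) * x + math.sqrt(1 - alpha_bar) * e
          for x, e in zip(x0, eps)]
    return xt, eps

def denoise(xt, eps_pred, alpha_bar):
    """Invert the jump, given a noise prediction (here: the exact noise,
    standing in for a perfectly trained network)."""
    return [(x - math.sqrt(1 - alpha_bar) * e) / math.sqrt(alpha_bar)
            for x, e in zip(xt, eps_pred)]

x0 = [0.5, -0.25, 1.0]                    # stand-in for clean image pixels
xt, eps = forward_noise(x0, alpha_bar=0.3)
x0_hat = denoise(xt, eps, alpha_bar=0.3)  # training pushes eps_pred toward eps
print(x0_hat)  # recovers [0.5, -0.25, 1.0] up to float error
```

<p>The training loss is simply the gap between the network's predicted noise and the true eps; at sampling time the model starts from pure noise and applies many small denoising steps instead of this single perfect jump.</p>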
]]></description><pubDate>Thu, 25 Aug 2022 06:56:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=32590482</link><dc:creator>MathYouF</dc:creator><comments>https://news.ycombinator.com/item?id=32590482</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32590482</guid></item><item><title><![CDATA[New comment by MathYouF in "Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise"]]></title><description><![CDATA[
<p>After an excellent all-day workshop at CVPR this year explaining diffusion in more detail, it seemed pretty clear that any noise function could be used. I'm not sure if it should have been obvious, but I saw this paper coming from a mile away after that.<p>I wonder to what degree certain parts of diffusion dictate using certain noise functions, and how much this paper truly challenges how we understand them. Cool to see it researched.<p>Next idea: it seems like a lot of steps could be skipped by using things like momentum at inference time. I'm sure OpenAI has already implemented several clever tricks like that in production for DallE.<p>I'm working on (various, non-diffusion) methods for 2D-drawing-to-3D output right now.</p>
]]></description><pubDate>Mon, 22 Aug 2022 07:51:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=32548531</link><dc:creator>MathYouF</dc:creator><comments>https://news.ycombinator.com/item?id=32548531</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32548531</guid></item><item><title><![CDATA[New comment by MathYouF in "Does my data fit in RAM?"]]></title><description><![CDATA[
<p>What kind of things are easier to debug with lots of RAM and how would you do it?</p>
]]></description><pubDate>Wed, 03 Aug 2022 01:14:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=32326495</link><dc:creator>MathYouF</dc:creator><comments>https://news.ycombinator.com/item?id=32326495</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32326495</guid></item><item><title><![CDATA[New comment by MathYouF in "A rant about the current state of ML (video, @52:10)"]]></title><description><![CDATA[
<p>I'd love a chance to talk to this guy (maybe at a rock-star after-party at NeurIPS this year), because my view is:<p>1. "Either everything is magic or nothing is", and magical statistical tools are currently some of the coolest magic we have harnessed.<p>2. Making realistic pictures of "corgis with sushi" is cool.<p>3. Papers describing architectures which can make ever more realistic or interesting pictures of corgis and sushi are deserving of academic recognition, even if they can't precisely describe how, just as renderers which could do so would be.<p>I've found a lot of papers which primarily study the theory of ML coming out of the UK, yet very few of them end up being of much value to the advancement of the field in terms of applications.<p>Conversely, the people focused on making real applications ("tools") seem to also be innovating on the science.<p>Tesla and Edison contributed massively to the progress of the study and application of electricity, and both spent nearly no time in academia, instead focusing on practical applications in industry. Edison's methods of investigation were said by Tesla (somewhat admiringly) to be entirely empirical. I wonder if the UK's strictly academic approach to ML may be holding it back from making bigger contributions to the field. I'm glad some are trying to be strictly scientific about it, but I'm also glad it's not everyone, because that doesn't seem to be what's delivering the "magic" we all want.</p>
]]></description><pubDate>Fri, 22 Jul 2022 16:50:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=32194175</link><dc:creator>MathYouF</dc:creator><comments>https://news.ycombinator.com/item?id=32194175</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32194175</guid></item><item><title><![CDATA[New comment by MathYouF in "DataRobot employee resigns over stock sales"]]></title><description><![CDATA[
<p>My philosophy is: if you think they're wrong and you don't need those people, then they have poor judgement, and you're better off not working with them; in fact, you could beat them by pursuing similar goals without using that strategy.<p>And if they're right, well, then what's there to be upset about?</p>
]]></description><pubDate>Sun, 17 Jul 2022 21:40:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=32131648</link><dc:creator>MathYouF</dc:creator><comments>https://news.ycombinator.com/item?id=32131648</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32131648</guid></item><item><title><![CDATA[New comment by MathYouF in "Interactive course about “everyday” data science"]]></title><description><![CDATA[
<p>I think the popularity of my post suggests it represents the general sentiment of people who viewed the page more than not, but I do accept that the tone reflects my negative feelings about the perceived credentialism-to-substance ratio more than it does an impartial review of the content. Genuinely, feel free to remove the entirety of my thread if you think it is unproductive; I'm unable to edit it.</p>
]]></description><pubDate>Sat, 16 Jul 2022 19:10:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=32120980</link><dc:creator>MathYouF</dc:creator><comments>https://news.ycombinator.com/item?id=32120980</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32120980</guid></item><item><title><![CDATA[New comment by MathYouF in "Interactive course about “everyday” data science"]]></title><description><![CDATA[
<p>I am indeed bitter (def: anger and disappointment at being treated unfairly; resentment) about people using their credentials, rather than the real merits of their work, to attempt to get a leg up when advertising that work. I covered my dissatisfaction with that in my post.<p>Distill.pub, for example, despite being written by people who worked at OpenAI and Google Brain (and being funded by the latter), makes nearly no mention of either. They've managed to accomplish their goal (creating resources to help people improve their intuition about machine learning) without resorting to those tactics.</p>
]]></description><pubDate>Sat, 16 Jul 2022 18:57:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=32120854</link><dc:creator>MathYouF</dc:creator><comments>https://news.ycombinator.com/item?id=32120854</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=32120854</guid></item></channel></rss>