<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: gyom</title><link>https://news.ycombinator.com/user?id=gyom</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 07 Apr 2026 07:22:53 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=gyom" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by gyom in "Why did Dostoyevsky write Crime and Punishment?"]]></title><description><![CDATA[
<p>Same here. It's a bunch of facts, some context, and more of a rant about the question than any actual attempt at answering it.<p>You could even write an article asking why the original article was written at all, and it might make for more interesting content.</p>
]]></description><pubDate>Mon, 25 Oct 2021 12:23:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=28986980</link><dc:creator>gyom</dc:creator><comments>https://news.ycombinator.com/item?id=28986980</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=28986980</guid></item><item><title><![CDATA[New comment by gyom in "Ideas in statistics that have powered AI"]]></title><description><![CDATA[
<p>You're right that this is spectacularly wrong.<p>I dare not even read the rest of the page just in case my brain accidentally absorbs other bad information like that paragraph about GANs.</p>
]]></description><pubDate>Wed, 07 Jul 2021 21:16:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=27765813</link><dc:creator>gyom</dc:creator><comments>https://news.ycombinator.com/item?id=27765813</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=27765813</guid></item><item><title><![CDATA[New comment by gyom in "Ideas in statistics that have powered AI"]]></title><description><![CDATA[
<p>Part of the cleverness of GANs was to have found a way to train a neural network that generates data without explicitly modeling the probability density.<p>In a stats textbook, when you know that your training data comes from a normal distribution, you can maximize the likelihood wrt the parameters, and then use the fitted distribution for sampling. That's basic theory.<p>In practice, it was very hard to learn a good pdf for experimental data when you had a training set of images. GANs provided a way to bypass this.<p>Of course, people could have said "hey, let's generate samples without maximizing a loglikelihood first", but they didn't know how to do it properly, how to train the network in any other way besides minimizing cross-entropy (which is equivalent to maximizing loglikelihood).<p>Then GANs provided a new loss function that could actually be trained. Total paradigm shift!</p>
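<p>The bypass can be made concrete with a toy sketch (everything below is my own invented 1-D setup, not anything from a real GAN paper: real data is N(4, 1), the generator just shifts noise by a scalar theta, and the discriminator is a single logistic unit). The point is that the generator's only training signal comes through the discriminator; no density of the data is ever written down.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    # numerically clipped logistic function
    return 1.0 / (1.0 + np.exp(-np.clip(s, -30.0, 30.0)))

theta = 0.0          # generator parameter: g(z) = z + theta
w, b = 0.0, 0.0      # discriminator: D(x) = sigmoid(w*x + b)
lr, batch = 0.05, 64

for _ in range(2000):
    real = rng.normal(4.0, 1.0, batch)   # samples from the true distribution
    fake = rng.normal(0.0, 1.0, batch) + theta

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    ds_real = sigmoid(w * real + b) - 1.0
    ds_fake = sigmoid(w * fake + b)
    w -= lr * np.mean(ds_real * real + ds_fake * fake)
    b -= lr * np.mean(ds_real + ds_fake)

    # Generator step: non-saturating loss -log D(g(z)); the gradient
    # reaches theta only through the discriminator's score w*(z+theta)+b.
    ds = sigmoid(w * fake + b) - 1.0
    theta -= lr * np.mean(ds * w)

print(theta)  # drifts toward the real mean, 4.0
```

<p>The generated distribution ends up matching the data even though no likelihood is ever evaluated, which is the whole trick.</p>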
]]></description><pubDate>Wed, 07 Jul 2021 21:12:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=27765782</link><dc:creator>gyom</dc:creator><comments>https://news.ycombinator.com/item?id=27765782</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=27765782</guid></item><item><title><![CDATA[New comment by gyom in "Rethinking the computer ‘desktop’ as a concept"]]></title><description><![CDATA[
<p>I also found that the best system is having the first layer of folder organization be "which period of my time is this from?".<p>Conceptually, it's easier to think of "music from high school" than about the specific mix of subgenres from my playlist back then. Same for documents that I saved. Those ICQ logs from high school are there. They don't belong in the same folder as the stuff I wrote yesterday, even though they could be of a similar nature.</p>
]]></description><pubDate>Wed, 02 Jun 2021 12:38:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=27367977</link><dc:creator>gyom</dc:creator><comments>https://news.ycombinator.com/item?id=27367977</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=27367977</guid></item><item><title><![CDATA[New comment by gyom in "Don't use third party auth to sign in"]]></title><description><![CDATA[
<p>You're setting a very high bar there, and then claiming that losing access to your gmail account isn't worse than that, therefore it's not life changing.<p>Email ends up being the form of online identity for a lot of people, myself included, so that almost every service that I sign up for has my email address as ID. If that email address isn't the ID, it's the preferred way of resetting passwords. I wouldn't be super happy about Facebook being my online ID, nor my cell phone number (see SIM swapping problems).<p>It's life changing in the same way that losing all your personal documents in a fire sets you up for accounting nightmares. Moreover, you're making very light of losing all your pictures. I'm not talking about food pictures, but there's plenty of "me" contained in being able to look at pictures of important events of my life (which is why I don't rely only on cloud backups for that).<p>I don't know what's "life changing" to you, then.</p>
]]></description><pubDate>Sat, 14 Nov 2020 15:22:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=25092788</link><dc:creator>gyom</dc:creator><comments>https://news.ycombinator.com/item?id=25092788</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=25092788</guid></item><item><title><![CDATA[New comment by gyom in "How Academia Resembles a Drug Gang (2013)"]]></title><description><![CDATA[
<p>I'm sorry that that was your PhD experience. It's a pity that enthusiastic students end up there, often because they're not given the right environment to thrive (e.g. a good lab and a supervisor who cares).<p>There isn't much I can say in response, apart from this: it seems to me that, in a parallel universe, you might have had a more fulfilling experience, or you might have cut your losses and walked away sooner.<p>That's the cruel aspect of the PhD. It really seems like a lot of important things are outside of one's control, especially when it comes to important factors in mental health. Nobody starts a PhD with the goal of sinking hours into Reddit and Buffy because they feel awful about their PhD experience.</p>
]]></description><pubDate>Wed, 11 Nov 2020 13:24:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=25058143</link><dc:creator>gyom</dc:creator><comments>https://news.ycombinator.com/item?id=25058143</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=25058143</guid></item><item><title><![CDATA[New comment by gyom in "James Randi Has Died"]]></title><description><![CDATA[
<p>Well, I guess he stopped doing it at some date before 2007. That's why he told an audience of 500 people about it. He sorta "cashed out" by making it a fun story about him being clever, instead of a last magic trick. He must have simply gotten tired of doing this every day and seeing all those cards in the waste basket (or burning them?).<p>If he had indeed tried to pull that trick in 2020, a lot of people would have remembered that he said he was setting it up.</p>
]]></description><pubDate>Sat, 24 Oct 2020 03:17:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=24876672</link><dc:creator>gyom</dc:creator><comments>https://news.ycombinator.com/item?id=24876672</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=24876672</guid></item><item><title><![CDATA[New comment by gyom in "James Randi Has Died"]]></title><description><![CDATA[
<p>I met James Randi around 2007 when he came to campus at UBC to give a talk. He told the following story about a magic trick that he worked on for years, which concerned guessing the timing of his death. I haven't heard him tell that story elsewhere, so now seems like a good time to share it.<p>He said that for a good number of years, every night before going to bed, he would write on a little card that he predicted he would die that night in his sleep. In the morning, he got up and happily threw away the little card. Every day. For many years. His concept was that, on the rare chance that this actually happened to be his last day, people would think that he had pulled the ultimate magic trick. People would not suspect that he wrote this on a little card every day because, well, nobody does that.</p>
]]></description><pubDate>Thu, 22 Oct 2020 01:28:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=24854072</link><dc:creator>gyom</dc:creator><comments>https://news.ycombinator.com/item?id=24854072</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=24854072</guid></item><item><title><![CDATA[New comment by gyom in "The software engineering lifecycle: How we built the new Dropbox Plus"]]></title><description><![CDATA[
<p>I have also had bad experiences with "Backup and Sync", which led me to abandon Google Drive right when I was seriously considering ditching Dropbox.<p>Given Google's reputation for ditching its own products, I guessed this was a side project that some Googlers did, and that it was never part of Google's main strategy to allow people to sync their Google Drive to their local machines. Quite the opposite, actually.<p>My current gripe with Dropbox is that I'd like to be able to pay 4x the "Dropbox Plus" cost in order to get 4x the storage (without having to manage 4 separate accounts). Having 2TB isn't enough, but having "infinite" with Dropbox Business is certainly more than I want.</p>
]]></description><pubDate>Thu, 17 Sep 2020 02:57:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=24500709</link><dc:creator>gyom</dc:creator><comments>https://news.ycombinator.com/item?id=24500709</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=24500709</guid></item><item><title><![CDATA[New comment by gyom in "DeepMind and Google: the battle to control artificial intelligence"]]></title><description><![CDATA[
<p>Nobody is going for absolute certainty here. That bar is too high in any conversation.<p>His point was mostly that, way before you achieve the kind of AGI portrayed in fiction, you'll have semi-intelligent interdependent systems that cause a lot of trouble (like the kind that already happens, to a lesser degree). Those are the ones that we should worry about right now.</p>
]]></description><pubDate>Thu, 14 Mar 2019 19:16:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=19392544</link><dc:creator>gyom</dc:creator><comments>https://news.ycombinator.com/item?id=19392544</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=19392544</guid></item><item><title><![CDATA[New comment by gyom in "No Thanks, Google. I'll Speak for Myself"]]></title><description><![CDATA[
<p>I have a friend who often participated in psychology experiments at Stanford, and he became familiar with the whole procedure of letting the subjects believe that they were interacting with another person via a computer, when in fact they were interacting with a program (it makes everything more standard and easier to analyse).<p>One day he participated in one of those "split or share" experiments, and he was ruthless. Nobody's emotions would be damaged by acting nasty and never sharing with a computer program.<p>Turns out, it probably was a real person on the other side after all. He saw an old woman crying, coming out of an adjacent room after the experiment was over.<p>So, yeah, different social conventions definitely apply.</p>
]]></description><pubDate>Wed, 09 May 2018 19:09:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=17032880</link><dc:creator>gyom</dc:creator><comments>https://news.ycombinator.com/item?id=17032880</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17032880</guid></item><item><title><![CDATA[New comment by gyom in "No Thanks, Google. I'll Speak for Myself"]]></title><description><![CDATA[
<p>> Most people I know have moved on from using email for personal communication and only use it for business.<p>That experience completely differs from mine. Maybe I just prefer email to text or Facebook messages, but the same principle applies wherever you write to friends or family. Wouldn't you get the same problem, just elsewhere?<p>Responding to my mother's birthday wishes does seem like a different kind of activity than autocompleting C# code, even though both can be executed with autocomplete to get a good valid output.</p>
]]></description><pubDate>Wed, 09 May 2018 19:03:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=17032827</link><dc:creator>gyom</dc:creator><comments>https://news.ycombinator.com/item?id=17032827</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=17032827</guid></item><item><title><![CDATA[New comment by gyom in "The Tyranny of Convenience"]]></title><description><![CDATA[
<p>The point would still stand even if Starbucks were indeed instantaneous. There is something nice about being able to spend 2-5 minutes preparing your own coffee. Sometimes you don't want to do it, but I think the point here is that if you have the convenient Starbucks alternative right in your face, you might "accidentally" surrender the pleasure of brewing your own coffee to the laziness of the moment.</p>
]]></description><pubDate>Sun, 04 Mar 2018 16:07:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=16515157</link><dc:creator>gyom</dc:creator><comments>https://news.ycombinator.com/item?id=16515157</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=16515157</guid></item><item><title><![CDATA[New comment by gyom in "Lessons from Optics, the Other Deep Learning"]]></title><description><![CDATA[
<p>One of the problems with coming up with a good theory is that, at the end of the day, we're building a system that's particularly suited to a certain kind of pattern. If you're building a facial-recognition convnet, there is something about the dataset of faces that is going to influence what works and what doesn't.<p>When you're building digital circuits, they're expected not to care about what the bits mean, or about which patterns are more likely. They work for all possible inputs, with equal quality.<p>There are things in common between how you would process faces and how you would recognize other visual objects, and that's why there are design patterns such as "convolutional layers come before fully-connected layers".<p>In a way, the "no free lunch" theorem says that you always pay a price when you specialize to a certain kind of pattern. It comes at the detriment of other patterns. So, any kind of stack of theories on ML/DL is going to be incomplete unless you say something about the nature of your data/patterns.<p>(That doesn't mean that we can't say anything useful about DL, but it puts a certain damper on those efforts.)</p>
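<p>The "no free lunch" point can actually be checked by brute force on a tiny toy problem (the learner and the train/test split below are arbitrary choices of mine): average any fixed learner over every possible target function, and its accuracy on the points it never saw is exactly chance.</p>

```python
from itertools import product

# 8 input points (think: all 3-bit strings); the learner sees labels
# for 6 of them and must predict the remaining 2.
inputs = list(range(8))
train, test = inputs[:6], inputs[6:]

total, count = 0.0, 0
for labels in product([0, 1], repeat=8):   # every possible target function
    # A concrete learner: predict the majority label seen in training.
    majority = int(sum(labels[i] for i in train) * 2 > len(train))
    acc = sum(majority == labels[i] for i in test) / len(test)
    total += acc
    count += 1

print(total / count)  # 0.5 -- chance, no matter which learner you plug in
```

<p>Swapping in any other rule for <code>majority</code> gives the same 0.5 average, which is exactly the sense in which a theory of learning has to say something about the data.</p>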
]]></description><pubDate>Tue, 13 Feb 2018 15:16:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=16367717</link><dc:creator>gyom</dc:creator><comments>https://news.ycombinator.com/item?id=16367717</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=16367717</guid></item><item><title><![CDATA[New comment by gyom in "Death by Derivatives"]]></title><description><![CDATA[
<p>Here is something to consider. Assume for a moment that people use rare Pokémon cards as a form of payment, because everyone plays Pokémon. You can buy goods with Pokémon cards. But these cards aren't being printed anymore and the supply never increases.<p>If I issue some kind of "IOU" certificate, redeemable for a rare Charizard card, and people trade those IOUs instead of redeeming them (a good thing, because I don't have those Charizard cards in my possession at the moment), then I'm basically expanding the supply of "things that people trade and commonly use to pay for goods" (i.e. money/currency).<p>I didn't create any new Charizard cards, but as long as people don't ask me to redeem the IOUs, it's roughly as though the market had 10 more copies of that card circulating.<p>There isn't more "value/wealth" created, but there is now more "money/currency" circulating. You can imagine how something similar happens with derivatives.</p>
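<p>The arithmetic behind the analogy is trivial, but spelling it out makes the point (the 100 is a number I made up; the 10 IOUs are the ones from above):</p>

```python
# Issuing claims on a fixed asset expands what circulates as money,
# while the supply of the asset itself never changes.
cards_in_existence = 100   # fixed: no new Charizards are printed
ious_issued = 10           # paper claims on a card, traded as money
issuer_reserves = 0        # the issuer holds no actual cards

circulating_money = cards_in_existence + ious_issued
print(circulating_money)   # 110: more "money", same number of cards

# It works only while holders keep trading the IOUs; if everyone
# redeems at once, the issuer is short by this many cards:
shortfall = ious_issued - issuer_reserves
print(shortfall)           # 10
```
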
]]></description><pubDate>Thu, 11 Jan 2018 16:56:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=16125409</link><dc:creator>gyom</dc:creator><comments>https://news.ycombinator.com/item?id=16125409</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=16125409</guid></item><item><title><![CDATA[New comment by gyom in "Ashamed to work in Silicon Valley: how techies became the new bankers"]]></title><description><![CDATA[
<p>Sure, the rules are simple, but if you listen to the guy who says that he grew up here, he explains that the field has NEVER been reserved. It’s easy to imagine how some bureaucrat updated the rules and accidentally destroyed a nice social space by turning it into a field that people pay for, use, and leave (as opposed to a more spontaneous meeting place for young people who aren’t that organized).<p>You can’t fault the Dropbox people for making a reservation and expecting it to be valid. They’re a bit clueless in how they respond, though, not realizing that those rules clash with the unofficial social dynamics happening there.<p>(Semi-related: that’s why we might feel that banks are assholes for foreclosing on houses that belong to deployed soldiers. Legally they can do it, but it sounds like the shittiest application of the law.)</p>
]]></description><pubDate>Wed, 08 Nov 2017 23:31:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=15658192</link><dc:creator>gyom</dc:creator><comments>https://news.ycombinator.com/item?id=15658192</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=15658192</guid></item><item><title><![CDATA[New comment by gyom in "Technical Book on Deep Learning"]]></title><description><![CDATA[
<p>I had not caught that note from the author. Thanks for pointing it out.</p>
]]></description><pubDate>Fri, 08 Sep 2017 03:32:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=15197717</link><dc:creator>gyom</dc:creator><comments>https://news.ycombinator.com/item?id=15197717</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=15197717</guid></item><item><title><![CDATA[New comment by gyom in "Technical Book on Deep Learning"]]></title><description><![CDATA[
<p>It's cool to see that much dedication. It's useful when people take the time to summarize knowledge in a book that can serve as a reference.<p>But ... I have the feeling that the author, who is relatively new to the field (by his own admission), expanded a lot of formulas and made certain parts of the theory more complicated than they need to be.<p>Look around page 60. There are formulas with 6 summation signs in front of them, with all kinds of little indices floating around. How about page 37?<p>In a way, the whole point of the chain rule (and of software libraries that implement it) is that you can stay in "math world" to do the reasoning, and not think about the job of managing the computation.<p>Same idea with expressing as much as possible in terms of linear algebra primitives. Matrix multiplication is easier to understand when it's not broken apart into sums whose indices you have to track.</p>
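<p>To make the contrast concrete, here is the same tiny computation written both ways (the shapes are invented for the example):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
x = rng.normal(size=4)

# Index-level view: y_i = sum_j W_ij x_j, with the bookkeeping exposed.
y_indices = np.array([sum(W[i, j] * x[j] for j in range(4))
                      for i in range(3)])

# "Math world" view: y = W x.
y_matrix = W @ x

print(np.allclose(y_indices, y_matrix))  # True
```

<p>Both compute the same thing, but only the second form reads like the math you reason with.</p>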
]]></description><pubDate>Thu, 07 Sep 2017 21:42:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=15196014</link><dc:creator>gyom</dc:creator><comments>https://news.ycombinator.com/item?id=15196014</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=15196014</guid></item><item><title><![CDATA[New comment by gyom in "AlphaGo beats Lee Sedol 3-0 [video]"]]></title><description><![CDATA[
<p>That's already a known method to transfer "knowledge" from one model to another. I should double-check before quoting a paper, but I think this one talks about it (<a href="http://arxiv.org/abs/1503.02531" rel="nofollow">http://arxiv.org/abs/1503.02531</a>).<p>You train many models. Then you "distill" their predictions into one model by using the multiple predictions (from many models) as targets (for the single model trained afterwards).<p>You're right to point out that humans don't do that.<p>I think it would be "cheating" to train BetaGo on AlphaGo, for the purposes of that experiment. The goal would be to have some kind of "clean room" where people fumble around.<p>Of course, you can also run the other experiment to see how fast you can bootstrap BetaGo from AlphaGo. That's also interesting.</p>
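<p>A rough sketch of that mechanic, on a 1-D toy problem I made up (the "teachers" are hand-built scorers rather than trained networks, purely for illustration): average the teachers' soft predictions, then fit a single student to those soft targets instead of hard labels.</p>

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-3.0, 3.0, 200)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Three "teachers" with slightly different decision boundaries.
teachers = [lambda x, a=a, c=c: sigmoid(a * (x - c))
            for a, c in [(2.0, 0.1), (1.5, -0.2), (3.0, 0.0)]]

# Soft targets: the ensemble's averaged predictions, not hard 0/1 labels.
soft = np.mean([t(x) for t in teachers], axis=0)

# Student: one logistic unit, trained by gradient descent on the
# cross-entropy against the soft targets (gradient is p - soft).
w, b = 0.0, 0.0
for _ in range(3000):
    p = sigmoid(w * x + b)
    w -= 0.5 * np.mean((p - soft) * x)
    b -= 0.5 * np.mean(p - soft)

student = sigmoid(w * x + b)
print(np.max(np.abs(student - soft)))  # small: one model mimics the ensemble
```

<p>The student never sees the original labels at all; everything it knows comes through the teachers' outputs, which is the sense in which knowledge gets "transferred".</p>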
]]></description><pubDate>Sat, 12 Mar 2016 19:29:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=11274120</link><dc:creator>gyom</dc:creator><comments>https://news.ycombinator.com/item?id=11274120</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=11274120</guid></item><item><title><![CDATA[New comment by gyom in "TensorFlow: open-source library for machine intelligence"]]></title><description><![CDATA[
<p>If you want to train neural nets, you can either rewrite everything from scratch and get a bug-ridden, sub-optimal implementation, or you can use an off-the-shelf library.<p>The problem is that there are about 3-5 alternatives out there, and none of them is mature enough or convincing enough to dominate. The field changes so fast that it's easy for them to become obsolete.<p>What you're seeing here is the enthusiasm of people who really want to get a good tool with proper support, and to be able to stick with it. I'm still not sure whether TensorFlow is that tool; that depends on what happens to it over the coming years.</p>
]]></description><pubDate>Mon, 09 Nov 2015 17:17:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=10534281</link><dc:creator>gyom</dc:creator><comments>https://news.ycombinator.com/item?id=10534281</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=10534281</guid></item></channel></rss>