<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: memexy</title><link>https://news.ycombinator.com/user?id=memexy</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 20 Apr 2026 21:47:47 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=memexy" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by memexy in "PG: The biggest source of stress for me at YC was running HN"]]></title><description><![CDATA[
<p>I merely outlined what I would want if I were a moderator. I would rather receive email with statistical analysis than be compared to Hitler and Stalin without any data to back it up. It would be far funnier if someone proved statistically that I was Hitler and Stalin at the same time; they would have to go to a lot of trouble to do that, and if they managed it, that would be high art.<p>Any complaint without data to back it up would go straight into the trash pile.<p>In any case, it's a worthwhile experiment because it can't make your life worse. I can't really imagine anything worse than being compared to Hitler and Stalin, especially if all that person is doing is venting their anger. I'd want to avoid being the target of that anger, and I would require mathematical analysis from anyone who claimed to be justifiably angry, to show the actual justification for it. Without data you will continue to get hate mail that's nothing more than people making up a story to justify their own anger. You have already noticed the personal-narrative angle, so I'm not telling you anything new here. The data takes away the "personal" part of the narrative, which I think is an improvement.</p>
]]></description><pubDate>Sun, 12 Jul 2020 02:45:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=23808306</link><dc:creator>memexy</dc:creator><comments>https://news.ycombinator.com/item?id=23808306</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23808306</guid></item><item><title><![CDATA[New comment by memexy in "PG: The biggest source of stress for me at YC was running HN"]]></title><description><![CDATA[
<p>Yes. That's what I mean. If there is an API, then we can use mathematical models to answer questions about bias or the lack thereof.<p>I also don't think it's possible to have any forum without bias, so I'm certain the data will indicate bias, but at least it will be transparent and obvious, so people can point to actual data to make their case one way or the other. It's hard to improve a situation if there is no data to point to and argue about. Without data, people just tell stories about whatever makes the most sense from whatever sparse observations they have managed to reverse engineer personally.</p>
]]></description><pubDate>Sun, 12 Jul 2020 02:08:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=23808088</link><dc:creator>memexy</dc:creator><comments>https://news.ycombinator.com/item?id=23808088</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23808088</guid></item><item><title><![CDATA[New comment by memexy in "PG: The biggest source of stress for me at YC was running HN"]]></title><description><![CDATA[
<p>I think there is a solution to this problem. If moderator decisions are made and recorded publicly, then the data can at least be analyzed objectively. If there is indeed a bias, someone should be able to sit down, do the statistical analysis, and show that "Yes, X type of stories / comments are more consistently flagged / removed / downvoted / etc." or "No, there is actually no bias in this instance".<p>I think there is contention right now because moderator decisions are opaque, so people come up with their own narratives. Without actual data there is no way to tell what type of bias exists and why, so it's easy to make up a personal narrative that isn't backed by any actual data.<p>User flagging is also currently opaque, and a similar argument applies. If I had to provide a reason for flagging something and knew that my name would be publicly associated with the items I've flagged, I would be much more careful. Right now, flagging anything is consequence-free because it is opaque.</p>
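To make the "statistical analysis" idea concrete, here is a minimal sketch of the kind of test a public moderation log would support. The function name and all counts are hypothetical; with real data they would be tallied from the published log.

```javascript
// Two-proportion z-test: are category-A stories flagged at a different
// rate than category-B stories? Inputs are hypothetical tallies.
function flagBiasZTest(flaggedA, totalA, flaggedB, totalB) {
  const pA = flaggedA / totalA;
  const pB = flaggedB / totalB;
  // Pooled flag rate under the null hypothesis of no difference.
  const pooled = (flaggedA + flaggedB) / (totalA + totalB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / totalA + 1 / totalB));
  const z = (pA - pB) / se;
  const p = Math.min(1, 2 * (1 - phi(Math.abs(z)))); // two-sided p-value
  return { z, p };
}

// Standard normal CDF for x >= 0 (Abramowitz-Stegun 26.2.17 approximation).
function phi(x) {
  const t = 1 / (1 + 0.2316419 * x);
  const poly = t * (0.319381530 + t * (-0.356563782 +
               t * (1.781477937 + t * (-1.821255978 + t * 1.330274429))));
  return 1 - 0.39894228 * Math.exp(-x * x / 2) * poly;
}
```

With a real log, `flagBiasZTest` could be fed counts grouped however the analyst likes: by topic, by submitter, by time of day.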
]]></description><pubDate>Sun, 12 Jul 2020 01:42:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=23807944</link><dc:creator>memexy</dc:creator><comments>https://news.ycombinator.com/item?id=23807944</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23807944</guid></item><item><title><![CDATA[New comment by memexy in "The Case for Causal AI"]]></title><description><![CDATA[
<p>> What can be known in a system without rigor? That’s the question to make rigorous, I think.<p>Who is working on making that rigorous?</p>
]]></description><pubDate>Sun, 12 Jul 2020 01:28:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=23807851</link><dc:creator>memexy</dc:creator><comments>https://news.ycombinator.com/item?id=23807851</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23807851</guid></item><item><title><![CDATA[Portal : Current Events]]></title><description><![CDATA[
<p>Article URL: <a href="https://en.wikipedia.org/wiki/Portal:Current_events">https://en.wikipedia.org/wiki/Portal:Current_events</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=23807662">https://news.ycombinator.com/item?id=23807662</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Sun, 12 Jul 2020 00:54:14 +0000</pubDate><link>https://en.wikipedia.org/wiki/Portal:Current_events</link><dc:creator>memexy</dc:creator><comments>https://news.ycombinator.com/item?id=23807662</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23807662</guid></item><item><title><![CDATA[New comment by memexy in "The Case for Causal AI"]]></title><description><![CDATA[
<p>Thanks.</p>
]]></description><pubDate>Sun, 12 Jul 2020 00:38:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=23807564</link><dc:creator>memexy</dc:creator><comments>https://news.ycombinator.com/item?id=23807564</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23807564</guid></item><item><title><![CDATA[New comment by memexy in "The Case for Causal AI"]]></title><description><![CDATA[
<p>Interesting. I wonder if someone has tried to combine the two. I guess modern deep reinforcement learning is one such combination because it combines feedback (reinforcement) and probabilistic descriptions but maybe there are other interesting combinations of probability, causality, and feedback.</p>
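As a toy illustration of the reinforcement half of that combination, here is a minimal sketch: tabular Q-learning on a three-state chain, where action selection is probabilistic (epsilon-greedy) and learning is driven by reward feedback. The state space and all constants are illustrative assumptions, not anything from the thread.

```javascript
// Tabular Q-learning on a 3-state chain (states 0..2; actions 0=left, 1=right).
// Reaching state 2 yields reward 1 and ends the episode.
function trainChain(episodes = 500, alpha = 0.5, gamma = 0.9, eps = 0.2) {
  const Q = [[0, 0], [0, 0], [0, 0]]; // Q[state][action]
  for (let ep = 0; ep < episodes; ep++) {
    let s = 0;
    while (s !== 2) {
      // Probabilistic choice: explore with probability eps, else act greedily.
      const a = Math.random() < eps
        ? (Math.random() < 0.5 ? 0 : 1)
        : (Q[s][1] >= Q[s][0] ? 1 : 0);
      const s2 = a === 1 ? s + 1 : Math.max(0, s - 1);
      const r = s2 === 2 ? 1 : 0;
      const best = Math.max(Q[s2][0], Q[s2][1]);
      // Feedback: move the estimate toward reward plus discounted future value.
      Q[s][a] += alpha * (r + gamma * best - Q[s][a]);
      s = s2;
    }
  }
  return Q;
}
```

The learned values converge toward moving right everywhere, which is the closed loop the comment describes: a probabilistic policy reshaped by reward feedback.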
]]></description><pubDate>Sun, 12 Jul 2020 00:35:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=23807545</link><dc:creator>memexy</dc:creator><comments>https://news.ycombinator.com/item?id=23807545</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23807545</guid></item><item><title><![CDATA[New comment by memexy in "Refined Hacker News"]]></title><description><![CDATA[
<p>I think adding "Most Favorited" would create a popularity contest, and people would start looking for ways to game the system. I don't think favorites should have metrics associated with them, because as soon as metrics are introduced, people will try to optimize them.<p>Now that I know comments can be favorited, I plan to bookmark comments that include useful reference information on topics I find interesting. Adding counters for how many times a comment was favorited wouldn't really help with that use case, because I doubt anyone else cares about collecting useful references, so my favorites would never make it to the "most favorited" list. I personally don't care whether I make the list, but I'm certain some people would care, and they would start playing a popularity contest instead of favoriting information that is actually useful to them.</p>
]]></description><pubDate>Sun, 12 Jul 2020 00:17:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=23807432</link><dc:creator>memexy</dc:creator><comments>https://news.ycombinator.com/item?id=23807432</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23807432</guid></item><item><title><![CDATA[New comment by memexy in "The Case for Causal AI"]]></title><description><![CDATA[
<p>But why is that a causal explanation? If I can write down a simulation of planetary motion then that doesn't necessarily explain the causal mechanism behind why the planets actually move. In fact, there are simulations for planetary motion and none of them are causal explanations because they don't actually move the planets.</p>
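For concreteness, here is a minimal sketch of the kind of simulation the comment refers to: semi-implicit Euler integration of one body in a central gravitational field (units, constants, and the function name are arbitrary assumptions). It reproduces an orbit numerically while, as the comment notes, moving nothing.

```javascript
// Semi-implicit Euler integration of a circular orbit, with GM = 1.
function orbit(steps, dt) {
  let x = 1, y = 0;   // start at radius 1
  let vx = 0, vy = 1; // circular-orbit speed for GM = 1 at r = 1
  for (let i = 0; i < steps; i++) {
    const r3 = Math.pow(x * x + y * y, 1.5);
    vx += dt * (-x / r3); // update velocity first (semi-implicit)
    vy += dt * (-y / r3);
    x += dt * vx;         // then position, using the new velocity
    y += dt * vy;
  }
  return { x, y, r: Math.hypot(x, y) };
}
```

The code predicts positions from initial conditions; nothing in it is a causal mechanism for the motion it predicts, which is the distinction being drawn.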
]]></description><pubDate>Sun, 12 Jul 2020 00:06:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=23807373</link><dc:creator>memexy</dc:creator><comments>https://news.ycombinator.com/item?id=23807373</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23807373</guid></item><item><title><![CDATA[Aristotle, Kant, and Evolution (Summary and Notes)]]></title><description><![CDATA[
<p>Article URL: <a href="https://medium.com/@markmulvey/awakening-from-the-meaning-crisis-by-john-vervaeke-ep-6-summary-notes-ad73481beba0">https://medium.com/@markmulvey/awakening-from-the-meaning-crisis-by-john-vervaeke-ep-6-summary-notes-ad73481beba0</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=23807130">https://news.ycombinator.com/item?id=23807130</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 11 Jul 2020 23:28:19 +0000</pubDate><link>https://medium.com/@markmulvey/awakening-from-the-meaning-crisis-by-john-vervaeke-ep-6-summary-notes-ad73481beba0</link><dc:creator>memexy</dc:creator><comments>https://news.ycombinator.com/item?id=23807130</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23807130</guid></item><item><title><![CDATA[New comment by memexy in "The Case for Causal AI"]]></title><description><![CDATA[
<p>Thanks.</p>
]]></description><pubDate>Sat, 11 Jul 2020 21:41:11 +0000</pubDate><link>https://news.ycombinator.com/item?id=23806298</link><dc:creator>memexy</dc:creator><comments>https://news.ycombinator.com/item?id=23806298</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23806298</guid></item><item><title><![CDATA[Exposing misleading argumentation techniques reduces their influence]]></title><description><![CDATA[
<p>Article URL: <a href="https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0175799">https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0175799</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=23806096">https://news.ycombinator.com/item?id=23806096</a></p>
<p>Points: 3</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 11 Jul 2020 21:18:01 +0000</pubDate><link>https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0175799</link><dc:creator>memexy</dc:creator><comments>https://news.ycombinator.com/item?id=23806096</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23806096</guid></item><item><title><![CDATA[New comment by memexy in "The Case for Causal AI"]]></title><description><![CDATA[
<p>The article outlines two approaches to <i>causal AI</i>:<p>> There are two approaches to causal AI that are based on long-known principles: the <i>potential outcomes framework</i> and <i>causal graph models</i>. Both approaches make it possible to test the effects of a potential intervention using real-world data. What makes them AI are the powerful underlying algorithms used to reveal the causal patterns in large data sets. But they differ in the number of potential causes that they can test for.<p>Does anyone have references and tutorials for either approach?</p>
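Pending better references, here is a minimal sketch of the core idea behind the potential outcomes framework (not from the article; the data and true effect size below are simulated assumptions): each unit has two potential outcomes, only one of which is ever observed, and under randomized assignment a difference in observed means estimates the average treatment effect.

```javascript
// Potential outcomes, minimal version: each unit has y1 (if treated) and
// y0 (if not); we only observe one. Under random assignment,
// mean(treated outcomes) - mean(control outcomes) estimates E[y1 - y0].
function estimateATE(units) {
  const mean = xs => xs.reduce((a, b) => a + b, 0) / xs.length;
  const treated = units.filter(u => u.treated).map(u => u.y);
  const control = units.filter(u => !u.treated).map(u => u.y);
  return mean(treated) - mean(control);
}

// Simulated randomized experiment with a true treatment effect of +2.
function simulate(n) {
  const units = [];
  for (let i = 0; i < n; i++) {
    const baseline = Math.random() * 10;  // the unit's y0
    const treated = Math.random() < 0.5;  // randomized assignment
    units.push({ treated, y: treated ? baseline + 2 : baseline });
  }
  return units;
}
```

The estimate recovers the built-in effect because randomization breaks the link between assignment and baseline; observational data would need the heavier machinery the article alludes to.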
]]></description><pubDate>Sat, 11 Jul 2020 21:06:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=23805988</link><dc:creator>memexy</dc:creator><comments>https://news.ycombinator.com/item?id=23805988</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23805988</guid></item><item><title><![CDATA[New comment by memexy in "The Case for Causal AI"]]></title><description><![CDATA[
<p>How does having a program / algorithm and checking it on various input values help with understanding causality?</p>
]]></description><pubDate>Sat, 11 Jul 2020 21:04:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=23805970</link><dc:creator>memexy</dc:creator><comments>https://news.ycombinator.com/item?id=23805970</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23805970</guid></item><item><title><![CDATA[New comment by memexy in "The Case for Causal AI"]]></title><description><![CDATA[
<p>I guess it's tricky because the real world is full of feedback loops. If you want a causal model for fake news then your model needs to include some representation of incentives for ad revenue and clickbait. How does the causal inference framework handle feedback loops?</p>
]]></description><pubDate>Sat, 11 Jul 2020 21:03:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=23805951</link><dc:creator>memexy</dc:creator><comments>https://news.ycombinator.com/item?id=23805951</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23805951</guid></item><item><title><![CDATA[The Yoneda lemma in the category of matrices [video]]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.youtube.com/watch?v=SsgEvrDFJsM">https://www.youtube.com/watch?v=SsgEvrDFJsM</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=23805344">https://news.ycombinator.com/item?id=23805344</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 11 Jul 2020 19:55:43 +0000</pubDate><link>https://www.youtube.com/watch?v=SsgEvrDFJsM</link><dc:creator>memexy</dc:creator><comments>https://news.ycombinator.com/item?id=23805344</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23805344</guid></item><item><title><![CDATA[New comment by memexy in "Slate Star Codex and Silicon Valley’s War Against the Media"]]></title><description><![CDATA[
<p>Thanks.</p>
]]></description><pubDate>Sat, 11 Jul 2020 19:12:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=23804889</link><dc:creator>memexy</dc:creator><comments>https://news.ycombinator.com/item?id=23804889</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23804889</guid></item><item><title><![CDATA[New comment by memexy in "Show HN: Free multi-channel portal to create quiz/test"]]></title><description><![CDATA[
<p>No problem.</p>
]]></description><pubDate>Sat, 11 Jul 2020 19:05:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=23804811</link><dc:creator>memexy</dc:creator><comments>https://news.ycombinator.com/item?id=23804811</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23804811</guid></item><item><title><![CDATA[New comment by memexy in "Slate Star Codex and Silicon Valley’s War Against the Media"]]></title><description><![CDATA[
<p>> I am sorry that <i>you</i> consider genuine attempts to get real answers to be trolling with no explanation as to why.<p>That's shifting the blame: you're moving the responsibility and consequences of your actions onto someone else. That is not how an apology begins. An apology begins: "<i>I</i> am sorry. <i>I</i> will reflect on and consider the feedback given and do better next time. Please feel free to give further feedback if <i>you feel like it</i>, and <i>I will consider</i> it and improve my behavior".</p>
]]></description><pubDate>Sat, 11 Jul 2020 19:02:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=23804775</link><dc:creator>memexy</dc:creator><comments>https://news.ycombinator.com/item?id=23804775</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23804775</guid></item><item><title><![CDATA[New comment by memexy in "Show HN: Free multi-channel portal to create quiz/test"]]></title><description><![CDATA[
<p>I just visited the web page and saw the following message:<p>> Internet Explorer not supported<p>I'm using Firefox. It would be better to perform browser detection and show the message only if you detect that the visitor actually is using Internet Explorer; otherwise it looks like something is wrong with their browser.<p>Here's a Stack Overflow answer on how to perform browser detection with JavaScript: <a href="https://stackoverflow.com/questions/2400935/browser-detection-in-javascript" rel="nofollow">https://stackoverflow.com/questions/2400935/browser-detectio...</a>.</p>
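A minimal sketch of the user-agent check the linked answer describes (the function name and exact patterns are illustrative assumptions; UA sniffing is brittle, so feature detection is usually the safer choice):

```javascript
// Rough user-agent sniffing for gating an "unsupported browser" notice.
// Order matters: Chrome and Edge UA strings also contain "Safari",
// and IE 11 identifies itself via "Trident" rather than "MSIE".
function detectBrowser(ua) {
  if (/Trident|MSIE/.test(ua)) return "ie";
  if (/Firefox\//.test(ua)) return "firefox";
  if (/Edg\//.test(ua)) return "edge";
  if (/Chrome\//.test(ua)) return "chrome";
  if (/Safari\//.test(ua)) return "safari";
  return "unknown";
}

// In the page, the warning would then be shown only to the affected users:
// if (detectBrowser(navigator.userAgent) === "ie") showUnsupportedBanner();
```

UA strings are easily spoofed, which is another reason the notice should be gated on the specific APIs the site actually needs rather than on browser identity.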
]]></description><pubDate>Sat, 11 Jul 2020 18:50:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=23804652</link><dc:creator>memexy</dc:creator><comments>https://news.ycombinator.com/item?id=23804652</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23804652</guid></item></channel></rss>