<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: tabacof</title><link>https://news.ycombinator.com/user?id=tabacof</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 04 May 2026 08:40:05 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=tabacof" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[Slower Feels Smarter? Experimenting with AI Agent Latency]]></title><description><![CDATA[
<p>Article URL: <a href="https://fin.ai/research/does-slower-seem-smarter-rethinking-latency-in-ai-agents/">https://fin.ai/research/does-slower-seem-smarter-rethinking-latency-in-ai-agents/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43717901">https://news.ycombinator.com/item?id=43717901</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Thu, 17 Apr 2025 15:00:11 +0000</pubDate><link>https://fin.ai/research/does-slower-seem-smarter-rethinking-latency-in-ai-agents/</link><dc:creator>tabacof</dc:creator><comments>https://news.ycombinator.com/item?id=43717901</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43717901</guid></item><item><title><![CDATA[How I lost 1000€ betting on CS:GO]]></title><description><![CDATA[
<p>I wrote two blog posts based on my experience betting on CS:GO in 2019.<p>The first post covers the following topics:<p>* What is your edge?<p>* Financial decision-making with ML<p>* One bet: Expected profits and decision rule<p>* Multiple bets: The Kelly criterion<p>* Probability calibration<p>* Winner’s curse<p>Link: https://tabacof.github.io/posts/csgo_betting_with_ml_part_1/csgo_betting_with_ml_part_1.html<p>The second post covers the following topics:<p>* CS:GO basics<p>* Data scraping<p>* Feature engineering<p>* TrueSkill<p>* Dataset<p>* Modelling<p>* Evaluation<p>* Backtesting<p>* Why I lost 1000 euros<p>Link: https://tabacof.github.io/posts/csgo_betting_with_ml_part_2/csgo_betting_with_ml_part_2.html<p>I hope they can be useful. All the code and datasets are freely available on GitHub. Let me know if you have any feedback!</p>
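<p>A minimal sketch of the Kelly criterion mentioned above (illustrative only, not the code from the posts; the fractional-Kelly <code>scale</code> parameter is my own addition to dampen estimation error):</p>

```python
# Illustrative sketch: fractional Kelly position sizing for one binary bet,
# given a model-estimated win probability and the bookmaker's decimal odds.

def kelly_fraction(p_win: float, decimal_odds: float, scale: float = 0.5) -> float:
    """Return the fraction of bankroll to stake on this bet.

    p_win: model-estimated probability of winning.
    decimal_odds: total payout per unit staked, including the stake.
    scale: fractional-Kelly multiplier (my own addition, to hedge model error).
    """
    b = decimal_odds - 1.0                    # net odds: profit per unit staked
    f = (b * p_win - (1.0 - p_win)) / b       # full Kelly: (b*p - q) / b
    return max(0.0, scale * f)                # never stake when the edge is non-positive
```

<p>For example, with a 55% win probability at decimal odds of 2.0, full Kelly stakes 10% of the bankroll and the half-Kelly sketch above stakes 5%; any bet with a non-positive edge is skipped.</p>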
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=40955322">https://news.ycombinator.com/item?id=40955322</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 13 Jul 2024 17:11:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=40955322</link><dc:creator>tabacof</dc:creator><comments>https://news.ycombinator.com/item?id=40955322</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40955322</guid></item><item><title><![CDATA[New comment by tabacof in "I lost 1000€ betting on CS:GO – Foundations"]]></title><description><![CDATA[
<p>This post covers the foundations of e-sports betting with machine learning: financial decision-making, expected profits of a bet, multiple bets with the Kelly criterion, probability calibration, and the winner’s curse.</p>
]]></description><pubDate>Thu, 04 Jan 2024 09:04:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=38864767</link><dc:creator>tabacof</dc:creator><comments>https://news.ycombinator.com/item?id=38864767</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38864767</guid></item><item><title><![CDATA[I lost 1000€ betting on CS:GO – Foundations]]></title><description><![CDATA[
<p>Article URL: <a href="https://tabacof.github.io/posts/csgo_betting_with_ml_part_1/csgo_betting_with_ml_part_1.html">https://tabacof.github.io/posts/csgo_betting_with_ml_part_1/csgo_betting_with_ml_part_1.html</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=38864766">https://news.ycombinator.com/item?id=38864766</a></p>
<p>Points: 2</p>
<p># Comments: 1</p>
]]></description><pubDate>Thu, 04 Jan 2024 09:04:22 +0000</pubDate><link>https://tabacof.github.io/posts/csgo_betting_with_ml_part_1/csgo_betting_with_ml_part_1.html</link><dc:creator>tabacof</dc:creator><comments>https://news.ycombinator.com/item?id=38864766</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=38864766</guid></item><item><title><![CDATA[New comment by tabacof in "Ask HN: Could you share your personal blog here?"]]></title><description><![CDATA[
<p>I started this blog recently, though some posts were written a few years ago (one even reached HN front page!): <a href="https://tabacof.github.io/" rel="nofollow noreferrer">https://tabacof.github.io/</a></p>
]]></description><pubDate>Tue, 04 Jul 2023 16:45:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=36589157</link><dc:creator>tabacof</dc:creator><comments>https://news.ycombinator.com/item?id=36589157</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36589157</guid></item><item><title><![CDATA[New comment by tabacof in "Real-Time ML Models with Serverless AWS"]]></title><description><![CDATA[
<p>I wrote the blog post above based on lessons learned while building an MLOps solution that uses AWS Lambda for machine-learning model deployment.<p>Tl;dr: The post shows how to deploy a pickled scikit-learn model with Lambda and how to create a POST endpoint with API Gateway.<p>Any feedback is welcome!</p>
]]></description><pubDate>Mon, 05 Jun 2023 15:49:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=36198058</link><dc:creator>tabacof</dc:creator><comments>https://news.ycombinator.com/item?id=36198058</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36198058</guid></item><item><title><![CDATA[Real-Time ML Models with Serverless AWS]]></title><description><![CDATA[
<p>Article URL: <a href="https://tabacof.github.io/posts/serverless_model_deployment/serverless_model_deployment.html">https://tabacof.github.io/posts/serverless_model_deployment/serverless_model_deployment.html</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=36198057">https://news.ycombinator.com/item?id=36198057</a></p>
<p>Points: 1</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 05 Jun 2023 15:49:37 +0000</pubDate><link>https://tabacof.github.io/posts/serverless_model_deployment/serverless_model_deployment.html</link><dc:creator>tabacof</dc:creator><comments>https://news.ycombinator.com/item?id=36198057</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36198057</guid></item><item><title><![CDATA[New comment by tabacof in "Name classification with ChatGPT: How does it compare to ML language models?"]]></title><description><![CDATA[
<p>In this blog post, I explore the problem of name classification with ChatGPT and 3 ML models of increasing complexity (logistic regression, FastAI LSTM, and HuggingFace DistilBERT).<p>ChatGPT delivers the best accuracy of them all with no model training, just prompt engineering. It classifies 100k names in 18 minutes for under $5.<p>We see a lot of ChatGPT chat examples, but here I show how to use its API to solve an actual text classification problem (albeit a simple one).<p>GPT is transforming tasks that required deep machine learning knowledge into software + prompt engineering problems. As a data scientist, I’m not worried about it taking over my job, as predictive modelling is only a small aspect of what a data scientist does.</p>
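<p>To illustrate the kind of batching that keeps the cost low, here is a hedged sketch with hypothetical prompt wording and helper names; the chat-API call is left as a plain callable so nothing here depends on a specific client library:</p>

```python
# Sketch: classify many names per completion by batching them into one prompt
# and parsing one label per line back out. Batch size and labels are assumptions.

BATCH_SIZE = 50  # assumption: names per prompt


def build_prompt(names):
    listing = "\n".join(f"{i + 1}. {n}" for i, n in enumerate(names))
    return (
        "Classify each name below as PERSON or COMPANY. "
        "Answer with one label per line, in order.\n" + listing
    )


def parse_labels(completion_text, expected):
    labels = [line.strip() for line in completion_text.splitlines() if line.strip()]
    if len(labels) != expected:
        raise ValueError("model returned a different number of labels than names")
    return labels


def classify(names, complete):
    """`complete` is any callable prompt -> text (e.g. a thin chat-API wrapper)."""
    out = []
    for start in range(0, len(names), BATCH_SIZE):
        batch = names[start:start + BATCH_SIZE]
        out.extend(parse_labels(complete(build_prompt(batch)), len(batch)))
    return out
```

<p>The length check is the important part in practice: batched prompts occasionally drop or merge lines, and a hard failure is easier to retry than silently misaligned labels.</p>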
]]></description><pubDate>Mon, 27 Mar 2023 10:49:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=35325049</link><dc:creator>tabacof</dc:creator><comments>https://news.ycombinator.com/item?id=35325049</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35325049</guid></item><item><title><![CDATA[Name classification with ChatGPT: How does it compare to ML language models?]]></title><description><![CDATA[
<p>Article URL: <a href="https://tabacof.github.io/posts/name_classification/name_classification.html">https://tabacof.github.io/posts/name_classification/name_classification.html</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=35325048">https://news.ycombinator.com/item?id=35325048</a></p>
<p>Points: 1</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 27 Mar 2023 10:49:07 +0000</pubDate><link>https://tabacof.github.io/posts/name_classification/name_classification.html</link><dc:creator>tabacof</dc:creator><comments>https://news.ycombinator.com/item?id=35325048</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35325048</guid></item><item><title><![CDATA[New comment by tabacof in "Our experience with Stripe Atlas (2017)"]]></title><description><![CDATA[
<p>Hi Soneca, I am also a Brazilian starting two very modest side businesses intended to sell to international customers:<p>1) <a href="https://www.deploir.com" rel="nofollow">https://www.deploir.com</a><p>2) <a href="https://www.whoworth.com" rel="nofollow">https://www.whoworth.com</a><p>What a coincidence, huh?<p>In the beginning I also considered Atlas, but it seemed like too much trouble and expense for two businesses that may or may not be viable.<p>In the end, I decided to go with Paddle (<a href="https://paddle.com/" rel="nofollow">https://paddle.com/</a>). The integration was really easy: it is just a button that links to my subscription plans, with webhooks that alert my server when subscriptions are created and paid.<p>As Brazilians say, we should "exchange stickers".</p>
]]></description><pubDate>Tue, 10 Apr 2018 14:31:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=16801625</link><dc:creator>tabacof</dc:creator><comments>https://news.ycombinator.com/item?id=16801625</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=16801625</guid></item><item><title><![CDATA[New comment by tabacof in "Why old-school PostgreSQL is so hip again"]]></title><description><![CDATA[
<p>>Spark is a terribly inefficient solution to any known/stable data processing or analytics jobs.<p>Can you expand on that?</p>
]]></description><pubDate>Mon, 11 Dec 2017 18:10:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=15898921</link><dc:creator>tabacof</dc:creator><comments>https://news.ycombinator.com/item?id=15898921</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=15898921</guid></item><item><title><![CDATA[New comment by tabacof in "Four-year professional visas get green light in Thailand"]]></title><description><![CDATA[
<p>Latin America is very unsafe. The list of top 50 cities by murder rate is very telling: <a href="https://en.wikipedia.org/wiki/List_of_cities_by_murder_rate" rel="nofollow">https://en.wikipedia.org/wiki/List_of_cities_by_murder_rate</a><p>I have lived most of my life in Brazil, and even though I am very privileged, the following has happened to me:
1) Watch stolen by drug addict with a syringe when I was 12 years old
2) Cellphone stolen by a thug with a glass shard when I was 16
3) Also when I was 16, three thugs tried to steal my cellphone; they said they would blow my brains out if I didn't comply and faked having a gun (I didn't).
4) Two thugs invaded my frat house, held us hostage with handguns, and stole everything we had (I will never forget the cold metal of the gun touching my head while I was threatened).
5) My car was jacked right in front of my house.
6) Another cellphone theft that I'd rather not describe in detail.
7) My spare tire was stolen while I was in a bar for a couple of hours.<p>This is just what has happened to me in less than 30 years. Stories like these abound. My girlfriend also went through (4) and (7), independently of me. Pretty much everyone I know has had a cellphone stolen at some point in their lives.<p>If you want to actually experience the dread that is living in Brazil, search for "brazil" on Reddit's /r/watchpeopledie.</p>
]]></description><pubDate>Sun, 20 Aug 2017 18:51:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=15059706</link><dc:creator>tabacof</dc:creator><comments>https://news.ycombinator.com/item?id=15059706</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=15059706</guid></item><item><title><![CDATA[New comment by tabacof in "Brazilian Judge Shuts Down WhatsApp for 48 Hours"]]></title><description><![CDATA[
<p>This is insane; this judge must be on a power trip. WhatsApp is now part of Brazilian social life and the economy. Everyone here belongs to many groups of friends, family, or workmates; that is where most instant communication happens.<p>In my company, our deployment engineers, who are usually in very remote places with bad and unreliable internet, rely on WhatsApp. I'm not saying this is best practice, but it is simply the way Brazil works right now. Even the mobile phone companies offer plans with free WhatsApp data, because that is what most people here care about. Another example: in Brazil, 9 in 10 doctors use WhatsApp to talk to patients (<a href="http://www.cityam.com/230372/digital-health-wearables-and-apps-9-in-10-brazilian-doctors-use-whatsapp-to-talk-to-patients" rel="nofollow">http://www.cityam.com/230372/digital-health-wearables-and-ap...</a>).<p>To disregard all the people and businesses that rely on WhatsApp, for whatever reason, is unbelievable. But this is not without precedent: another Brazilian judge once blocked YouTube for a whole day because it refused to take down a celebrity video.<p>This says a lot about the oversized, inefficient, and stupid state we have, always meddling and intervening.</p>
]]></description><pubDate>Thu, 17 Dec 2015 03:50:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=10749391</link><dc:creator>tabacof</dc:creator><comments>https://news.ycombinator.com/item?id=10749391</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=10749391</guid></item><item><title><![CDATA[New comment by tabacof in "A Brief Introduction to Graphical Models and Bayesian Networks (1998)"]]></title><description><![CDATA[
<p>Causality is a bit harder to integrate with current machine learning models, as it is hard even with standard probabilistic graphical models. On the other hand, there has been a lot of work integrating deep neural networks with probabilistic models.<p>For example, variational auto-encoders are a graphical model with Gaussian latent variables whose mean and variance are determined by (deep) neural networks [1]. There has also been work treating the neural network weights themselves as latent variables [2]. Finally, some newer developments such as dropout can be interpreted as a form of deep Gaussian process [3].<p>I believe there will be many further developments in this area in the near future.<p>[1] <a href="http://arxiv.org/abs/1312.6114" rel="nofollow">http://arxiv.org/abs/1312.6114</a><p>[2] <a href="http://arxiv.org/abs/1505.05424" rel="nofollow">http://arxiv.org/abs/1505.05424</a><p>[3] <a href="http://arxiv.org/abs/1506.02142" rel="nofollow">http://arxiv.org/abs/1506.02142</a></p>
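<p>For reference, the objective behind [1] is the evidence lower bound, where the encoder's Gaussian mean and variance come from neural networks:</p>

```latex
\log p_\theta(x) \;\ge\;
\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
\;-\; \mathrm{KL}\!\left(q_\phi(z \mid x) \,\middle\|\, p(z)\right)
```

<p>Here q_phi(z|x) = N(mu_phi(x), sigma_phi^2(x)) with both functions given by the encoder network, and p(z) is a standard normal prior.</p>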
]]></description><pubDate>Wed, 02 Sep 2015 20:15:07 +0000</pubDate><link>https://news.ycombinator.com/item?id=10161647</link><dc:creator>tabacof</dc:creator><comments>https://news.ycombinator.com/item?id=10161647</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=10161647</guid></item><item><title><![CDATA[New comment by tabacof in "Compiling Julia for NVIDIA GPUs"]]></title><description><![CDATA[
<p>Torch7 has amazing GPU support. If you want to send a tensor (array) to the GPU, you just type array:cuda(). All subsequent operations will run on the GPU.<p>More involved example using a neural network: <a href="http://code.cogbits.com/wiki/doku.php?id=tutorial_cuda" rel="nofollow">http://code.cogbits.com/wiki/doku.php?id=tutorial_cuda</a></p>
]]></description><pubDate>Tue, 03 Feb 2015 23:17:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=8994223</link><dc:creator>tabacof</dc:creator><comments>https://news.ycombinator.com/item?id=8994223</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=8994223</guid></item></channel></rss>