<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: ozgune</title><link>https://news.ycombinator.com/user?id=ozgune</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 08:42:40 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=ozgune" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[Snowflake AI Escapes Sandbox and Executes Malware]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.promptarmor.com/resources/snowflake-ai-escapes-sandbox-and-executes-malware">https://www.promptarmor.com/resources/snowflake-ai-escapes-sandbox-and-executes-malware</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47427017">https://news.ycombinator.com/item?id=47427017</a></p>
<p>Points: 269</p>
<p># Comments: 83</p>
]]></description><pubDate>Wed, 18 Mar 2026 15:30:07 +0000</pubDate><link>https://www.promptarmor.com/resources/snowflake-ai-escapes-sandbox-and-executes-malware</link><dc:creator>ozgune</dc:creator><comments>https://news.ycombinator.com/item?id=47427017</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47427017</guid></item><item><title><![CDATA[New comment by ozgune in "How to build great products (2013)"]]></title><description><![CDATA[
<p>This has been one of my favorite blog posts on building products at startups. It came up again in a discussion today, so I wanted to resubmit it.<p>I also found the HN discussion at the time informative: <a href="https://news.ycombinator.com/item?id=6457801">https://news.ycombinator.com/item?id=6457801</a></p>
]]></description><pubDate>Thu, 05 Mar 2026 15:20:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=47262590</link><dc:creator>ozgune</dc:creator><comments>https://news.ycombinator.com/item?id=47262590</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47262590</guid></item><item><title><![CDATA[How to build great products (2013)]]></title><description><![CDATA[
<p>Article URL: <a href="https://defmacro.org/2013/09/26/products.html">https://defmacro.org/2013/09/26/products.html</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47262554">https://news.ycombinator.com/item?id=47262554</a></p>
<p>Points: 1</p>
<p># Comments: 1</p>
]]></description><pubDate>Thu, 05 Mar 2026 15:18:11 +0000</pubDate><link>https://defmacro.org/2013/09/26/products.html</link><dc:creator>ozgune</dc:creator><comments>https://news.ycombinator.com/item?id=47262554</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47262554</guid></item><item><title><![CDATA[New comment by ozgune in "Bet on German Train Delays"]]></title><description><![CDATA[
<p>I'll save everyone a web search. This is satire, and there isn't any such German federal court ruling.<p>It also says something about the world we live in these days: I'm having a hard time separating satire from reality.</p>
]]></description><pubDate>Wed, 04 Mar 2026 13:21:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=47247040</link><dc:creator>ozgune</dc:creator><comments>https://news.ycombinator.com/item?id=47247040</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47247040</guid></item><item><title><![CDATA[New comment by ozgune in "Hetzner Prices increase 30-40%"]]></title><description><![CDATA[
<p>I used Hetzner's pricing calculator.<p><a href="https://www.hetzner.com/dedicated-rootserver/ax162-r/configurator/#/" rel="nofollow">https://www.hetzner.com/dedicated-rootserver/ax162-r/configu...</a><p>Before today, we used to be able to order an AX162-R for €207 and add 128 GB of RAM for €46. Starting today, the same calculator provides €207 for an AX162-R (*) and €264 for the 128 GB RAM add-on. Sadly, HN doesn't let me upload screenshots.<p>(*) The price change for AX162-R machines is effective starting April 1st.</p>
]]></description><pubDate>Mon, 23 Feb 2026 13:13:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=47121897</link><dc:creator>ozgune</dc:creator><comments>https://news.ycombinator.com/item?id=47121897</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47121897</guid></item><item><title><![CDATA[New comment by ozgune in "Hetzner Prices increase 30-40%"]]></title><description><![CDATA[
<p>These changes are effective April 1st for existing and new customers. The price increase ratios are also different across product lines.<p>* Cloud (VMs): 38%<p>* Bare metal: 15%<p>* Memory add-on for bare metal: 575% (effective immediately)<p>It feels like the memory add-on price is intentionally set high to discourage customers from adding more memory.<p>AX102 (128 GB RAM) costs €124, AX162 (256 GB RAM) costs €244, but the 128 GB memory add-on alone costs €264. If we ignore the setup fee, it’s more cost-effective to provision additional servers instead of adding RAM to bare metal instances.<p>Here's the link to cloud and bare metal pricing changes: <a href="https://docs.hetzner.com/general/infrastructure-and-availability/price-adjustment/" rel="nofollow">https://docs.hetzner.com/general/infrastructure-and-availabi...</a></p>
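<p>The price comparison above can be sanity-checked with a short sketch. The figures are the ones quoted in the comment (not official Hetzner numbers), comparing the marginal cost per GB of stepping up from an AX102 to an AX162 against buying the standalone 128 GB add-on:</p>

```python
# Prices quoted in the comment above (EUR/month); assumed, not official figures.
ax102_price, ax102_ram = 124, 128   # AX102: 128 GB RAM
ax162_price, ax162_ram = 244, 256   # AX162: 256 GB RAM
addon_price, addon_ram = 264, 128   # standalone 128 GB memory add-on

# Marginal cost per extra GB: bigger server vs. memory add-on
upgrade_per_gb = (ax162_price - ax102_price) / (ax162_ram - ax102_ram)  # ~0.94 EUR/GB
addon_per_gb = addon_price / addon_ram                                   # ~2.06 EUR/GB

# The add-on costs more than twice as much per GB as simply
# provisioning the larger server, ignoring setup fees.
print(f"upgrade: {upgrade_per_gb:.2f} EUR/GB, add-on: {addon_per_gb:.2f} EUR/GB")
```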
]]></description><pubDate>Mon, 23 Feb 2026 12:27:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47121429</link><dc:creator>ozgune</dc:creator><comments>https://news.ycombinator.com/item?id=47121429</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47121429</guid></item><item><title><![CDATA[New comment by ozgune in "AliSQL: Alibaba's open-source MySQL with vector and DuckDB engines"]]></title><description><![CDATA[
<p>I feel this analysis is unfair to PostgreSQL. PG is highly extensible, allowing you to extend write-ahead logs, the transaction subsystem, foreign data wrappers (FDW), indexes, types, replication, and more.<p>I understand that MySQL follows a specific pluggable storage architecture. I also understand that the direct equivalent in PG appears to be table access methods (TAM). However, you don't need to use TAM to build this - I'd argue FDWs are much more suitable.<p>Also, I think this design assumes that you'd swap PG's storage engine <i>and</i> replicate data to DuckDB through logical replication. The explanation then notes deficiencies in PG's logical replication.<p>I don't think this is the only possible design. pg_lake provides a solid open source implementation of how else you could build this solution, if you're familiar with PG: <a href="https://github.com/Snowflake-Labs/pg_lake" rel="nofollow">https://github.com/Snowflake-Labs/pg_lake</a><p>All in all, I feel this explanation is written from a MySQL-first perspective. "We built this valuable solution for MySQL. We're very familiar with MySQL's internals and we don't think those internals hold for PostgreSQL."<p>I agree with the solution's value and how it integrates with MySQL. I just think someone knowledgeable about PostgreSQL would have built things in a different way.</p>
]]></description><pubDate>Wed, 04 Feb 2026 11:29:59 +0000</pubDate><link>https://news.ycombinator.com/item?id=46884543</link><dc:creator>ozgune</dc:creator><comments>https://news.ycombinator.com/item?id=46884543</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46884543</guid></item><item><title><![CDATA[New comment by ozgune in "Mistral OCR 3"]]></title><description><![CDATA[
<p>Also, do you know if their benchmarks are available?<p>On their website, the benchmarks list “Multilingual (Chinese), Multilingual (East-asian), Multilingual (Eastern europe), Multilingual (English), Multilingual (Western europe), Forms, Handwritten, etc.” However, there’s no reference to the underlying benchmark data.</p>
]]></description><pubDate>Sat, 20 Dec 2025 04:36:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=46333653</link><dc:creator>ozgune</dc:creator><comments>https://news.ycombinator.com/item?id=46333653</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46333653</guid></item><item><title><![CDATA[GitHub walks back plan to charge for self-hosted runners]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.theregister.com/2025/12/17/github_charge_dev_own_hardware/">https://www.theregister.com/2025/12/17/github_charge_dev_own_hardware/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46309821">https://news.ycombinator.com/item?id=46309821</a></p>
<p>Points: 9</p>
<p># Comments: 3</p>
]]></description><pubDate>Thu, 18 Dec 2025 07:22:33 +0000</pubDate><link>https://www.theregister.com/2025/12/17/github_charge_dev_own_hardware/</link><dc:creator>ozgune</dc:creator><comments>https://news.ycombinator.com/item?id=46309821</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46309821</guid></item><item><title><![CDATA[GLM-4.6V: Open-Source Multimodal Models with Native Tool Use]]></title><description><![CDATA[
<p>Article URL: <a href="https://z.ai/blog/glm-4.6v">https://z.ai/blog/glm-4.6v</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46204949">https://news.ycombinator.com/item?id=46204949</a></p>
<p>Points: 4</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 09 Dec 2025 13:54:45 +0000</pubDate><link>https://z.ai/blog/glm-4.6v</link><dc:creator>ozgune</dc:creator><comments>https://news.ycombinator.com/item?id=46204949</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46204949</guid></item><item><title><![CDATA[New comment by ozgune in "Pg_lake: Postgres with Iceberg and data lake access"]]></title><description><![CDATA[
<p>This is huge!<p>When people ask me what’s missing in the Postgres market, I used to tell them “open source Snowflake.”<p>Crunchy’s Postgres extension is by far the most advanced solution in the market.<p>Huge congrats to Snowflake and the Crunchy team on open sourcing this.</p>
]]></description><pubDate>Tue, 04 Nov 2025 16:18:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=45812670</link><dc:creator>ozgune</dc:creator><comments>https://news.ycombinator.com/item?id=45812670</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45812670</guid></item><item><title><![CDATA[New comment by ozgune in "Benchmarking Postgres 17 vs. 18"]]></title><description><![CDATA[
<p>If the benchmark doesn’t use AIO, why is there a performance difference between PG 17 and 18 in the blog post (sync, worker, and io_uring)?<p>Is it because remote storage in the cloud always introduces some variance and the benchmark just picks that up?<p>For reference, anarazel gave a presentation about AIO at pgconf.eu yesterday. He mentioned that remote cloud storage always introduces variance, making benchmark results hard to interpret. His solution was to introduce synthetic latency on local NVMes for benchmarks.</p>
]]></description><pubDate>Fri, 24 Oct 2025 13:22:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=45694354</link><dc:creator>ozgune</dc:creator><comments>https://news.ycombinator.com/item?id=45694354</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45694354</guid></item><item><title><![CDATA[New comment by ozgune in "DeepSeek OCR"]]></title><description><![CDATA[
<p>OmniAI has a benchmark that compares LLMs to cloud OCR services.<p><a href="https://getomni.ai/blog/ocr-benchmark">https://getomni.ai/blog/ocr-benchmark</a> (Feb 2025)<p>Please note that LLMs have progressed rapidly since February. We see much better results with the Qwen3-VL family, particularly Qwen3-VL-235B-A22B-Instruct for our use case.</p>
]]></description><pubDate>Mon, 20 Oct 2025 07:52:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=45640992</link><dc:creator>ozgune</dc:creator><comments>https://news.ycombinator.com/item?id=45640992</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45640992</guid></item><item><title><![CDATA[Bride surprises new husband with an RTX 5090 on wedding day]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.tomshardware.com/pc-components/gpus/bride-surprises-new-husband-with-an-rtx-5090-on-wedding-day-chinese-number-slang-reveals-surprise-gift">https://www.tomshardware.com/pc-components/gpus/bride-surprises-new-husband-with-an-rtx-5090-on-wedding-day-chinese-number-slang-reveals-surprise-gift</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45562830">https://news.ycombinator.com/item?id=45562830</a></p>
<p>Points: 5</p>
<p># Comments: 1</p>
]]></description><pubDate>Sun, 12 Oct 2025 22:58:45 +0000</pubDate><link>https://www.tomshardware.com/pc-components/gpus/bride-surprises-new-husband-with-an-rtx-5090-on-wedding-day-chinese-number-slang-reveals-surprise-gift</link><dc:creator>ozgune</dc:creator><comments>https://news.ycombinator.com/item?id=45562830</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45562830</guid></item><item><title><![CDATA[RubyDramas – Your guide to the last Ruby drama]]></title><description><![CDATA[
<p>Article URL: <a href="https://rubydramas.com">https://rubydramas.com</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45359585">https://news.ycombinator.com/item?id=45359585</a></p>
<p>Points: 6</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 24 Sep 2025 12:51:07 +0000</pubDate><link>https://rubydramas.com</link><dc:creator>ozgune</dc:creator><comments>https://news.ycombinator.com/item?id=45359585</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45359585</guid></item><item><title><![CDATA[New comment by ozgune in "AI Coding: A Sober Review"]]></title><description><![CDATA[
<p>(Disclaimer: Ozgun from Ubicloud)<p>I agree with you. I feel the challenge is that using AI coding tools is still an art, not a science. That's why we see many qualitative studies that sometimes conflict with each other.<p>In this case, we found the following interesting. That's why we nudged Shikhar to blog about his experience and put a disclaimer at the top.<p>* Our codebase is in Ruby and follows a design pattern uncommon in the industry
* We don't have a horse in this game
* I haven't seen an evaluation that assesses coding tools along the (a) coding, (b) testing, and (c) debugging dimensions</p>
]]></description><pubDate>Wed, 17 Sep 2025 14:55:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=45276605</link><dc:creator>ozgune</dc:creator><comments>https://news.ycombinator.com/item?id=45276605</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45276605</guid></item><item><title><![CDATA[New comment by ozgune in "Deploying DeepSeek on 96 H100 GPUs"]]></title><description><![CDATA[
<p>The SGLang Team has a follow-up blog post that talks about DeepSeek inference performance on GB200 NVL72: <a href="https://lmsys.org/blog/2025-06-16-gb200-part-1/" rel="nofollow">https://lmsys.org/blog/2025-06-16-gb200-part-1/</a><p>Just in case you have $3-4M lying around somewhere for some high-quality inference. :)<p>SGLang quotes a 2.5-3.4x speedup compared to the H100s. They also note that more optimizations are coming, but they haven't yet published a part 2 of the blog post.</p>
]]></description><pubDate>Fri, 29 Aug 2025 16:18:02 +0000</pubDate><link>https://news.ycombinator.com/item?id=45066036</link><dc:creator>ozgune</dc:creator><comments>https://news.ycombinator.com/item?id=45066036</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45066036</guid></item><item><title><![CDATA[New comment by ozgune in "Are OpenAI and Anthropic losing money on inference?"]]></title><description><![CDATA[
<p>I agree that you could get to high margins, but I think the modeling holds only if you're an AI lab operating at scale with a setup tuned for your model(s). I think the most open study on this one is from the DeepSeek team: <a href="https://github.com/deepseek-ai/open-infra-index/blob/main/202502OpenSourceWeek/day_6_one_more_thing_deepseekV3R1_inference_system_overview.md" rel="nofollow">https://github.com/deepseek-ai/open-infra-index/blob/main/20...</a><p>For others, I think the picture is different. When we ran benchmarks on DeepSeek-R1 on 8x H200 SXM using vLLM, we got up to 12K total tok/s (concurrency 200, input:output ratio of 6:1). If you're spiking up to 100-200K tok/s, you need a lot of GPUs to cover the peak. Then, the GPUs sit idle most of the time.<p>I'll read the blog post in more detail, but I don't think the following assumptions hold outside of AI labs.<p>* 100% utilization (no spikes, balanced usage between day/night or weekdays)
* Input processing is free (~$0.001 per million tokens)
* DeepSeek fits into H100 cards in a way that the network isn't the bottleneck</p>
]]></description><pubDate>Thu, 28 Aug 2025 19:09:48 +0000</pubDate><link>https://news.ycombinator.com/item?id=45055835</link><dc:creator>ozgune</dc:creator><comments>https://news.ycombinator.com/item?id=45055835</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45055835</guid></item><item><title><![CDATA[New comment by ozgune in "95% of Companies See 'Zero Return' on $30B Generative AI Spend"]]></title><description><![CDATA[
<p>Previously discussed here: <a href="https://news.ycombinator.com/item?id=44941118">https://news.ycombinator.com/item?id=44941118</a><p>It's also disappointing that MIT requires you to fill out a form (and wait) for access to the report. I read four separate stories based on the report, and they all provide a different perspective.<p>Here's the original PDF before MIT started gating it: <a href="https://web.archive.org/web/20250818145714/https://nanda.media.mit.edu/ai_report_2025.pdf" rel="nofollow">https://web.archive.org/web/20250818145714/https://nanda.med...</a></p>
]]></description><pubDate>Thu, 21 Aug 2025 17:06:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=44975271</link><dc:creator>ozgune</dc:creator><comments>https://news.ycombinator.com/item?id=44975271</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44975271</guid></item><item><title><![CDATA[New comment by ozgune in "Kimi K2 is a state-of-the-art mixture-of-experts (MoE) language model"]]></title><description><![CDATA[
<p>This is a very impressive general-purpose LLM (GPT-4o, DeepSeek-V3 family). It’s also open source.<p>I think it hasn’t received much attention because the frontier has shifted to reasoning and multi-modal AI models. In accuracy benchmarks, all the top models are reasoning ones:<p><a href="https://artificialanalysis.ai/" rel="nofollow">https://artificialanalysis.ai/</a><p>If someone took Kimi K2 and trained a reasoning model with it, I’d be curious how that model performs.</p>
]]></description><pubDate>Sat, 12 Jul 2025 18:26:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=44543960</link><dc:creator>ozgune</dc:creator><comments>https://news.ycombinator.com/item?id=44543960</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44543960</guid></item></channel></rss>