<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: JedMartin</title><link>https://news.ycombinator.com/user?id=JedMartin</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Wed, 29 Apr 2026 21:41:08 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=JedMartin" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by JedMartin in "I almost got hacked by a 'job interview'"]]></title><description><![CDATA[
<p>.</p>
]]></description><pubDate>Thu, 16 Oct 2025 00:47:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=45600233</link><dc:creator>JedMartin</dc:creator><comments>https://news.ycombinator.com/item?id=45600233</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45600233</guid></item><item><title><![CDATA[New comment by JedMartin in "Wall Street’s ‘Private Rooms’"]]></title><description><![CDATA[
<p>You would change the rules, but I think the result would largely remain the same. As a market participant with the fastest access to data from other markets, news, and similar sources, as well as low order entry latency, you would still be able to profit from information asymmetry.<p>Imagine that a company announces the approval of its new vaccine a few milliseconds before the periodic trade occurs. As an HFT firm, you have the technology to enter, cancel, or modify your orders before the periodic auction takes place, while less sophisticated players remain oblivious to what just happened. The same applies to price movements on venues trading the same instrument, its derivatives, or even correlated assets in different parts of the world.<p>On the other hand, you risk increasing price volatility (especially in cases where there is an imbalance between buyers and sellers during the periodic auction) and making markets less liquid.</p>
]]></description><pubDate>Tue, 18 Mar 2025 13:44:33 +0000</pubDate><link>https://news.ycombinator.com/item?id=43399346</link><dc:creator>JedMartin</dc:creator><comments>https://news.ycombinator.com/item?id=43399346</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43399346</guid></item><item><title><![CDATA[New comment by JedMartin in "Wall Street’s ‘Private Rooms’"]]></title><description><![CDATA[
<p>IntelligentCross Midpoint (a dark pool) is a better example, since it actually runs periodic matching every couple of milliseconds [1]. IEX just introduces additional latency for everyone.<p>[1] <a href="https://www.imperativex.com/products" rel="nofollow">https://www.imperativex.com/products</a></p>
]]></description><pubDate>Tue, 18 Mar 2025 13:15:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=43399067</link><dc:creator>JedMartin</dc:creator><comments>https://news.ycombinator.com/item?id=43399067</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43399067</guid></item><item><title><![CDATA[New comment by JedMartin in "Wall Street’s ‘Private Rooms’"]]></title><description><![CDATA[
<p>It's actually the other way around. As a big fund looking to trade a large number of shares in the public market, you'll quickly realize that the market tends to move away from you, and statistically, you're more likely to get a bad deal than a good one. Even if you try to be smart about execution by splitting your orders into chunks, randomizing order sizes, and similar tactics, there is still a huge information asymmetry between you and more sophisticated players. In many cases, they can classify your orders based on characteristics of your order flow (such as its latency profile) and distinguish them from the so-called toxic flow coming from other HFT firms.<p>The purpose of these private rooms is to separate your orders from those players so that you trade against other uninformed parties, making your chances of getting a good or bad deal closer to 50/50.</p>
]]></description><pubDate>Tue, 18 Mar 2025 00:10:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=43394234</link><dc:creator>JedMartin</dc:creator><comments>https://news.ycombinator.com/item?id=43394234</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43394234</guid></item><item><title><![CDATA[New comment by JedMartin in "C++ patterns for low-latency applications including high-frequency trading"]]></title><description><![CDATA[
<p>I have absolutely no idea how this works in Java, but in C++, there are a few reasons you need std::atomic here:<p>1. You need to make sure that modifying the producer/consumer position is actually atomic. This may end up being the same instruction that the compiler would use for modifying a non-atomic variable, but that will depend on your target architecture and the size of the data type. Without std::atomic, the compiler may also generate multiple instructions to implement that load/store, or use an instruction that is non-atomic at the CPU level. See [1] for more information.<p>2. You're using the positions for synchronization between the producer and consumer. When incrementing the reader position, you're basically freeing a slot for the producer, which means you need to make sure all reads from that slot happen before you do it. When incrementing the producer position, you're indicating that the slot is ready to be consumed, so you need to make sure that all the stores to that slot happen before that. Things may go wrong here due to reordering by the compiler or by the CPU [2], so you need to instruct both that a certain memory ordering is required. Reordering by the compiler can be prevented with a compiler-level memory barrier: asm volatile("" ::: "memory"). Depending on your CPU architecture, you may or may not need a memory barrier instruction as well to prevent reordering by the CPU at runtime. The good news is that std::atomic does all of that for you if you pick the right memory ordering, and by default it uses the strongest one (sequentially-consistent ordering). I think in this particular case you could relax the constraints a bit and use memory_order_acquire on the consumer side and memory_order_release on the producer side [3].<p>[1] <a href="https://preshing.com/20130618/atomic-vs-non-atomic-operations/" rel="nofollow">https://preshing.com/20130618/atomic-vs-non-atomic-operation...</a><p>[2] <a href="https://en.wikipedia.org/wiki/Memory_ordering" rel="nofollow">https://en.wikipedia.org/wiki/Memory_ordering</a><p>[3] <a href="https://en.cppreference.com/w/cpp/atomic/memory_order" rel="nofollow">https://en.cppreference.com/w/cpp/atomic/memory_order</a></p>
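<p>To make point 2 concrete, here is a minimal sketch of that publish/consume handshake (the names and the value 42 are illustrative assumptions, not code from the article): the producer fills the slot and then publishes it with a release store, and the consumer waits with an acquire load before touching the slot, so neither the compiler nor the CPU may move the payload accesses across the flag operations.<pre><code>
#include <atomic>
#include <cstdio>
#include <thread>

int payload = 0;                    // plain data, written before publishing
std::atomic<bool> ready{false};     // the "slot is ready" flag

void producer() {
    payload = 42;                                       // 1. fill the slot
    ready.store(true, std::memory_order_release);       // 2. publish: stores above cannot sink below this
}

void consumer() {
    while (!ready.load(std::memory_order_acquire)) {}   // 3. wait: loads below cannot hoist above this
    std::printf("%d\n", payload);                       // 4. guaranteed to print 42
}

int main() {
    std::thread c(consumer), p(producer);
    p.join();
    c.join();
    return 0;
}
</code></pre><p>On x86, both the release store and the acquire load compile down to plain mov instructions, since the hardware already provides that ordering; std::atomic still matters there to keep the compiler from reordering or splitting the accesses.</p>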
]]></description><pubDate>Tue, 09 Jul 2024 14:46:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=40916762</link><dc:creator>JedMartin</dc:creator><comments>https://news.ycombinator.com/item?id=40916762</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40916762</guid></item><item><title><![CDATA[New comment by JedMartin in "C++ patterns for low-latency applications including high-frequency trading"]]></title><description><![CDATA[
<p>It's not easy to get data structures like this right in C++. There are a couple of problems with your implementation of the queue.<p>Memory accesses can be reordered by both the compiler and the CPU, so you should use std::atomic for your producer and consumer positions to get the barriers described in the original LMAX Disruptor paper.<p>In the get method, you're returning a pointer to an element within the queue after bumping the consumer position (which frees the slot for the producer), so it can get overwritten while the user is still accessing it.<p>Finally, your producer and consumer positions will most likely end up in the same cache line, leading to false sharing.</p>
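<p>For what it's worth, a rough sketch of how all three points could be addressed (the class name, the power-of-two capacity, and the 64-byte cache line are my assumptions, not code from the article): atomic positions with acquire/release, copying the element out before bumping the consumer position, and padding the two positions onto separate cache lines.<pre><code>
#include <array>
#include <atomic>
#include <cstddef>
#include <optional>

template <typename T, std::size_t Capacity>   // Capacity assumed to be a power of two
class SpscQueue {
public:
    bool push(T item) {
        const std::size_t head = head_.load(std::memory_order_relaxed);
        if (head - tail_.load(std::memory_order_acquire) == Capacity) return false;  // full
        buffer_[head % Capacity] = item;                     // write the slot first...
        head_.store(head + 1, std::memory_order_release);    // ...then make it visible to the consumer
        return true;
    }

    std::optional<T> pop() {
        const std::size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire)) return std::nullopt;      // empty
        T item = buffer_[tail % Capacity];                    // copy out before freeing the slot...
        tail_.store(tail + 1, std::memory_order_release);     // ...so the producer can't overwrite it under us
        return item;
    }

private:
    std::array<T, Capacity> buffer_{};
    // 64 is a typical cache line size; std::hardware_destructive_interference_size is the portable spelling.
    alignas(64) std::atomic<std::size_t> head_{0};  // written only by the producer
    alignas(64) std::atomic<std::size_t> tail_{0};  // written only by the consumer
};
</code></pre><p>One producer thread calls push and one consumer thread calls pop; anything beyond that single-producer/single-consumer pairing needs a different design.</p>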
]]></description><pubDate>Mon, 08 Jul 2024 22:27:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=40910420</link><dc:creator>JedMartin</dc:creator><comments>https://news.ycombinator.com/item?id=40910420</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40910420</guid></item><item><title><![CDATA[New comment by JedMartin in "Low Latency Optimization: Understanding Pages (Part 1)"]]></title><description><![CDATA[
<p>Most of the benefit comes from the fact that you end up with far fewer TLB misses, since a single mapping covers a large chunk of memory. A predictable memory access pattern helps with cache misses thanks to hardware prefetching, but as far as I know, on most CPUs the hardware prefetcher won't prefetch across a page boundary where it would cause a TLB miss.</p>
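<p>For illustration, this is roughly how a buffer can be backed by huge pages on Linux (a sketch; the 16 MiB size is arbitrary, MAP_HUGETLB needs pages reserved via /proc/sys/vm/nr_hugepages, and the madvise fallback only asks the kernel for transparent huge pages, it doesn't guarantee them):<pre><code>
#include <sys/mman.h>
#include <cstddef>
#include <cstdio>

int main() {
    const std::size_t size = 16UL * 1024 * 1024;  // multiple of the 2 MiB huge page size

    // Explicit huge pages: requires hugepages reserved in /proc/sys/vm/nr_hugepages.
    void* p = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        // Fallback: a normal mapping, asking for transparent huge pages.
        p = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { std::perror("mmap"); return 1; }
        madvise(p, size, MADV_HUGEPAGE);
    }

    // A single 2 MiB page covers as much memory as 512 ordinary 4 KiB pages,
    // so walking this region needs far fewer TLB entries.
    static_cast<char*>(p)[0] = 1;
    munmap(p, size);
    return 0;
}
</code></pre><p>Getting the executable's own code, data, and stack into huge pages takes more work (remapping at startup or an LD_PRELOAD shim like the one mentioned downthread).</p>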
]]></description><pubDate>Tue, 29 Nov 2022 14:28:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=33787981</link><dc:creator>JedMartin</dc:creator><comments>https://news.ycombinator.com/item?id=33787981</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33787981</guid></item><item><title><![CDATA[New comment by JedMartin in "Low Latency Optimization: Understanding Pages (Part 1)"]]></title><description><![CDATA[
<p>Thanks, cool stuff. Especially liblppreload.so described in [2] and [3]. I'll give it a try. Do you have any tips on how to achieve the same for the stack?</p>
]]></description><pubDate>Tue, 29 Nov 2022 13:36:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=33787366</link><dc:creator>JedMartin</dc:creator><comments>https://news.ycombinator.com/item?id=33787366</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33787366</guid></item><item><title><![CDATA[New comment by JedMartin in "Low Latency Optimization: Understanding Pages (Part 1)"]]></title><description><![CDATA[
<p>The part about getting everything into hugepages sounds interesting. Any idea where I can find some resources on that? Most of what I was able to find only tells you how to do that for heap allocations.</p>
]]></description><pubDate>Tue, 29 Nov 2022 11:38:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=33786294</link><dc:creator>JedMartin</dc:creator><comments>https://news.ycombinator.com/item?id=33786294</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=33786294</guid></item></channel></rss>