<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: drv</title><link>https://news.ycombinator.com/user?id=drv</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 30 Apr 2026 20:19:04 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=drv" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by drv in "Writing an NVMe Driver in Rust [pdf]"]]></title><description><![CDATA[
<p>An I/O scheduler was probably a bad example, since you might not need or want one for fast NVMe devices anyway, but yes, they help ensure that limited resources (storage device bandwidth or IOPS) are shared fairly between multiple users/processes, and they can also reorder requests to improve batching (this matters more on spinning disks with seek latency, where delaying a little to sort requests can save more time on seeks than it costs in delay and CPU overhead).<p>The more general point is that if you need any of the many features of a general-purpose OS kernel, a full userspace driver may not be a very good fit, since you will end up reinventing a lot of wheels. Cases where it could be a good fit are things like database backends or dedicated block storage appliances: situations where the OS would just get in the way and where it's viable to dedicate a whole storage device (or several) and a whole CPU (or several) to one task.</p>
]]></description><pubDate>Wed, 29 May 2024 07:42:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=40509485</link><dc:creator>drv</dc:creator><comments>https://news.ycombinator.com/item?id=40509485</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40509485</guid></item><item><title><![CDATA[New comment by drv in "Writing an NVMe Driver in Rust [pdf]"]]></title><description><![CDATA[
<p>It's cool to see more systems code written in Rust! I also previously worked on SPDK, so it was neat to see it being chosen as a point of comparison.<p>However, I was waiting for the touted memory safety to be mentioned beyond the introduction, but it never really came up again. I was hoping for the paper to make a stronger argument for memory-safe languages like Rust, something like "our driver did not have bugs X, Y, and Z, which were found in other drivers, because the compiler caught them".<p>Additionally, in a userspace device driver that is given control of a piece of hardware that can do DMA, like an NVMe controller, the most critical memory safety feature is an IOMMU, which the driver covered by the paper does not enable; no amount of memory safety in the driver code itself matters when the hardware can be programmed to read or write anywhere in the physical address space, including memory belonging to other processes or even the kernel, from totally "safe" (in Rust semantics) code.<p>While the driver from the paper may certainly have a "simplified API and less code", I don't expect much of this to be related to the implementation language; it's comparing a clean-sheet minimal design to a project that has been around for a while and has had additional features incrementally added to it over time, making the older codebase inevitably larger and more complex. This doesn't seem like a particularly surprising result or an endorsement of a particular language, though it perhaps does indicate that it would be useful to start from scratch now and again just to see what the minimum viable system can look like. I certainly would have liked to rewrite it in Rust, but that wasn't really feasible. :)<p>In any case, it's great to see proof that a Rust driver can have comparable performance to one written in C, since it will hopefully encourage new code to be written in a nicer language than C. 
I definitely don't miss having to deal with manual memory management and chasing down use-after-frees now that I write Rust instead of C.<p>(As a side note, I'd encourage anyone thinking of using a userspace storage driver on Linux to check out io_uring first before going all in; if io_uring had existed before SPDK, I don't know that SPDK would have been written, given that io_uring gets you most of the way there performance-wise and integrates nicely with the rest of the kernel. A userspace driver has its uses, but I would consider it to be a last resort after exhausting all other options, since you have to reinvent all of the other functionality normally provided by the kernel like I/O scheduling, filesystems, encryption, etc., not just the NVMe driver itself. That is, assuming the io_uring security issues get resolved over time, and I expect they will.)</p>
]]></description><pubDate>Wed, 29 May 2024 04:10:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=40508367</link><dc:creator>drv</dc:creator><comments>https://news.ycombinator.com/item?id=40508367</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40508367</guid></item><item><title><![CDATA[New comment by drv in "An adventure in trying to optimize math.Atan2 with Go assembly"]]></title><description><![CDATA[
<p>The assembly version is using the packed version of the FMA instruction (that's what the "P" in the mnemonic stands for), but as far as I can tell, it's only using one of the packed values, whereas the instruction can calculate two (AVX) or four (AVX2) FMA operations at once.  It might be possible to get some speedup by rearranging the calculation so it can use the full width of the vector registers - at first glance, at least the two sides of the division should be possible to calculate in parallel with half as many FMA instructions.</p>
]]></description><pubDate>Tue, 29 Aug 2017 18:20:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=15126432</link><dc:creator>drv</dc:creator><comments>https://news.ycombinator.com/item?id=15126432</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=15126432</guid></item><item><title><![CDATA[New comment by drv in "How many floating-point numbers are in the interval [0,1]?"]]></title><description><![CDATA[
<p>nextAfter is probably also including the denormals (an additional 2^23 values near 0).</p>
]]></description><pubDate>Wed, 01 Mar 2017 22:54:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=13769399</link><dc:creator>drv</dc:creator><comments>https://news.ycombinator.com/item?id=13769399</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=13769399</guid></item><item><title><![CDATA[New comment by drv in "Yes, You Have Been Writing SPSC Queues Wrong"]]></title><description><![CDATA[
<p>One possible reason is that storing a bool separately from the index makes it difficult to update the producer or consumer index atomically.  With the implementations that store the full producer and consumer states as a single word each, only single-word atomic operations are necessary to build a lock-free ring.  Storing the bool as the high bit of the index would also suffice.</p>
]]></description><pubDate>Thu, 15 Dec 2016 20:14:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=13187874</link><dc:creator>drv</dc:creator><comments>https://news.ycombinator.com/item?id=13187874</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=13187874</guid></item><item><title><![CDATA[New comment by drv in "Demystifying the i-Device NVMe NAND"]]></title><description><![CDATA[
<p>It's presumably not standard Host Memory Buffer, since the spec says "The controller shall function properly without host memory resources."</p>
]]></description><pubDate>Thu, 17 Nov 2016 20:07:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=12980968</link><dc:creator>drv</dc:creator><comments>https://news.ycombinator.com/item?id=12980968</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=12980968</guid></item><item><title><![CDATA[New comment by drv in "Disclosing vulnerabilities to protect users"]]></title><description><![CDATA[
<p>2001, not 2011. Time flies. :)</p>
]]></description><pubDate>Tue, 01 Nov 2016 00:24:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=12842302</link><dc:creator>drv</dc:creator><comments>https://news.ycombinator.com/item?id=12842302</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=12842302</guid></item><item><title><![CDATA[New comment by drv in "Adding a phone number to your Google account can make it less secure"]]></title><description><![CDATA[
<p>(Assuming you mean MailChimp)<p>I don't know anything about how MailChimp operates, but a quick search turns up this blog post about how to set up SPF records [1].<p>From there, you can get a list of which IPs MailChimp authorizes as a sender; following the SPF include directive, you can see they specify two IPv4 ranges, both of which are in class C space, so it seems unlikely that MailChimp has their own class A for SMTP senders.<p>[1]: <a href="https://blog.mailchimp.com/senderid-authentication-for-your-mailchimp-campaigns/" rel="nofollow">https://blog.mailchimp.com/senderid-authentication-for-your-...</a></p>
]]></description><pubDate>Thu, 20 Oct 2016 21:03:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=12756140</link><dc:creator>drv</dc:creator><comments>https://news.ycombinator.com/item?id=12756140</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=12756140</guid></item><item><title><![CDATA[How (and why) FreeDOS keeps DOS alive]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.computerworld.com.au/article/603343/how-why-freedos-keeps-dos-alive/">https://www.computerworld.com.au/article/603343/how-why-freedos-keeps-dos-alive/</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=12123039">https://news.ycombinator.com/item?id=12123039</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 19 Jul 2016 17:10:31 +0000</pubDate><link>https://www.computerworld.com.au/article/603343/how-why-freedos-keeps-dos-alive/</link><dc:creator>drv</dc:creator><comments>https://news.ycombinator.com/item?id=12123039</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=12123039</guid></item><item><title><![CDATA[Brillo Common Kernel]]></title><description><![CDATA[
<p>Article URL: <a href="https://android.googlesource.com/device/generic/brillo/+/master/docs/KernelDevelopmentGuide.md">https://android.googlesource.com/device/generic/brillo/+/master/docs/KernelDevelopmentGuide.md</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=11962357">https://news.ycombinator.com/item?id=11962357</a></p>
<p>Points: 15</p>
<p># Comments: 1</p>
]]></description><pubDate>Thu, 23 Jun 2016 16:40:19 +0000</pubDate><link>https://android.googlesource.com/device/generic/brillo/+/master/docs/KernelDevelopmentGuide.md</link><dc:creator>drv</dc:creator><comments>https://news.ycombinator.com/item?id=11962357</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=11962357</guid></item><item><title><![CDATA[New comment by drv in "Windows Kernel-Mode Drivers Written in Rust"]]></title><description><![CDATA[
<p>It has a C API, but everything under the hood is C++. The source is actually public now: <a href="https://github.com/Microsoft/Windows-Driver-Frameworks" rel="nofollow">https://github.com/Microsoft/Windows-Driver-Frameworks</a></p>
]]></description><pubDate>Wed, 13 Apr 2016 19:36:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=11491468</link><dc:creator>drv</dc:creator><comments>https://news.ycombinator.com/item?id=11491468</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=11491468</guid></item><item><title><![CDATA[New comment by drv in "Windows Kernel-Mode Drivers Written in Rust"]]></title><description><![CDATA[
<p>Microsoft has been shipping static analysis tools with the Windows DDK for a long time (originally PREfast[1], now Static Driver Verifier[2]). I believe the static analysis is even integrated with Visual Studio now.<p>[1]: <a href="http://research.microsoft.com/en-us/news/features/prefast.aspx" rel="nofollow">http://research.microsoft.com/en-us/news/features/prefast.as...</a>
[2]: <a href="https://msdn.microsoft.com/en-us/library/windows/hardware/ff552808(v=vs.85).aspx" rel="nofollow">https://msdn.microsoft.com/en-us/library/windows/hardware/ff...</a></p>
]]></description><pubDate>Wed, 13 Apr 2016 19:33:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=11491446</link><dc:creator>drv</dc:creator><comments>https://news.ycombinator.com/item?id=11491446</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=11491446</guid></item><item><title><![CDATA[New comment by drv in "Small, Freestanding Windows Executables"]]></title><description><![CDATA[
<p>In practice, this is probably correct (Microsoft cares a lot about backward compatibility, and many programs depend on MSVCRT.DLL).<p>However, the official word from Microsoft is that MSVCRT.DLL is only intended for operating system components to use, not user applications.  For example, see Raymond Chen's blog on the subject. [1]<p>[1] <a href="https://blogs.msdn.microsoft.com/oldnewthing/20140411-00/?p=1273" rel="nofollow">https://blogs.msdn.microsoft.com/oldnewthing/20140411-00/?p=...</a></p>
]]></description><pubDate>Mon, 01 Feb 2016 16:07:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=11012548</link><dc:creator>drv</dc:creator><comments>https://news.ycombinator.com/item?id=11012548</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=11012548</guid></item><item><title><![CDATA[New comment by drv in "Ffmpeg vulnerability allows the attacker to get files from your server or PC"]]></title><description><![CDATA[
<p>Anyone running FFmpeg[1] on untrusted input without sandboxing of some kind is being extremely negligent. It's around a million lines of C that does tricky file format parsing and decoding.  There will definitely be bugs in any given version, and some of those bugs will be exploitable.<p>[1] Or any related tool (ffprobe, etc.), or any tool that uses the libav* libraries, or really any non-trivial multimedia processing tool...</p>
]]></description><pubDate>Wed, 13 Jan 2016 22:50:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=10898287</link><dc:creator>drv</dc:creator><comments>https://news.ycombinator.com/item?id=10898287</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=10898287</guid></item><item><title><![CDATA[New comment by drv in "Intel Storage Performance Development Kit"]]></title><description><![CDATA[
<p>Ah, I see. The I/OAT DMA copy offload is essentially equivalent to an asynchronous memcpy(), so anything addressable on the memory bus could be a source or destination (with some caveats about alignment requirements and pinned pages if copying to/from RAM).</p>
]]></description><pubDate>Fri, 06 Nov 2015 06:47:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=10518122</link><dc:creator>drv</dc:creator><comments>https://news.ycombinator.com/item?id=10518122</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=10518122</guid></item><item><title><![CDATA[New comment by drv in "Intel Storage Performance Development Kit"]]></title><description><![CDATA[
<p>The SPDK libraries are mostly storage-specific components (the I/OAT DMA engine can be used for generic copy offload, but it is particularly useful for copying between network and storage buffers).  SPDK itself does not provide any network functionality.<p>I am not familiar enough with the Xeon Phi or GPU programming model to say for sure, but they could possibly be used to offload tasks like hashing/dedup or other storage-related functions.</p>
]]></description><pubDate>Fri, 06 Nov 2015 02:53:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=10517596</link><dc:creator>drv</dc:creator><comments>https://news.ycombinator.com/item?id=10517596</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=10517596</guid></item><item><title><![CDATA[New comment by drv in "Intel Storage Performance Development Kit"]]></title><description><![CDATA[
<p>The NVMe driver will only work for a fairly narrow set of uses in which the whole NVMe device(s) can be dedicated to a single application (this is because the user-space application takes control of the NVMe device directly, so the kernel driver can't simultaneously use it).<p>Some of the straightforward use cases would be inside network-attached storage appliances (ideally in conjunction with a user-mode network stack) or in a database (database systems already typically want to avoid any OS interference with storage access).  In general, the NVMe driver can be dropped in fairly easily when existing code is using something like Linux AIO with O_DIRECT on a raw block device; the AIO programming model maps quite directly to the NVMe driver programming model (create a queue, submit I/Os, and poll for completions).</p>
]]></description><pubDate>Fri, 06 Nov 2015 00:47:08 +0000</pubDate><link>https://news.ycombinator.com/item?id=10517189</link><dc:creator>drv</dc:creator><comments>https://news.ycombinator.com/item?id=10517189</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=10517189</guid></item><item><title><![CDATA[New comment by drv in "Intel Storage Performance Development Kit"]]></title><description><![CDATA[
<p>I don't know anything about Omni-Path, sorry.  However, based on the publicly available information, it does look like a very interesting combination.  One major advantage of SPDK over the traditional kernel-provided storage stack is lower latency (by avoiding interrupts and other context switches), and it would fit nicely with a low-latency network stack.</p>
]]></description><pubDate>Fri, 06 Nov 2015 00:40:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=10517166</link><dc:creator>drv</dc:creator><comments>https://news.ycombinator.com/item?id=10517166</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=10517166</guid></item><item><title><![CDATA[New comment by drv in "Intel Storage Performance Development Kit"]]></title><description><![CDATA[
<p>I am an engineer working at Intel on SPDK, and I can answer any technical questions you might have.<p>Currently SPDK consists of a usermode NVMe (PCIe-attached SSD) driver.  We will soon be releasing a usermode driver for the Intel I/OAT DMA engine (copy offload hardware) that is available on some server platforms.</p>
]]></description><pubDate>Fri, 06 Nov 2015 00:08:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=10517036</link><dc:creator>drv</dc:creator><comments>https://news.ycombinator.com/item?id=10517036</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=10517036</guid></item><item><title><![CDATA[New comment by drv in "Intel Storage Performance Development Kit"]]></title><description><![CDATA[
<p>It's certainly true that the polled-mode driver model doesn't interact well when the application needs to use other APIs that don't provide a polled mode.  However, SPDK can be used in conjunction with a polled user-mode network stack so that a storage application can operate fully in user mode without any user-to-kernel context switches or hardware interrupts.<p>It's definitely not a drop-in replacement for a kernel storage stack in the general case, but rather an optimization for specific applications (e.g. storage appliances) that can be structured to take advantage of the polled/no-interrupts model.</p>
]]></description><pubDate>Thu, 05 Nov 2015 22:20:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=10516536</link><dc:creator>drv</dc:creator><comments>https://news.ycombinator.com/item?id=10516536</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=10516536</guid></item></channel></rss>