<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: ResearchAtPlay</title><link>https://news.ycombinator.com/user?id=ResearchAtPlay</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 06 Apr 2026 02:06:04 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=ResearchAtPlay" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by ResearchAtPlay in "German implementation of eIDAS will require an Apple/Google account to function"]]></title><description><![CDATA[
<p>Do you happen to know if German citizens can obtain a certificate to sign PDFs (from the government / for free)?<p>Several paid providers for X.509 certificates exist, but document signing certificates cost around 80 € per year [0]. And if I want duplicate X.509 certificates for my redundant YubiKeys, the cost doubles.<p>Other providers require an initial deposit and then charge per signature [1], which leads to opaque pricing. In the interest of open commerce, I strongly believe that securely signing an electronic document should cost the same as my manual signature, i.e. nothing.<p>A partial solution already exists because I can use my electronic ID card with the AusweisApp to prove my identity when interacting with German authorities. This feature is generally useful because I live outside of the EU, but I especially appreciate that I can have my OpenPGP key signed by Governikus (a government provider) to prove that the key belongs to my name [2].<p>Technically, I should be able to use my certified PGP key to sign documents, but in practice most non-techies don't know how to validate my signature. For the average user opening my signed PDF in Adobe Reader to see the green check mark, I would need an X.509 certificate from a trusted Certificate Authority.<p>[0] <a href="https://shop.certum.eu/documentsigning-certifcates.html" rel="nofollow">https://shop.certum.eu/documentsigning-certifcates.html</a><p>[1] <a href="https://www.entrust.com/products/electronic-digital-signing" rel="nofollow">https://www.entrust.com/products/electronic-digital-signing</a><p>[2] <a href="https://pgp.governikus.de/wizard/requirements" rel="nofollow">https://pgp.governikus.de/wizard/requirements</a></p>
]]></description><pubDate>Sun, 05 Apr 2026 02:12:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=47645513</link><dc:creator>ResearchAtPlay</dc:creator><comments>https://news.ycombinator.com/item?id=47645513</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47645513</guid></item><item><title><![CDATA[New comment by ResearchAtPlay in "Canopy Height Maps v2"]]></title><description><![CDATA[
<p>Fascinating work and inspiring application of the underlying DINOv3 image segmentation model!<p>The blog post and paper [1] describe a promising approach to solving related problems at previously impossible scale and quality: I am currently exploring methods to better represent seasonal land cover changes that would improve wind power generation forecasting and this paper provides a great starting point.<p>I hope DINOv3 can inspire more work like this - and I would encourage any curious mind to play with that model! I was amazed by its capability to distinguish between fine object details. For example, in a photo of a bicycle, the patch embeddings cleanly separated the background from the individual spokes of the wheel.<p>[1] <a href="https://arxiv.org/abs/2603.06382" rel="nofollow">https://arxiv.org/abs/2603.06382</a></p>
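The spokes-versus-background separation can be made visible by projecting patch embeddings onto their top principal components, the usual way DINO-style features are inspected. A minimal numpy sketch, with synthetic two-cluster embeddings standing in for real DINOv3 patch features (the shapes and function name are illustrative, not from the model's API):

```python
import numpy as np

def pca_project(patches: np.ndarray, k: int = 3) -> np.ndarray:
    """Project (N, D) patch embeddings onto their top-k principal components."""
    centered = patches - patches.mean(axis=0)
    # SVD of the centered matrix yields the principal directions in vt
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T  # (N, k); k=3 is often mapped to RGB for display

# Synthetic stand-in: two clusters of patch embeddings ("background" vs "spokes")
rng = np.random.default_rng(0)
background = rng.normal(0.0, 0.1, size=(100, 64))
foreground = rng.normal(1.0, 0.1, size=(28, 64))
patches = np.vstack([background, foreground])

proj = pca_project(patches)
print(proj.shape)  # (128, 3)
# The first component cleanly separates the two clusters
```

With real features, reshaping the first component back to the patch grid gives the familiar foreground/background maps.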
]]></description><pubDate>Tue, 17 Mar 2026 06:20:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47409276</link><dc:creator>ResearchAtPlay</dc:creator><comments>https://news.ycombinator.com/item?id=47409276</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47409276</guid></item><item><title><![CDATA[New comment by ResearchAtPlay in "Nvidia DGX Spark: Is DGX Spark Blackwell?"]]></title><description><![CDATA[
<p>This article discusses the DGX Spark GB10 GPU architecture from a hardware engineering perspective. The authors explain the trade-offs between datacenter Blackwell and consumer Blackwell chips, list key hardware features that evolved between GPU generations, and highlight the resulting challenges on the software side, including incomplete support from Triton and Flashinfer as well as various kernel incompatibilities.</p>
]]></description><pubDate>Fri, 20 Feb 2026 12:32:17 +0000</pubDate><link>https://news.ycombinator.com/item?id=47087205</link><dc:creator>ResearchAtPlay</dc:creator><comments>https://news.ycombinator.com/item?id=47087205</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47087205</guid></item><item><title><![CDATA[Nvidia DGX Spark: Is DGX Spark Blackwell?]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.backend.ai/blog/2026-02-is-dgx-spark-actually-a-blackwell">https://www.backend.ai/blog/2026-02-is-dgx-spark-actually-a-blackwell</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47087204">https://news.ycombinator.com/item?id=47087204</a></p>
<p>Points: 1</p>
<p># Comments: 1</p>
]]></description><pubDate>Fri, 20 Feb 2026 12:32:17 +0000</pubDate><link>https://www.backend.ai/blog/2026-02-is-dgx-spark-actually-a-blackwell</link><dc:creator>ResearchAtPlay</dc:creator><comments>https://news.ycombinator.com/item?id=47087204</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47087204</guid></item><item><title><![CDATA[New comment by ResearchAtPlay in "Show HN: Fashion Shopping with Nearest Neighbors"]]></title><description><![CDATA[
<p>This is great! I've forwarded the site to my wife.<p>Would you mind sharing how you trained the model to produce the vectors? Are you using a vision transformer under the hood with contrastive training against price, product category, etc.?<p>EDIT: I see that the training script is included in the repo and you are using a CNN. Inspiring work!</p>
]]></description><pubDate>Sat, 15 Mar 2025 16:31:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=43373523</link><dc:creator>ResearchAtPlay</dc:creator><comments>https://news.ycombinator.com/item?id=43373523</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43373523</guid></item><item><title><![CDATA[New comment by ResearchAtPlay in "How we used GPT-4o for image detection with 350 similar illustrations"]]></title><description><![CDATA[
<p>Makes sense. My main takeaway from the ColPali paper (and your comments) is that ColPali works best for document RAG, whereas vision model embeddings are best used for image similarity search or sentiment analysis. So to answer my own question: The best model to use depends on the application.</p>
]]></description><pubDate>Tue, 14 Jan 2025 19:56:32 +0000</pubDate><link>https://news.ycombinator.com/item?id=42702929</link><dc:creator>ResearchAtPlay</dc:creator><comments>https://news.ycombinator.com/item?id=42702929</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42702929</guid></item><item><title><![CDATA[New comment by ResearchAtPlay in "How we used GPT-4o for image detection with 350 similar illustrations"]]></title><description><![CDATA[
<p>Thanks for the link to the ColPali implementation - interesting! I am specifically interested in evaluation benchmarks for different image embedding models.<p>I see the ColiVara-Eval repo in your link. If I understand correctly, ColQwen2 is the current leader followed closely by ColPali when applying those models for RAG with documents.<p>But how do those models compare to each other and to the llama3.2-vision embeddings when applied to, for example, sentiment analysis for photos? Do benchmarks like that exist?</p>
]]></description><pubDate>Tue, 14 Jan 2025 17:24:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=42700464</link><dc:creator>ResearchAtPlay</dc:creator><comments>https://news.ycombinator.com/item?id=42700464</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42700464</guid></item><item><title><![CDATA[New comment by ResearchAtPlay in "How we used GPT-4o for image detection with 350 similar illustrations"]]></title><description><![CDATA[
<p>Yes, you could implement image similarity search using embeddings: Create embeddings for the entire image set, save the embeddings in a database, and add embeddings incrementally as new images come in. To search for a similar image, create the embedding for the image that you are looking for and compute the cosine similarity between that embedding and the embeddings in your database. The closer the cosine similarity is to 1.0 the more similar the images.<p>For choosing a model, the article mentions the AWS Titan multimodal model, but you’d have to pay for API access to create the embeddings. Alternatively, self-hosting the CLIP model [0] to create embeddings would avoid API costs.<p>Follow-up question: Would the embeddings from the llama3.2-vision models be of higher quality (contain more information) than the original CLIP model?<p>The llama vision models use CLIP under the hood, but they add a projection head to align with the text model and the CLIP weights are mutated during alignment training, so I assume the llama vision embeddings would be of higher quality, but I don’t know for sure. Does anybody know?<p>(I would love to test this quality myself but Ollama does not yet support creating image embeddings from the llama vision models - a feature request with several upvotes has been opened [1].)<p>[0] <a href="https://github.com/openai/CLIP">https://github.com/openai/CLIP</a><p>[1] <a href="https://github.com/ollama/ollama/issues/5304">https://github.com/ollama/ollama/issues/5304</a></p>
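The workflow described above can be sketched in a few lines of numpy. Random vectors stand in for real embeddings from CLIP or Titan, and a near-duplicate of one row plays the role of the query image (function names are illustrative):

```python
import numpy as np

def build_index(embeddings: np.ndarray) -> np.ndarray:
    """L2-normalize rows so cosine similarity reduces to a dot product."""
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    return embeddings / norms

def most_similar(index: np.ndarray, query: np.ndarray, top_k: int = 5):
    """Return (indices, scores) of the top_k most cosine-similar rows."""
    q = query / np.linalg.norm(query)
    scores = index @ q                       # cosine similarities in [-1, 1]
    order = np.argsort(scores)[::-1][:top_k]
    return order, scores[order]

# Toy database: 1000 random "image embeddings" plus a query that is
# a slightly perturbed copy of row 42
rng = np.random.default_rng(1)
db = rng.normal(size=(1000, 512))
query = db[42] + rng.normal(scale=0.01, size=512)

index = build_index(db)
ids, scores = most_similar(index, query)
# Row 42 ranks first with a score close to 1.0
```

For large collections, the brute-force dot product would be replaced by an approximate nearest-neighbor index, but the normalization-plus-dot-product structure stays the same.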
]]></description><pubDate>Tue, 14 Jan 2025 16:14:47 +0000</pubDate><link>https://news.ycombinator.com/item?id=42699117</link><dc:creator>ResearchAtPlay</dc:creator><comments>https://news.ycombinator.com/item?id=42699117</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42699117</guid></item><item><title><![CDATA[New comment by ResearchAtPlay in "Bureaucrat Mode"]]></title><description><![CDATA[
<p>This article fundamentally misunderstands the role and purpose of bureaucracy.<p>Bureaucracy is a tool to manage large, complex, and heterogeneous systems. Ideally, efficient and effective bureaucracy goes unnoticed. Why can I plug my laptop into the power outlet anywhere in Miami or in Vancouver and it just works? Why can I drive on the right side of the road from Toronto to San Diego and be reasonably sure that everyone else will drive on the right side as well?<p>Because humans have self-organized into a multitude of governments, standards organizations, and corporations that all align to produce the same shape of power plug and teach compatible rules-of-the-road across vast geographical distances and unrelated communities. Without bureaucracy, we humans would not be capable of building a global society.<p>Pointing to broken, ineffective, and inefficient processes to scapegoat “the bureaucrat” reveals an ignorance of the underlying mechanisms that make human society function.<p>EDIT:
To those of you downvoting this comment, please let me elaborate.<p>I am tired of the trope of the lazy bureaucrat because I refuse to believe that inefficient government and corporations are inevitable.<p>I do believe that we must strive for efficient and effective government to improve our society because the potential benefits are immense.<p>Those improvements must be driven by competent and qualified leaders who understand and foster the advantages that result from collaboration, communication, and making choices that benefit society as a whole.<p>A failure of bureaucracy is a failure of leadership!</p>
]]></description><pubDate>Fri, 04 Oct 2024 23:09:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=41746365</link><dc:creator>ResearchAtPlay</dc:creator><comments>https://news.ycombinator.com/item?id=41746365</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=41746365</guid></item><item><title><![CDATA[New comment by ResearchAtPlay in "IBDNS: Intentionally Broken DNS server"]]></title><description><![CDATA[
<p>The purpose of this tool is to test whether a DNS server implementation follows (or does not follow) the specifications:<p><i>IBDNS fills a gap in the universe of DNS test tools by offering the possibility of deviating intentionally and on demand from the DNS specifications, and thus simulating incorrect behaviour of authoritative name servers.</i></p>
]]></description><pubDate>Wed, 29 May 2024 16:42:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=40513949</link><dc:creator>ResearchAtPlay</dc:creator><comments>https://news.ycombinator.com/item?id=40513949</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=40513949</guid></item><item><title><![CDATA[New comment by ResearchAtPlay in "Live dashboard of carbon dioxide removal purchases"]]></title><description><![CDATA[
<p>Voluntary carbon markets typically trade in both avoidance and removal (i.e. sequestration) credits. Since this site is called Carbon Dioxide Removal, I would assume the listed trades only cover sequestration, and my cursory review of the listed removal methods appears to confirm that only sequestered carbon trading is listed (though the methods section does not state this explicitly).<p>Regarding "is it paying for stuff that would have happened anyway, or is it somehow net removal?": One of the requirements for generating carbon credits is additionality, i.e. a project should only receive carbon credits if it would not be viable without the revenue from those credits. But as you point out, determining additionality is rather difficult and often fuzzy.</p>
]]></description><pubDate>Sun, 04 Jun 2023 22:49:28 +0000</pubDate><link>https://news.ycombinator.com/item?id=36190302</link><dc:creator>ResearchAtPlay</dc:creator><comments>https://news.ycombinator.com/item?id=36190302</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36190302</guid></item><item><title><![CDATA[New comment by ResearchAtPlay in "Live dashboard of carbon dioxide removal purchases"]]></title><description><![CDATA[
<p>Fantastic! This site provides an intuitive overview of carbon dioxide removal purchases sourced from six different marketplaces and two registries. Metrics include carbon credit sales and deliveries, prices, names of suppliers and purchasers, and the method of carbon dioxide removal (e.g. direct air capture or biochar production).<p>I find this site useful for getting an overview of the development of voluntary carbon markets and their recent rapid expansion. Voluntary carbon markets are fractured across several marketplaces and registries, so getting an aggregate overview of all markets was somewhat difficult prior to discovering this site. Thank you for the submission!</p>
]]></description><pubDate>Sun, 04 Jun 2023 22:27:30 +0000</pubDate><link>https://news.ycombinator.com/item?id=36190180</link><dc:creator>ResearchAtPlay</dc:creator><comments>https://news.ycombinator.com/item?id=36190180</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36190180</guid></item><item><title><![CDATA[Using iPhone 12 Lidar to Map Two Unconnected Underground Spaces]]></title><description><![CDATA[
<p>Article URL: <a href="https://www.youtube.com/watch?v=YOuYuL8OJDg">https://www.youtube.com/watch?v=YOuYuL8OJDg</a></p>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=31623346">https://news.ycombinator.com/item?id=31623346</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 04 Jun 2022 18:01:28 +0000</pubDate><link>https://www.youtube.com/watch?v=YOuYuL8OJDg</link><dc:creator>ResearchAtPlay</dc:creator><comments>https://news.ycombinator.com/item?id=31623346</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=31623346</guid></item><item><title><![CDATA[New comment by ResearchAtPlay in "Analysis of noise on construction sites of high-rise buildings"]]></title><description><![CDATA[
<p>This paper analyzes measurements of sound pressure levels on construction sites in Brazil to argue for better worker protection. The measurements provide valuable evidence for that goal, but the manuscript would benefit from a more comprehensive discussion of the effectiveness of control measures.<p>For example, the authors suggest rearranging machinery to limit the number of workers exposed to its noise, but I would have liked to read at least a cursory analysis of how much such measures can reduce noise, and whether administrative or engineering solutions can sufficiently meet safe noise thresholds. If such an analysis wasn't conducted within the scope of this work, then a literature review could inform that discussion.<p>Edit: added lit. review recommendation</p>
]]></description><pubDate>Sun, 08 May 2022 23:58:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=31309154</link><dc:creator>ResearchAtPlay</dc:creator><comments>https://news.ycombinator.com/item?id=31309154</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=31309154</guid></item><item><title><![CDATA[New comment by ResearchAtPlay in "As Texas Went Dark, the State Paid Natural-Gas Companies to Go Offline"]]></title><description><![CDATA[
<p>Does the article mention what type of natural gas facilities were shut down?<p>Large loads (compressors, pumps) usually consume a share of the gas they transport because gas is so much lower cost than electricity. I wasn't aware that any facilities have a large enough electrical load to become part of the voluntary load shedding program.</p>
]]></description><pubDate>Thu, 20 May 2021 03:45:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=27217144</link><dc:creator>ResearchAtPlay</dc:creator><comments>https://news.ycombinator.com/item?id=27217144</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=27217144</guid></item><item><title><![CDATA[New comment by ResearchAtPlay in "Daniel Stenberg (curl) has been denied entry to the US for 870 days"]]></title><description><![CDATA[
<p>According to IRCC, 80% of Express Entry applications for permanent residency are approved in less than 6 months [1].<p>There are many different residency programs, some of which take longer than others, but the Express Entry program is probably the default option for most HN readers.<p>[1] <a href="https://www.canada.ca/en/immigration-refugees-citizenship/services/application/check-processing-times.html" rel="nofollow">https://www.canada.ca/en/immigration-refugees-citizenship/se...</a> economic immigration > skilled worker (federal) > I haven't applied yet
]]></description><pubDate>Fri, 04 Sep 2020 00:50:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=24370667</link><dc:creator>ResearchAtPlay</dc:creator><comments>https://news.ycombinator.com/item?id=24370667</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=24370667</guid></item><item><title><![CDATA[New comment by ResearchAtPlay in "Multiple service providers are blocking DuckDuckGo in India"]]></title><description><![CDATA[
<p>I am also in Canada and cannot load DDG via my university's vpn, but DDG works fine on my home internet connection.</p>
]]></description><pubDate>Wed, 01 Jul 2020 18:33:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=23704819</link><dc:creator>ResearchAtPlay</dc:creator><comments>https://news.ycombinator.com/item?id=23704819</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23704819</guid></item><item><title><![CDATA[New comment by ResearchAtPlay in "Presentation Rules"]]></title><description><![CDATA[
<p>I believe point 12 advises avoiding slides that try to impress through obscure complexity.<p>An offending example might be a slide showing complicated equations that remain unexplained by the presenter. I've observed many presentations where such slides are introduced as "this is the equation used to derive value X from earlier, but I am just going to move on to the next slide..."<p>edit: format</p>
]]></description><pubDate>Sat, 06 Jun 2020 03:48:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=23436658</link><dc:creator>ResearchAtPlay</dc:creator><comments>https://news.ycombinator.com/item?id=23436658</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23436658</guid></item><item><title><![CDATA[New comment by ResearchAtPlay in "Next-generation solar cells pass strict international tests"]]></title><description><![CDATA[
<p>OP is probably correct, and their estimated average output of 10 to 20 W/m^2 for car-mounted PV systems may even be on the high side.<p>Rooftop-mounted, optimally oriented PV systems at Seattle latitude have an annual capacity factor of ~14%. That means a 1 kW system will average 140 W of output over a year. Such a system has a module area of about 4-7 m^2, equivalent to the area available on a sedan.<p>On a car with suboptimal module orientation and solar exposure, "power density" (RE: Smil) is really low.</p>
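The arithmetic behind this estimate as a quick sketch (the capacity factor and system size are the rough figures from above; taking the midpoint of the 4-7 m^2 area range is my own assumption):

```python
# Back-of-the-envelope check of the car-PV power density estimate.
capacity_factor = 0.14     # rooftop PV near Seattle latitude, optimally oriented
rated_power_w = 1000       # a 1 kW system
module_area_m2 = 5.5       # assumed midpoint of a 4-7 m^2 sedan-sized array

avg_output_w = rated_power_w * capacity_factor     # ~140 W averaged over a year
power_density = avg_output_w / module_area_m2      # W per m^2 of module area

print(round(avg_output_w), round(power_density, 1))  # prints: 140 25.5
```

So even the optimal rooftop case yields only ~25 W/m^2, which makes 10 to 20 W/m^2 for a suboptimally oriented car array plausible at best.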
]]></description><pubDate>Sat, 23 May 2020 07:42:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=23280839</link><dc:creator>ResearchAtPlay</dc:creator><comments>https://news.ycombinator.com/item?id=23280839</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23280839</guid></item><item><title><![CDATA[New comment by ResearchAtPlay in "Next-generation solar cells pass strict international tests"]]></title><description><![CDATA[
<p>Efficiency improvements amplify cost reduction per kWh of electricity: An ever-growing share of PV system costs stems from everything that is not a module (inverter, cables, labour, etc.). Modules are becoming cheaper faster than other components, so reducing module costs further has diminishing returns.<p>In contrast, taking module efficiency from 20% to 21% increases electricity generation by 5% and thus reduces the cost per kWh by roughly 5%.</p>
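A one-line check of that ratio, under the simplifying assumption that total system cost stays fixed while the same area generates more energy (strictly, 5% more generation lowers cost per kWh by 1/1.05, about 4.8%, which rounds to the 5% figure):

```python
# Going from 20% to 21% module efficiency: same area, same system cost,
# proportionally more energy, hence proportionally cheaper electricity.
old_eff, new_eff = 0.20, 0.21

extra_generation = new_eff / old_eff - 1      # +5% energy from the same area
cost_change = 1 / (1 + extra_generation) - 1  # about -4.8% cost per kWh

print(f"{extra_generation:+.1%}, {cost_change:+.1%}")
```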
]]></description><pubDate>Sat, 23 May 2020 07:03:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=23280659</link><dc:creator>ResearchAtPlay</dc:creator><comments>https://news.ycombinator.com/item?id=23280659</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=23280659</guid></item></channel></rss>