<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: nickpsecurity</title><link>https://news.ycombinator.com/user?id=nickpsecurity</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Tue, 14 Apr 2026 10:34:44 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=nickpsecurity" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by nickpsecurity in "Ask HN: What Are You Working On? (April 2026)"]]></title><description><![CDATA[
<p>I was bothered by the heavy polarization of Americans, individually and even in churches, that appears to be driven mostly by media outlets who cherry-pick and lie. The Left and Right report specific events so differently that their readers might as well live in different worlds. People need to ditch those sources where possible. If not, they need to have a mix of them while understanding their biases.<p>Originally for churches, my draft article below describes how this problem affects all individuals and institutions. I recommend solutions which include AllSides.com (amazing!) and search engines for retrieving news from multiple outlets. I have a prototype. Progress is slow on my tool because I work two jobs, with my free time mostly going to ministry serving Christ and others.<p><a href="https://heswithjesus.com/mediabias.html" rel="nofollow">https://heswithjesus.com/mediabias.html</a><p>I haven't finished reviewing and adding Drooid yet. I'll still link it because it's a good idea:<p><a href="https://apps.apple.com/us/app/drooid-news-from-all-sides/id6593684010">https://apps.apple.com/us/app/drooid-news-from-all-sides/id6...</a><p><a href="https://play.google.com/store/apps/details?id=social.drooid&hl=en-US">https://play.google.com/store/apps/details?id=social.drooid&...</a><p>(Note: I'm not affiliated with or paid by any of these companies. I am a paying supporter of AllSides because I believe they'll do a lot of good.)</p>
]]></description><pubDate>Mon, 13 Apr 2026 15:25:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=47753377</link><dc:creator>nickpsecurity</dc:creator><comments>https://news.ycombinator.com/item?id=47753377</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47753377</guid></item><item><title><![CDATA[New comment by nickpsecurity in "Small models also found the vulnerabilities that Mythos found"]]></title><description><![CDATA[
<p>We've always had good tools for program analysis and testing. They're usually exorbitantly expensive.<p>I'm hoping the good results with AI models drive down the prices of traditional tools. Then, we can train open models to integrate with them.</p>
]]></description><pubDate>Sun, 12 Apr 2026 02:25:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47735656</link><dc:creator>nickpsecurity</dc:creator><comments>https://news.ycombinator.com/item?id=47735656</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47735656</guid></item><item><title><![CDATA[New comment by nickpsecurity in "How NASA built Artemis II’s fault-tolerant computer"]]></title><description><![CDATA[
<p>The ARINC scheduler, RTOS, and redundancy have been used in safety-critical systems for decades; ARINC dates back to the 90's. Most safety-critical microkernels, like INTEGRITY-178B and LynxOS-178B, came with a layer for that.<p>Their redundancy architecture is interesting. I'd be curious what innovations went into rad-hard fabrication, too. Sandia Secure Processor (aka Score) was a neat example of rad-hard, secure processors.<p>Their simulation systems might be helpful for others, too. We've seen more interest in that from FoundationDB to TigerBeetle.</p>
]]></description><pubDate>Fri, 10 Apr 2026 03:28:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=47713261</link><dc:creator>nickpsecurity</dc:creator><comments>https://news.ycombinator.com/item?id=47713261</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47713261</guid></item><item><title><![CDATA[New comment by nickpsecurity in "Why domain specific LLMs won't exist: an intuition"]]></title><description><![CDATA[
<p>re llm I linked<p>It's designed for drafting legal documents for lawyers. It's pretrained on a ton of court documents.<p>re why generalists are better<p>Much knowledge we have builds on prior knowledge. The prior knowledge is often reused across domains. Analogical reasoning, important in creativity, also connects facts or heuristics across different domains. Also, just being better at English.<p>If training a coding LLM, it needs to understand English, any concepts you type in, intrinsic knowledge about your problems, general heuristics for problem solving, and code, which has comments and issues. The comments and issues might contain or need any of the above.<p>That's why I believe generalist LLM's further trained on code work better than LLM's trained only on code.</p>
]]></description><pubDate>Mon, 06 Apr 2026 14:29:13 +0000</pubDate><link>https://news.ycombinator.com/item?id=47661407</link><dc:creator>nickpsecurity</dc:creator><comments>https://news.ycombinator.com/item?id=47661407</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47661407</guid></item><item><title><![CDATA[New comment by nickpsecurity in "Vulnerability research is cooked"]]></title><description><![CDATA[
<p>I hastily wrote that. I probably should've said high-performance, system languages that can be made safe and turned into a single executable. Preferably with good support for parallelism and concurrency. That's mostly Rust or safe subsets of C and C++ with static analysis.<p>Python can do the algorithms. It's quick to develop and debug. There's tons of existing code in data science and ML fields. It's worse in the other areas I mentioned, though.<p>So, a transpiler that generated Rust or safe C/C++ from legacy and AI-generated Python could be a potent combination. What do you think about that?</p>
]]></description><pubDate>Sun, 05 Apr 2026 14:38:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=47649931</link><dc:creator>nickpsecurity</dc:creator><comments>https://news.ycombinator.com/item?id=47649931</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47649931</guid></item><item><title><![CDATA[New comment by nickpsecurity in "Ubuntu now requires more RAM than Windows 11"]]></title><description><![CDATA[
<p>Thanks for the critiques and the tips. I might try that in future testing.</p>
]]></description><pubDate>Sun, 05 Apr 2026 14:33:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=47649883</link><dc:creator>nickpsecurity</dc:creator><comments>https://news.ycombinator.com/item?id=47649883</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47649883</guid></item><item><title><![CDATA[New comment by nickpsecurity in "Ubuntu now requires more RAM than Windows 11"]]></title><description><![CDATA[
<p>That's a terrific idea. It might address the other problem that I'd have little space for Linux apps. Thanks!</p>
]]></description><pubDate>Sun, 05 Apr 2026 14:28:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=47649819</link><dc:creator>nickpsecurity</dc:creator><comments>https://news.ycombinator.com/item?id=47649819</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47649819</guid></item><item><title><![CDATA[New comment by nickpsecurity in "Why domain specific LLMs won't exist: an intuition"]]></title><description><![CDATA[
<p>We're already using domain-specific LLM's. The only LLM trained lawfully that I know of, KL3M, is also domain-specific. So, the title is already wrong.<p><a href="https://www.kl3m.ai/" rel="nofollow">https://www.kl3m.ai/</a><p>Author is correct that intelligence is compounding. That's why domain-specific models are usually general models converted to domain-specific models by continued pretraining. Even general models, like H2O's, have been improved by constraining them to domain-supporting, general knowledge in a second phase of pretraining. But, they're eventually domain-specific.<p>Outside LLM's, I think most models are domain-specific: genetics, stock prices, ECG/EKG scans, transmission shifting, seismic, climate, etc. LLM's trying to do everything are an exception to the rule that most ML is domain-specific.</p>
]]></description><pubDate>Sun, 05 Apr 2026 14:23:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=47649760</link><dc:creator>nickpsecurity</dc:creator><comments>https://news.ycombinator.com/item?id=47649760</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47649760</guid></item><item><title><![CDATA[New comment by nickpsecurity in "Ubuntu now requires more RAM than Windows 11"]]></title><description><![CDATA[
<p>I was testing them on an HP laptop I bought for $200 with 4GB of RAM.<p>Windows, at its defaults, used so much memory that there was not much left for apps.<p>Ubuntu used 500MB less than Windows in system monitor. I think it was still 1GB or more. It also appeared to run more slowly than it used to on older hardware.<p>Lubuntu used hundreds of MB less than Ubuntu. It could still run the same apps but had fewer features in the UI (e.g. search). It ran lightning fast with more simultaneous apps.<p>(Note: That laptop's Wi-Fi card wouldn't work with any Linux using any technique I tried. Sadly, I had to ditch it.)<p>I also had Lubuntu on a 10+ year old Thinkpad with an i7 (2nd gen). It's been my daily machine for a long time. The newer USB installers wouldn't work with it. While I can't recall the specifics, I finally found a way to load an Ubuntu-like interface or Ubuntu itself through the Lubuntu tech. It's now much slower but still lighter than default Ubuntu or Windows.<p>(Note: Lubuntu was much lighter and faster on a refurbished Dell laptop I tested it on, too.)<p>God blessed me recently through a person who outright gave me an Acer Nitro with an RTX and Windows. My next step is to figure out the safest way to dual boot Windows 11 and Linux for machine learning without destroying the existing filesystem or over-shrinking it.</p>
]]></description><pubDate>Sun, 05 Apr 2026 12:42:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=47648815</link><dc:creator>nickpsecurity</dc:creator><comments>https://news.ycombinator.com/item?id=47648815</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47648815</guid></item><item><title><![CDATA[New comment by nickpsecurity in "Google releases Gemma 4 open models"]]></title><description><![CDATA[
<p>Larger models better understand and reproduce what's in their training set.<p>For example, I used to get verbatim quotes and answers from copyrighted works when I used GPT-3.5. That's what clued me in to the copyright problem. Whereas, the smallest models often produced nonsense about the same topics. Because small models often produce nonsense.<p>You might need to do a new test each time to avoid your old ones being scraped into the training sets. Maybe a new one for each model produced after your last one. Totally unrelated to the last one, too.</p>
]]></description><pubDate>Thu, 02 Apr 2026 22:51:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=47621209</link><dc:creator>nickpsecurity</dc:creator><comments>https://news.ycombinator.com/item?id=47621209</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47621209</guid></item><item><title><![CDATA[New comment by nickpsecurity in "Vulnerability research is cooked"]]></title><description><![CDATA[
<p>The silo'd codebases I was referring to are verification tools they produce. They're used to prevent attacks. Each tool has one or more capabilities others lack. If combined, they'd catch many problems.<p>Examples: KLEE test generator; combinatorial or path-based testing; CPAChecker; race detectors for concurrency; SIF information flow control; symbolic execution; Why3 verifier which commercial tools already build on.</p>
]]></description><pubDate>Tue, 31 Mar 2026 17:23:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=47590647</link><dc:creator>nickpsecurity</dc:creator><comments>https://news.ycombinator.com/item?id=47590647</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47590647</guid></item><item><title><![CDATA[New comment by nickpsecurity in "Vulnerability research is cooked"]]></title><description><![CDATA[
<p>They're great at Python and Javascript, which have lots of tooling. My idea was to make X-to-safe-lang translators, X initially being Python and Javascript. Let the tools keep generating what they're good at. The simpler translators make it safe and fast.<p>If translated to C or Java, we can use decades' worth of tools for static analysis and test generation. While in Python and Javascript, it's easier to analyze and live debug by humans.<p>Multiple wins if the translators can be built.</p>
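To make the translator idea concrete, here's a toy sketch of my own (not any existing tool, and every name in it is illustrative): the core of an X-to-safe-lang translator is just walking the source AST and emitting target syntax. This one handles only integer arithmetic in a single-return Python function and emits C:

```python
import ast

# Map Python binary operators to their C spellings (ints only for this toy).
BINOPS = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*"}

def expr_to_c(node):
    """Recursively translate a Python expression node into C source text."""
    if isinstance(node, ast.BinOp):
        op = BINOPS[type(node.op)]
        return f"({expr_to_c(node.left)} {op} {expr_to_c(node.right)})"
    if isinstance(node, ast.Name):
        return node.id
    if isinstance(node, ast.Constant) and isinstance(node.value, int):
        return str(node.value)
    raise NotImplementedError(f"unsupported node: {ast.dump(node)}")

def func_to_c(source):
    """Translate one Python function of ints, with a single return, into C."""
    fn = ast.parse(source).body[0]          # expect a lone FunctionDef
    args = ", ".join(f"int {a.arg}" for a in fn.args.args)
    ret = fn.body[0]                        # expect a lone return statement
    assert isinstance(ret, ast.Return)
    return f"int {fn.name}({args}) {{ return {expr_to_c(ret.value)}; }}"

print(func_to_c("def axpy(a, x, y): return a * x + y"))
# emits: int axpy(int a, int x, int y) { return ((a * x) + y); }
```

A real translator would need types, control flow, and memory semantics, which is where the hard work is; but the emitted C is exactly what the decades of static analysis and test-generation tools can then chew on.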
]]></description><pubDate>Tue, 31 Mar 2026 03:01:10 +0000</pubDate><link>https://news.ycombinator.com/item?id=47582269</link><dc:creator>nickpsecurity</dc:creator><comments>https://news.ycombinator.com/item?id=47582269</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47582269</guid></item><item><title><![CDATA[New comment by nickpsecurity in "Vulnerability research is cooked"]]></title><description><![CDATA[
<p>You've never seen the full power of static analysis, dynamic analysis, and test generation. The best examples were always silo'd, academic codebases. If they were combined, and matured, the results would be amazing. I wanted to do that back when I was in INFOSEC.<p>That doesn't even account for lightweight, formal methods. SPARK Ada, Jahob verification system with its many solvers, Design by Contract, LLM's spitting this stuff out from human descriptions, type systems like Rust's, etc. Speed-run (with AI) producing those, with the unsafe stuff checked by the combo of tools I already described.</p>
]]></description><pubDate>Tue, 31 Mar 2026 02:56:35 +0000</pubDate><link>https://news.ycombinator.com/item?id=47582237</link><dc:creator>nickpsecurity</dc:creator><comments>https://news.ycombinator.com/item?id=47582237</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47582237</guid></item><item><title><![CDATA[New comment by nickpsecurity in "Vulnerability research is cooked"]]></title><description><![CDATA[
<p>Also, synthetic data and templates to help them discover new vulnerabilities or make agents work on things they're bad at. They differentiate with their prompts or specialist models.<p>Also, like ForAllSecure's Mayhem, I think they can differentiate on automatic patching that's reliable and secure. Maybe test generation, too, that does full coverage. They become drive-by verification and validation specialists who also fix your stuff for you.</p>
]]></description><pubDate>Tue, 31 Mar 2026 02:49:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=47582193</link><dc:creator>nickpsecurity</dc:creator><comments>https://news.ycombinator.com/item?id=47582193</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47582193</guid></item><item><title><![CDATA[New comment by nickpsecurity in "LLMs Do Not Grade Essays Like Humans"]]></title><description><![CDATA[
<p>I agree with you on how their quality is spread out. But, this...<p>"School-type long essays only seem to exist in academia."<p>Does an AI know what an essay is? Would it consider any long, descriptive post an essay? Especially if pretraining data has many people describing long posts as essays or "essay-like?" Or only actual essays? And what is an actual essay again?<p>I think AI's might have different interpretations due to the above questions. They might also conflate essays with longer, detailed, or argumentative posts. We'd have to put a bunch of posts into a bunch of AI's to ask how they classify them.</p>
]]></description><pubDate>Sun, 29 Mar 2026 22:48:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=47568253</link><dc:creator>nickpsecurity</dc:creator><comments>https://news.ycombinator.com/item?id=47568253</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47568253</guid></item><item><title><![CDATA[New comment by nickpsecurity in "LLMs Do Not Grade Essays Like Humans"]]></title><description><![CDATA[
<p>I feel like this is actually human-like but like the average human in the pretraining data. Let's look:<p>1. They reward short or under-developed essays. I'd say most online content, especially with high upvotes next to the post, fits that. Social media surely does.<p>2. If it's longer posts, the system starts nitpicking it on minor details, like grammar. We see this even on Hacker News, a community valuing quality, with some longer submissions. It's also a debate tactic to derail opponents' better arguments in many discussions which are in their pretraining data.<p>3. Essays with more praise get higher scores and with more criticism get lower scores. "Get on the Bandwagon" Effect. Echo chambers. One person writes a thing followed by 5-20 people confirming it. That's probably in the pretraining data. It might survive some filtering/cleaning strategies, too.<p>So, no, I think these AI's are acting way too human. They need to fine-tune them to act like more reasonable humans. That will initially take RLHF data for many types of situations. Given pretraining bias, they might also have to train them to drop the bad habits the article mentions.</p>
]]></description><pubDate>Sun, 29 Mar 2026 17:35:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=47565278</link><dc:creator>nickpsecurity</dc:creator><comments>https://news.ycombinator.com/item?id=47565278</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47565278</guid></item><item><title><![CDATA[New comment by nickpsecurity in "CERN uses ultra-compact AI models on FPGAs for real-time LHC data filtering"]]></title><description><![CDATA[
<p>That we're building theories on what's left of mostly-trashed data has scientific implications. Most people hearing LHC proved something probably didn't think a preprocessor threw away most observations first. That layer of interpretation could cause errors.<p>I wonder how much independent review went into that step.</p>
]]></description><pubDate>Sun, 29 Mar 2026 03:51:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=47560254</link><dc:creator>nickpsecurity</dc:creator><comments>https://news.ycombinator.com/item?id=47560254</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47560254</guid></item><item><title><![CDATA[New comment by nickpsecurity in "CERN uses ultra-compact AI models on FPGAs for real-time LHC data filtering"]]></title><description><![CDATA[
<p>It's a discussion forum. Saying people are all wrong with no proof comes off as arrogant and isn't helpful. If you have links to examples, you can simply say, "Here's some prior art or previous work in this area you all might like."<p>People would probably upvote that.</p>
]]></description><pubDate>Sun, 29 Mar 2026 03:48:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=47560238</link><dc:creator>nickpsecurity</dc:creator><comments>https://news.ycombinator.com/item?id=47560238</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47560238</guid></item><item><title><![CDATA[New comment by nickpsecurity in "JD Vance Tells Me That UFOs Are Demons"]]></title><description><![CDATA[
<p>Christians usually only believe in God, angels, humans, and animals. That would mean intelligent UFO's might be angels or demons. While that's speculation, one guy did an interesting test of it.<p>Non-believers are <i>much</i> more vulnerable to demonic activity than believers. There's also a goal where distracting them from Christ is all they need to stay on the road to Hell. So, UFO sightings should be much higher in areas with non-believers than areas with Christians. He shared his data here:<p><a href="https://web.archive.org/web/20220521004104/https://www.godandscience.org/doctrine/ufo_existence_of_demons.html" rel="nofollow">https://web.archive.org/web/20220521004104/https://www.godan...</a><p>(Note: I haven't peer reviewed his methodology.)</p>
]]></description><pubDate>Sat, 28 Mar 2026 02:20:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=47550895</link><dc:creator>nickpsecurity</dc:creator><comments>https://news.ycombinator.com/item?id=47550895</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47550895</guid></item><item><title><![CDATA[New comment by nickpsecurity in "Desk for people who work at home with a cat"]]></title><description><![CDATA[
<p>Nah, they like to lie on the laptop to eliminate their competition. They want all our attention.</p>
]]></description><pubDate>Fri, 27 Mar 2026 16:41:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=47545026</link><dc:creator>nickpsecurity</dc:creator><comments>https://news.ycombinator.com/item?id=47545026</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47545026</guid></item></channel></rss>