<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: Mockapapella</title><link>https://news.ycombinator.com/user?id=Mockapapella</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Mon, 06 Apr 2026 00:52:44 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=Mockapapella" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by Mockapapella in "Ask HN: What Are You Working On? (March 2026)"]]></title><description><![CDATA[
<p>I'm working on a TUI-based agent orchestrator called Tenex: <a href="https://github.com/Mockapapella/tenex" rel="nofollow">https://github.com/Mockapapella/tenex</a><p>It goes a long way toward solving the "review" bottleneck people have been experiencing (though admittedly it doesn't fix all of it), and I'm in the process of adding support for Mac and Windows (WSL for now, native later).<p>Some of the features I've had for a while, like multi-project agent worktrees, have since been added to the Codex App, so it's good to see the practice proliferating: without tooling like this, wrangling 20+ agents at once is a clusterf**.<p>I'm feeling the itch to have this working on mobile as well so I might prioritize that, and I'm planning to have a meta-agent that can talk to Tenex over some kind of API via tool calls so you can say things like "In project 2, spawn 5 agents, 2 codex, 2 claude, 1 kimi, use 5.2 and 5.4 for codex, use Opus for the claudes, and once kimi is finished launch 10 review agents on its code".</p>
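<p>For a sense of what one of those tool calls might look like as a structured payload: the field names and shape below are entirely invented for illustration; Tenex doesn't expose an API like this yet.

```python
# Hypothetical payload a meta-agent might send to a (not-yet-existent)
# Tenex API to express "in project 2, spawn 5 agents, then review".
# Every field name here is illustrative, not part of any real schema.
spawn_request = {
    "project": 2,
    "agents": [
        {"kind": "codex", "model": "5.2"},
        {"kind": "codex", "model": "5.4"},
        {"kind": "claude", "model": "opus", "count": 2},
        {"kind": "kimi", "count": 1},
    ],
    # Trigger: once the kimi agent finishes, launch 10 review agents.
    "on_finish": {"agent": "kimi", "action": "launch_review", "count": 10},
}

def total_agents(request):
    """Count the agents a spawn request would launch (default count is 1)."""
    return sum(a.get("count", 1) for a in request["agents"])
```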
]]></description><pubDate>Mon, 09 Mar 2026 02:08:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=47304057</link><dc:creator>Mockapapella</dc:creator><comments>https://news.ycombinator.com/item?id=47304057</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47304057</guid></item><item><title><![CDATA[New comment by Mockapapella in "Threads edges out X in daily mobile users, new data shows"]]></title><description><![CDATA[
<p>I have a small-medium following on Threads in the AI/tech space (~7K) and regularly post there, and I've started posting on Twitter a little more recently, so I feel like I can provide some extra insight that might be missing from this thread.<p>The exposure to "what's what" in the tech space is clearly better on Twitter, and it isn't even close. Nearly all tech news breaks on Twitter first, then flows downstream to Threads. For everything else it's hard to say, because I aggressively curate my social media feeds, so I don't get much content outside of my bubble.<p>The tech information I tend to get on Threads is more personal updates on mutuals' projects and the niche eureka moments they have. There are maybe a dozen of these accounts I regularly see and interact with, and maybe a couple dozen more that pop up occasionally. But again, this is after aggressively curating my feed and maintaining it for ~3 years.<p>I have a feeling my efforts could have yielded better results on Twitter had I spent all that time posting and interacting there instead of Threads (or in addition to it), hence my increased posting there.</p>
]]></description><pubDate>Mon, 19 Jan 2026 23:53:15 +0000</pubDate><link>https://news.ycombinator.com/item?id=46686218</link><dc:creator>Mockapapella</dc:creator><comments>https://news.ycombinator.com/item?id=46686218</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46686218</guid></item><item><title><![CDATA[New comment by Mockapapella in "Ask HN: What Are You Working On? (December 2025)"]]></title><description><![CDATA[
<p><a href="https://github.com/Mockapapella/tenex" rel="nofollow">https://github.com/Mockapapella/tenex</a><p>Tenex, a TUI for managing swarms of AI agents.<p>I noticed that as I use agents more and more, my PRs are getting more ambitious (read: bigger diffs), and when I was reviewing them with agents I noticed that the first review wouldn't catch anything but the second would. This decreased my confidence in their capabilities, so I decided to make a tool that lets me run 10 review agents at once, then aggregate their findings for a single agent to assess and address.<p>I was using Codex at the time, so Tenex is kind of a play on "10 Codex agents" and the "10x engineer" meme.<p>I've since added a lot of features and just today got to use it for the first time in a production system. Some rough edges for sure, but as I'm using it, any time anything feels "off" or unintuitive I'm taking notes to improve it.<p>Fun fact: on my machine, launching 50x Claude Code instances very nearly crashes it, but I was able to launch 100x Codex instances no problem. I tried 500x, but I ran into rate limits before they could all spawn :(</p>
]]></description><pubDate>Mon, 15 Dec 2025 06:32:50 +0000</pubDate><link>https://news.ycombinator.com/item?id=46271120</link><dc:creator>Mockapapella</dc:creator><comments>https://news.ycombinator.com/item?id=46271120</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46271120</guid></item><item><title><![CDATA[New comment by Mockapapella in "The "confident idiot" problem: Why AI needs hard rules, not vibe checks"]]></title><description><![CDATA[
<p>As far as I can tell they aren't</p>
]]></description><pubDate>Mon, 08 Dec 2025 19:20:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=46196441</link><dc:creator>Mockapapella</dc:creator><comments>https://news.ycombinator.com/item?id=46196441</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46196441</guid></item><item><title><![CDATA[New comment by Mockapapella in "The "confident idiot" problem: Why AI needs hard rules, not vibe checks"]]></title><description><![CDATA[
<p>I wrote about something like this a couple months ago: <a href="https://thelisowe.substack.com/p/relentless-vibe-coding-part-1" rel="nofollow">https://thelisowe.substack.com/p/relentless-vibe-coding-part...</a>. I even started building a little library to prove out the concept: <a href="https://github.com/Mockapapella/containment-chamber" rel="nofollow">https://github.com/Mockapapella/containment-chamber</a><p>Spoiler: there won't be a part 2, or if there is it will take a different approach. I wrote a followup that summarizes my experiences trying this out in the real world on larger codebases: <a href="https://thelisowe.substack.com/p/reflections-on-relentless-vibe-coding" rel="nofollow">https://thelisowe.substack.com/p/reflections-on-relentless-v...</a><p>tl;dr I use a version of it in my codebases now, but the combination of LLM reward hacking and the long tail of verifiers in a language makes this problem more complicated than it seems on the surface. Some of those verifiers don't even exist: accurately detecting dead code in Python, for example (vulture et al. can't reliably do it), or checking valid signatures for property-based tests. It's not intractable, but you'd be writing many different language-specific libraries. And even then, with all of those verifiers in place, there's no guarantee it will produce a consistent quality of code across different sized repos.</p>
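<p>To make the dead-code point concrete, here is a minimal sketch (my own illustration, not from containment-chamber) of the kind of code that trips up static detectors like vulture: the call target only exists as a runtime string, so no static reference to the method appears anywhere.

```python
# `handle_ping` has no static call site, so a tool that flags
# "unreferenced" functions would wrongly report it as dead code --
# yet it is reachable at runtime through dynamic dispatch.

class Handlers:
    def handle_ping(self):
        return "pong"

def dispatch(obj, message):
    # Dynamic lookup: the method name is assembled from a string at
    # runtime, which is invisible to purely static reference analysis.
    return getattr(obj, f"handle_{message}")()
```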
]]></description><pubDate>Mon, 08 Dec 2025 16:24:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=46194203</link><dc:creator>Mockapapella</dc:creator><comments>https://news.ycombinator.com/item?id=46194203</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46194203</guid></item><item><title><![CDATA[New comment by Mockapapella in "Ironclad – formally verified, real-time capable, Unix-like OS kernel"]]></title><description><![CDATA[
<p>OK, can someone smarter than me educate me?<p>A couple weeks ago I was curious what the strictest programming language was. ChatGPT listed a couple, and it kicked off a short discussion where I began asking it about the capabilities of stricter programming languages at low levels. Funnily enough, at the end it mentioned that SPARK/Ada was the strictest you could get at the lowest levels, same as Ironclad.<p>At one point while asking it about drivers, it said "ACL2’s logic is [...] side‑effect‑free definitions with termination proofs when admitted to the logic. That is misaligned with effectful, interrupt‑driven kernel code."<p>I'm not an OS or kernel dev; most of my work has been in web dev, ML, and a little bit of embedded. How accurate is the information that was presented to me? Here is the link to the discussion: <a href="https://chatgpt.com/share/691012a7-a06c-800f-9cc9-54a7c2c8b640" rel="nofollow">https://chatgpt.com/share/691012a7-a06c-800f-9cc9-54a7c2c8b6...</a><p>I don't know SPARK or Ada, but it just bothers me to think that we can't...I guess...prove everything about our software before we run it (yes yes, I'm familiar with halting problem shenanigans, but other than that).</p>
]]></description><pubDate>Sun, 09 Nov 2025 04:15:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=45862853</link><dc:creator>Mockapapella</dc:creator><comments>https://news.ycombinator.com/item?id=45862853</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45862853</guid></item><item><title><![CDATA[New comment by Mockapapella in "Getting AI to work in complex codebases"]]></title><description><![CDATA[
<p>This sounds very similar to my workflow. Do you have pre-commits or CI beyond testing? I’ve started thinking about my codebase as an RL environment with the pre-commits as hyperparameters. It’s fascinating seeing what coding patterns emerge as a result.</p>
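<p>As a rough sketch of the "pre-commits as hyperparameters" framing: each check below is a knob you can tighten or loosen to shape which agent behaviors survive. The tool list is illustrative, and `runner` is injectable so the gate can be exercised without the real tools installed.

```python
import subprocess

# Each entry is a "hyperparameter" of the environment: adding, removing,
# or tightening a check changes which coding patterns get through.
# The specific tools here are examples, not a prescription.
CHECKS = [
    ["ruff", "check", "."],     # lint / style constraints
    ["mypy", "--strict", "."],  # type-level constraints
    ["pytest", "-q"],           # behavioral constraints
]

def run_gate(checks=CHECKS, runner=None):
    """Run checks in order; return (passed, first_failing_tool).

    Behaves like a sparse reward: the agent only "scores" when every
    verifier passes. `runner` maps a command to an exit code and is
    injectable for testing.
    """
    if runner is None:
        runner = lambda cmd: subprocess.run(cmd).returncode
    for cmd in checks:
        if runner(cmd) != 0:
            return False, cmd[0]
    return True, None
```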
]]></description><pubDate>Tue, 23 Sep 2025 18:40:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=45351102</link><dc:creator>Mockapapella</dc:creator><comments>https://news.ycombinator.com/item?id=45351102</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45351102</guid></item><item><title><![CDATA[New comment by Mockapapella in "Typed languages are better suited for vibecoding"]]></title><description><![CDATA[
<p>> When it’s able to create code that compiles, the code is invariably inefficient and ugly.<p>Why not have static analysis tools on the other side of those generations that constrain how the LLM can write the code?</p>
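<p>One minimal version of that idea, assuming a Python target: parse each generation with `ast` and reject candidates that violate a structural constraint, re-sampling until one passes. The 40-line function cap is an arbitrary stand-in for whatever real style rules you'd enforce.

```python
import ast

def passes_constraints(source, max_func_lines=40):
    """Reject code that fails to parse or has an overlong function.

    The line cap is a crude, illustrative "ugliness" bound; a real gate
    would run full linters and type checkers instead.
    """
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            if node.end_lineno - node.lineno + 1 > max_func_lines:
                return False
    return True

def generate_until_valid(generate, check=passes_constraints, max_tries=3):
    """Re-sample from the LLM (`generate` is any callable returning source
    text) until a candidate satisfies the constraints, or give up."""
    for _ in range(max_tries):
        candidate = generate()
        if check(candidate):
            return candidate
    return None
```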
]]></description><pubDate>Mon, 04 Aug 2025 04:11:23 +0000</pubDate><link>https://news.ycombinator.com/item?id=44782047</link><dc:creator>Mockapapella</dc:creator><comments>https://news.ycombinator.com/item?id=44782047</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44782047</guid></item><item><title><![CDATA[New comment by Mockapapella in "Ask HN: Freelancer? Seeking freelancer? (August 2025)"]]></title><description><![CDATA[
<p>SEEKING WORK | Remote | AI Infrastructure & Performance<p>Location: Wisconsin<p>---<p>Last September I built an AI inference tool that hit #3 on HN (<a href="https://news.ycombinator.com/item?id=41620530">https://news.ycombinator.com/item?id=41620530</a>). It processed 17.3M messages in 24 hours and cost only $17 to run.<p>I specialize in:<p>- LLM inference optimization (FastAPI, proper batching, memory management)<p>- CI/CD pipelines for ML deployments<p>- Making AI systems cost-effective at scale<p>Recent work: FrankenClaude (reasoning injection experiments, <a href="https://thelisowe.substack.com/p/frankenclaude-injecting-deepseek" rel="nofollow">https://thelisowe.substack.com/p/frankenclaude-injecting-dee...</a>), a self-driving Rocket League agent (<a href="https://thelisowe.substack.com/p/building-an-ai-that-plays-rocket" rel="nofollow">https://thelisowe.substack.com/p/building-an-ai-that-plays-r...</a>), and diffdev (an AI-powered code modification tool, <a href="https://pypi.org/project/diffdev/" rel="nofollow">https://pypi.org/project/diffdev/</a>).<p>Previously at Sprout Social, where I built their ML inference platform: reduced deployment time from 6 months to 6 hours and cut AWS costs by $500K/yr.<p>Looking for interesting problems in AI infrastructure, performance optimization, or building products from scratch.<p>Tech: PyTorch, FastAPI, K8s, Docker, AWS, ONNX<p>---<p>Resume: <a href="https://drive.google.com/file/d/1qO8XdisNTFq_wmrQGDKnu6eWDi2g33Me/view" rel="nofollow">https://drive.google.com/file/d/1qO8XdisNTFq_wmrQGDKnu6eWDi2...</a>
GitHub: github.com/Mockapapella
Blog: thelisowe.substack.com<p>Contact: My email is in my bio or on my resume</p>
]]></description><pubDate>Fri, 01 Aug 2025 15:25:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=44758227</link><dc:creator>Mockapapella</dc:creator><comments>https://news.ycombinator.com/item?id=44758227</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44758227</guid></item><item><title><![CDATA[New comment by Mockapapella in "Ask HN: Who wants to be hired? (August 2025)"]]></title><description><![CDATA[
<p><p><pre><code>    Location: Wisconsin
    Remote: Yes
    Willing to relocate: Yes
    Technologies: Python, PyTorch, Kubernetes, Docker, AWS, FastAPI, ONNX, MLOps
    Resume: https://drive.google.com/file/d/1qO8XdisNTFq_wmrQGDKnu6eWDi2g33Me/view
    GitHub: https://github.com/Mockapapella
    Email: In profile or on resume
</code></pre>
AI/ML Engineer specializing in high-performance deployments. Built distributed systems handling 30K QPS, developed a neural network for Rocket League gameplay, and created platforms that cut model deployment time from 6 months to 6 hours. Saved $500K/yr in infrastructure costs through optimization at a previous role. Former technical founder with experience in humanoid robotics and AI writing assistance. I write about my projects and musings on my blog: <a href="https://thelisowe.substack.com/" rel="nofollow">https://thelisowe.substack.com/</a><p>Seeking roles focused on ML infrastructure, model optimization, post-training, or full-stack AI engineering.</p>
]]></description><pubDate>Fri, 01 Aug 2025 15:24:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=44758210</link><dc:creator>Mockapapella</dc:creator><comments>https://news.ycombinator.com/item?id=44758210</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44758210</guid></item><item><title><![CDATA[New comment by Mockapapella in "Ask HN: Freelancer? Seeking freelancer? (July 2025)"]]></title><description><![CDATA[
<p>SEEKING WORK | Remote | AI Infrastructure & Performance<p>Location: Wisconsin<p>---<p>Last September I built an AI inference tool that hit #3 on HN (<a href="https://news.ycombinator.com/item?id=41620530">https://news.ycombinator.com/item?id=41620530</a>). It processed 17.3M messages in 24 hours and cost only $17 to run.<p>I specialize in:<p>- LLM inference optimization (FastAPI, proper batching, memory management)<p>- CI/CD pipelines for ML deployments<p>- Making AI systems cost-effective at scale<p>Recent work: FrankenClaude (reasoning injection experiments, <a href="https://thelisowe.substack.com/p/frankenclaude-injecting-deepseek" rel="nofollow">https://thelisowe.substack.com/p/frankenclaude-injecting-dee...</a>), a self-driving Rocket League agent (<a href="https://thelisowe.substack.com/p/building-an-ai-that-plays-rocket" rel="nofollow">https://thelisowe.substack.com/p/building-an-ai-that-plays-r...</a>), and diffdev (an AI-powered code modification tool, <a href="https://pypi.org/project/diffdev/" rel="nofollow">https://pypi.org/project/diffdev/</a>).<p>Previously at Sprout Social, where I built their ML inference platform: reduced deployment time from 6 months to 6 hours and cut AWS costs by $500K/yr.<p>Looking for interesting problems in AI infrastructure, performance optimization, or building products from scratch.<p>Tech: PyTorch, FastAPI, K8s, Docker, AWS, ONNX<p>---<p>Resume: <a href="https://drive.google.com/file/d/1qO8XdisNTFq_wmrQGDKnu6eWDi2g33Me/view" rel="nofollow">https://drive.google.com/file/d/1qO8XdisNTFq_wmrQGDKnu6eWDi2...</a>
GitHub: github.com/Mockapapella
Blog: thelisowe.substack.com<p>Contact: My email is in my bio or on my resume</p>
]]></description><pubDate>Sun, 06 Jul 2025 03:54:58 +0000</pubDate><link>https://news.ycombinator.com/item?id=44477693</link><dc:creator>Mockapapella</dc:creator><comments>https://news.ycombinator.com/item?id=44477693</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44477693</guid></item><item><title><![CDATA[New comment by Mockapapella in "Ask HN: Who wants to be hired? (July 2025)"]]></title><description><![CDATA[
<p><p><pre><code>    Location: Wisconsin
    Remote: Yes
    Willing to relocate: Yes
    Technologies: Python, PyTorch, Kubernetes, Docker, AWS, FastAPI, ONNX, MLOps
    Resume: https://drive.google.com/file/d/1qO8XdisNTFq_wmrQGDKnu6eWDi2g33Me/view
    GitHub: https://github.com/Mockapapella
    Email: In profile or on resume
</code></pre>
AI/ML Engineer specializing in high-performance deployments. Built distributed systems handling 30K QPS, developed a neural network for Rocket League gameplay, and created platforms that cut model deployment time from 6 months to 6 hours. Saved $500K/yr in infrastructure costs through optimization at a previous role. Former technical founder with experience in humanoid robotics and AI writing assistance. I write about my projects and musings on my blog: <a href="https://thelisowe.substack.com/" rel="nofollow">https://thelisowe.substack.com/</a><p>Seeking roles focused on ML infrastructure, model optimization, post-training, or full-stack AI engineering.</p>
]]></description><pubDate>Sun, 06 Jul 2025 02:27:16 +0000</pubDate><link>https://news.ycombinator.com/item?id=44477324</link><dc:creator>Mockapapella</dc:creator><comments>https://news.ycombinator.com/item?id=44477324</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44477324</guid></item><item><title><![CDATA[New comment by Mockapapella in "'I paid for the whole GPU, I am going to use the whole GPU'"]]></title><description><![CDATA[
<p>This is a good article on the "fog of war" for GPU inference. Modal has been doing a great job of aggregating and disseminating info on how to think about high quality AI inference. Learned some fun stuff -- thanks for posting it.<p>> the majority of organizations achieve less than 70% GPU Allocation Utilization when running at peak demand — to say nothing of aggregate utilization. This is true even of sophisticated players, like the former Banana serverless GPU platform, which operated at an aggregate utilization of around 20%.<p>Saw this sort of thing at my last job. Was very frustrating pointing this out to people only for them to respond with ¯\_(ツ)_/¯. I posted a much less tactful article (read: rant) than the one by Modal, but I think it still touches on a lot of the little things you need to consider when deploying AI models: <a href="https://thelisowe.substack.com/p/you-suck-at-deploying-ai-models" rel="nofollow">https://thelisowe.substack.com/p/you-suck-at-deploying-ai-mo...</a></p>
]]></description><pubDate>Wed, 07 May 2025 21:53:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=43920922</link><dc:creator>Mockapapella</dc:creator><comments>https://news.ycombinator.com/item?id=43920922</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43920922</guid></item><item><title><![CDATA[New comment by Mockapapella in "Launch HN: Exa (YC S21) – The web as a database"]]></title><description><![CDATA[
<p>Holy shit I think that might be it! I have been looking for that tweet for like a year now. Thanks!</p>
]]></description><pubDate>Wed, 07 May 2025 01:27:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=43911355</link><dc:creator>Mockapapella</dc:creator><comments>https://news.ycombinator.com/item?id=43911355</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43911355</guid></item><item><title><![CDATA[New comment by Mockapapella in "Launch HN: Exa (YC S21) – The web as a database"]]></title><description><![CDATA[
<p>Honestly I thought you guys had launched already (and didn't know you were a part of YC), been aware of you guys for years now it seems. Congrats on the launch! Hope the twitter issues aren't causing you guys too many problems.<p>Normally I'd send this as a DM or email, but I think it could be useful for others to learn about how to use your service/the limitations of it. A couple weeks ago I made a search for:<p><pre><code>    In early 2023, Andrej Karpathy said something like "large training runs are a good test of the overall health of the network." Something something resilience as well I think. I need you to find it.
</code></pre>
Unfortunately it wasn't able to find it, but it was either in a tweet or a really long presentation, neither of which are good targets for search. It was around the same time that this (<a href="https://www.youtube.com/watch?v=c3b-JASoPi0" rel="nofollow">https://www.youtube.com/watch?v=c3b-JASoPi0</a>) video was posted, like within a couple weeks before or after. How could I have improved my query? Does exa work over videos?</p>
]]></description><pubDate>Tue, 06 May 2025 18:54:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=43908450</link><dc:creator>Mockapapella</dc:creator><comments>https://news.ycombinator.com/item?id=43908450</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43908450</guid></item><item><title><![CDATA[New comment by Mockapapella in "Ask HN: Who wants to be hired? (May 2025)"]]></title><description><![CDATA[
<p>Thanks, looks interesting. Sent an email</p>
]]></description><pubDate>Sat, 03 May 2025 04:45:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=43876907</link><dc:creator>Mockapapella</dc:creator><comments>https://news.ycombinator.com/item?id=43876907</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43876907</guid></item><item><title><![CDATA[New comment by Mockapapella in "Suno v4.5"]]></title><description><![CDATA[
<p>Could you elaborate on the instructions in brackets part?</p>
]]></description><pubDate>Fri, 02 May 2025 19:01:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=43873527</link><dc:creator>Mockapapella</dc:creator><comments>https://news.ycombinator.com/item?id=43873527</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43873527</guid></item><item><title><![CDATA[New comment by Mockapapella in "Ask HN: Who wants to be hired? (May 2025)"]]></title><description><![CDATA[
<p><p><pre><code>    Location: Wisconsin
    Remote: Yes
    Willing to relocate: Yes
    Technologies: Python, PyTorch, Kubernetes, Docker, AWS, FastAPI, ONNX, MLOps
    Resume: https://drive.google.com/file/d/1qO8XdisNTFq_wmrQGDKnu6eWDi2g33Me/view
    GitHub: https://github.com/Mockapapella
    Email: In profile or on resume
</code></pre>
AI/ML Engineer specializing in high-performance deployments. Built distributed systems handling 30K QPS, developed a neural network for Rocket League gameplay, and created platforms that cut model deployment time from 6 months to 6 hours. Saved $500K/yr in infrastructure costs through optimization at a previous role. Former technical founder with experience in humanoid robotics and AI writing assistance. I write about my projects and musings on my blog: <a href="https://thelisowe.substack.com/" rel="nofollow">https://thelisowe.substack.com/</a><p>Seeking roles focused on ML infrastructure, model optimization, post-training, or full-stack AI engineering.</p>
]]></description><pubDate>Fri, 02 May 2025 02:30:38 +0000</pubDate><link>https://news.ycombinator.com/item?id=43865583</link><dc:creator>Mockapapella</dc:creator><comments>https://news.ycombinator.com/item?id=43865583</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43865583</guid></item><item><title><![CDATA[New comment by Mockapapella in "Ask HN: Who wants to be hired? (April 2025)"]]></title><description><![CDATA[
<p><p><pre><code>    Location: Wisconsin
    Remote: Yes
    Willing to relocate: Yes
    Technologies: Python, PyTorch, Kubernetes, Docker, AWS, FastAPI, ONNX, MLOps
    Resume: https://drive.google.com/file/d/1qO8XdisNTFq_wmrQGDKnu6eWDi2g33Me/view
    GitHub: https://github.com/Mockapapella
    Email: In profile or on resume
</code></pre>
AI/ML Engineer specializing in high-performance deployments. Built distributed systems handling 30K QPS, developed a neural network for Rocket League gameplay, and created platforms that cut model deployment time from 6 months to 6 hours. Saved $500K/yr in infrastructure costs through optimization at a previous role. Former technical founder with experience in humanoid robotics and AI writing assistance. I write about my projects and musings on my blog: <a href="https://thelisowe.substack.com/" rel="nofollow">https://thelisowe.substack.com/</a><p>Seeking roles focused on ML infrastructure, model optimization, post-training, or full-stack AI engineering.</p>
]]></description><pubDate>Tue, 01 Apr 2025 16:23:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=43548635</link><dc:creator>Mockapapella</dc:creator><comments>https://news.ycombinator.com/item?id=43548635</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43548635</guid></item><item><title><![CDATA[New comment by Mockapapella in "Ask HN: Who wants to be hired? (March 2025)"]]></title><description><![CDATA[
<p><p><pre><code>    Location: Wisconsin
    Remote: Yes
    Willing to relocate: Yes
    Technologies: Kubernetes, Docker, ONNX, PyTorch, FastAPI, Django, Python, JavaScript, Postgres, MySQL, AWS, DigitalOcean, Grafana, Redis/Valkey, RunPod. More on resume
    Resume: https://drive.google.com/file/d/1etR881RHFsK7-NMEOrZP-bhEb3g6SZPh/view?usp=sharing
    Email: In profile or on resume
</code></pre>
LinkedIn: <a href="https://www.linkedin.com/in/quintenlisowe/" rel="nofollow">https://www.linkedin.com/in/quintenlisowe/</a><p>My expertise is in deploying AI models and the infra that supports them, but recently I've been pushing myself to learn more about the actual creation and training process of AI models.<p>I've built distributed AI systems that can support 35,000 customers inferencing at 30,000 queries per second using XLM-RoBERTa and GPU nodes. A few months ago I made a sentiment classification tool that got to the front of HN (<a href="https://news.ycombinator.com/item?id=41620530">https://news.ycombinator.com/item?id=41620530</a>), and once I noticed it was starting to gain traction I stayed up until 3AM fixing the bugs people were commenting about, because I cared about them having the best experience possible. I've founded some companies before (humanoid robotics and AI) and have leadership experience from when I was the lead mentor for a robotics team.<p>I recently built a neural net architecture from the ground up to play Rocket League: <a href="https://github.com/Mockapapella/RLAI">https://github.com/Mockapapella/RLAI</a>. Code, training data, and trained model are all there. I've been writing up a walkthrough of the codebase that I'll be posting to my substack when it's done (<a href="https://thelisowe.substack.com/" rel="nofollow">https://thelisowe.substack.com/</a>).</p>
]]></description><pubDate>Mon, 03 Mar 2025 17:11:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=43244075</link><dc:creator>Mockapapella</dc:creator><comments>https://news.ycombinator.com/item?id=43244075</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43244075</guid></item></channel></rss>