<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: iknownthing</title><link>https://news.ycombinator.com/user?id=iknownthing</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Thu, 09 Apr 2026 12:10:40 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=iknownthing" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[Ask HN: Is Prompt Engineering Just Overfitting?]]></title><description><![CDATA[
<p>Whenever I see people doing prompt engineering, they start with some kind of evaluation dataset, then refine their prompt to perform well on that dataset.  But isn't this just like training on the test set, i.e., overfitting?</p>
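One common mitigation, sketched roughly below: hold out a test split that you never look at while iterating on the prompt, and score the chosen prompt on it exactly once at the end. Here `score_prompt` is a hypothetical stand-in for whatever eval harness you use; it is not from any particular library.

```python
import random

def tune_prompt_with_holdout(examples, candidate_prompts, score_prompt, seed=0):
    """Split eval data so prompt iteration can't overfit the final metric.

    `score_prompt(prompt, examples) -> float` is a hypothetical stand-in
    for your evaluation harness.
    """
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    split = int(len(shuffled) * 0.7)
    dev, test = shuffled[:split], shuffled[split:]

    # Iterate over / select prompts using the dev split only.
    best = max(candidate_prompts, key=lambda p: score_prompt(p, dev))

    # Report the held-out score once, for the chosen prompt only.
    return best, score_prompt(best, test)
```

The held-out score can still be optimistic if you peek at it repeatedly, so the discipline of scoring once matters as much as the split itself.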
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44494709">https://news.ycombinator.com/item?id=44494709</a></p>
<p>Points: 3</p>
<p># Comments: 1</p>
]]></description><pubDate>Mon, 07 Jul 2025 21:09:54 +0000</pubDate><link>https://news.ycombinator.com/item?id=44494709</link><dc:creator>iknownthing</dc:creator><comments>https://news.ycombinator.com/item?id=44494709</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44494709</guid></item><item><title><![CDATA[Ask HN: Can LLMs do batch classification?]]></title><description><![CDATA[
<p>I wrote a prompt that did batch classification: it contained instructions on how to classify text plus 10 inputs to classify, and it was to return a JSON string with the classifications.  It kind of worked, but then I realized the classification of an individual input was significantly affected by which other 9 inputs shared the prompt with it.  In other words, the classifications were not at all independent.  With traditional ML you can do batch classification trivially, with each input in the batch predicted independently.  So is this just a limitation of LLMs?  Do you have to classify inputs one LLM call at a time?</p>
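One way to guarantee independence is to enforce it at the call level: give each input its own request so it can never see its batch-mates, and recover throughput with parallelism. A minimal sketch, where `llm_classify_one` is a hypothetical wrapper around a single-example prompt (not a real API):

```python
from concurrent.futures import ThreadPoolExecutor

def classify_batch_independently(inputs, llm_classify_one, max_workers=8):
    """Classify each input in its own LLM call so the result can't depend
    on which other inputs happen to share the batch.

    `llm_classify_one(text) -> label` is a hypothetical single-input
    classifier call; thread-level parallelism recovers some of the
    throughput lost by giving up true batching.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map preserves input order in its results.
        return list(pool.map(llm_classify_one, inputs))
```

This trades tokens and latency for independence, which is often the right trade when per-item consistency matters.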
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44383356">https://news.ycombinator.com/item?id=44383356</a></p>
<p>Points: 2</p>
<p># Comments: 2</p>
]]></description><pubDate>Thu, 26 Jun 2025 01:16:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=44383356</link><dc:creator>iknownthing</dc:creator><comments>https://news.ycombinator.com/item?id=44383356</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44383356</guid></item><item><title><![CDATA[Show HN: HinterviewGPT – a LeetCode alternative with a built-in AI tutor]]></title><description><![CDATA[
<p>Hello HN!<p>I've been an active LeetCode user for a long time, and while I think it's a great platform to test your skills, I've never really considered it a good resource for learning.  Whenever I'd get stuck on a question I'd just look up the answer and then try to understand it by reverse-engineering, which never seemed very efficient to me.  That's why I built HinterviewGPT.<p>HinterviewGPT started as a side project.  My initial idea was to build a UI that looked a lot like LeetCode, with the question on the left and the editor on the top right, but with a chat UI on the bottom right where you could talk to an LLM-based "tutor".  The tutor is aware of the question and your current solution in the editor, and it is instructed to answer your questions when you're stuck by giving hints, and specifically NOT to give you the answer (otherwise it wouldn't be a very good tutor).<p>I built the side project and it worked surprisingly well.  The tutor's hints guided me toward the correct answer conversationally rather than forcing me to look it up when I got stuck.  This made my learning more efficient and cut down on my study time per question.  It also promotes active learning (learning through hints) rather than passive learning (reverse-engineering the solution), which I think made the learning stick more.  As a result, I decided to turn it into an actual product.<p>For the product I added a question-generation feature.  Via another chat UI you can describe the kind of question you want to practice (based on industry, role, topic, job req, etc.) and HinterviewGPT will generate a question for you to practice.  So you don't have to study only LeetCode questions with HinterviewGPT; you can study any kind of interview question, both code-based and text-based (e.g. behavioral, system design, etc.).
Or if you have a particular question in mind, you can enter it manually.  Once you're done practicing, you can submit your solution for a final evaluation, and your solutions and evaluations are saved for future reference (again, like LeetCode).<p>Here's a demo of the full workflow, i.e. question generation, practicing with the tutor, and submitting your solution:  <a href="https://www.youtube.com/watch?v=Yj6qvEQYWi0" rel="nofollow">https://www.youtube.com/watch?v=Yj6qvEQYWi0</a><p>All the LLM-related features use OpenAI models under the hood, specifically gpt-4o-mini, gpt-4o, o1-mini, and o3-mini.  You can choose when to use which model.  gpt-4o-mini does surprisingly well in a lot of cases, but for more complex questions the more advanced models are obviously preferable.<p>HinterviewGPT offers a free trial (though only gpt-4o-mini is available on it, unfortunately).<p>Note: don't take this as an endorsement of LeetCode-style interviews; I actually don't like them at all, nor do I think they're effective.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43730150">https://news.ycombinator.com/item?id=43730150</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Fri, 18 Apr 2025 17:32:26 +0000</pubDate><link>https://hinterviewgpt.com</link><dc:creator>iknownthing</dc:creator><comments>https://news.ycombinator.com/item?id=43730150</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43730150</guid></item><item><title><![CDATA[New comment by iknownthing in "What, exactly, is an 'AI Agent'? Here's a litmus test"]]></title><description><![CDATA[
<p>Seems like more of a special case than a different thing altogether</p>
]]></description><pubDate>Wed, 02 Apr 2025 22:13:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=43562303</link><dc:creator>iknownthing</dc:creator><comments>https://news.ycombinator.com/item?id=43562303</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43562303</guid></item><item><title><![CDATA[New comment by iknownthing in "Launch HN: Promptless (YC W25) – Automatic updates for customer-facing docs"]]></title><description><![CDATA[
<p>I'm curious: does it get triggered when a PR is opened or when it's merged?  If it's when the PR is opened, the PR could still be updated afterward, which I assume would change the doc updates too.  Also, what if two PRs are opened at the same time?  What if a PR is opened but never merged?</p>
]]></description><pubDate>Tue, 18 Feb 2025 18:51:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=43093572</link><dc:creator>iknownthing</dc:creator><comments>https://news.ycombinator.com/item?id=43093572</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43093572</guid></item><item><title><![CDATA[New comment by iknownthing in "On David Lynch's Revenge of the Jedi (2018)"]]></title><description><![CDATA[
<p>Cronenberg also turned down an offer to direct Top Gun believe it or not</p>
]]></description><pubDate>Tue, 18 Feb 2025 16:06:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=43091132</link><dc:creator>iknownthing</dc:creator><comments>https://news.ycombinator.com/item?id=43091132</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43091132</guid></item><item><title><![CDATA[New comment by iknownthing in "Why Blog If Nobody Reads It?"]]></title><description><![CDATA[
<p>If you put it on your resume the right people will probably read it</p>
]]></description><pubDate>Mon, 10 Feb 2025 01:08:29 +0000</pubDate><link>https://news.ycombinator.com/item?id=42995878</link><dc:creator>iknownthing</dc:creator><comments>https://news.ycombinator.com/item?id=42995878</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42995878</guid></item><item><title><![CDATA[New comment by iknownthing in "David Lynch has died"]]></title><description><![CDATA[
<p>Lynch had the cojones to show a guild navigator</p>
]]></description><pubDate>Thu, 16 Jan 2025 20:33:52 +0000</pubDate><link>https://news.ycombinator.com/item?id=42730518</link><dc:creator>iknownthing</dc:creator><comments>https://news.ycombinator.com/item?id=42730518</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42730518</guid></item><item><title><![CDATA[New comment by iknownthing in "1 in 5 online job postings are either fake or never filled, study finds"]]></title><description><![CDATA[
<p>I worked at a company that created fake job postings for H1B reasons.</p>
]]></description><pubDate>Tue, 14 Jan 2025 16:07:41 +0000</pubDate><link>https://news.ycombinator.com/item?id=42698989</link><dc:creator>iknownthing</dc:creator><comments>https://news.ycombinator.com/item?id=42698989</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42698989</guid></item><item><title><![CDATA[New comment by iknownthing in "Employees are bypassing HR, sharing on LinkedIn"]]></title><description><![CDATA[
<p>Probably better to do it on anonymous social media</p>
]]></description><pubDate>Sun, 12 Jan 2025 18:48:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=42675764</link><dc:creator>iknownthing</dc:creator><comments>https://news.ycombinator.com/item?id=42675764</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42675764</guid></item><item><title><![CDATA[New comment by iknownthing in "I am rich and have no idea what to do"]]></title><description><![CDATA[
<p>Is this not the same problem everyone faces when they retire?</p>
]]></description><pubDate>Fri, 03 Jan 2025 00:22:06 +0000</pubDate><link>https://news.ycombinator.com/item?id=42580634</link><dc:creator>iknownthing</dc:creator><comments>https://news.ycombinator.com/item?id=42580634</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42580634</guid></item><item><title><![CDATA[New comment by iknownthing in "Eyes Wide Shut: Hidden in plain sight"]]></title><description><![CDATA[
<p>Well, they are supposed to be confessions, yet he does not confess the worst part of what he witnessed.  IIRC he literally says "I'll tell you everything" and then proceeds to leave things out.  Somewhat contradictory.</p>
]]></description><pubDate>Mon, 16 Dec 2024 18:29:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=42433785</link><dc:creator>iknownthing</dc:creator><comments>https://news.ycombinator.com/item?id=42433785</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42433785</guid></item><item><title><![CDATA[New comment by iknownthing in "Eyes Wide Shut: Hidden in Plain Sight"]]></title><description><![CDATA[
<p>I get that, but it doesn't even make sense in terms of the dream logic.  Bill basically has two "confession" scenes, one with Ziegler and one with his wife, and he seemingly doesn't mention the stuff at the costume shop in either.  Yet he was willing to confess to Ziegler that he knew about the dead woman.  This seems contradictory.  I think there are two possibilities: 1) it was unfinished, or 2) it was intentionally left out.  Both of which are very interesting.</p>
]]></description><pubDate>Mon, 16 Dec 2024 16:33:19 +0000</pubDate><link>https://news.ycombinator.com/item?id=42432584</link><dc:creator>iknownthing</dc:creator><comments>https://news.ycombinator.com/item?id=42432584</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42432584</guid></item><item><title><![CDATA[New comment by iknownthing in "Eyes Wide Shut: Hidden in Plain Sight"]]></title><description><![CDATA[
<p>What I never understood about this movie is how it never connects the pieces.  At the beginning of the movie, when Bill is with the drunk women, they say "where the rainbow ends...", which clearly connects to the "Rainbow" costume shop later, where the sinister stuff with the owner's child happens.  Then it's revealed that the women at the beginning of the movie were the same women who were at the secret society party, which clearly connects the secret society to the sinister stuff at the costume shop.  So the connections are clear, and Bill is privy to all of it, yet it is never explicitly stated at the end of the movie.  Perhaps Kubrick didn't actually finish it.</p>
]]></description><pubDate>Sun, 15 Dec 2024 23:58:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=42426846</link><dc:creator>iknownthing</dc:creator><comments>https://news.ycombinator.com/item?id=42426846</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42426846</guid></item><item><title><![CDATA[Show HN: Diagify – CLI to convert natural language into technical diagrams]]></title><description><![CDATA[
<p>Hello HN,<p>I've created Diagify, a CLI for converting natural language to technical diagrams.  There are a bunch of diagram-as-code tools out there, and I had the idea that it should be possible to use an LLM to generate the diagram code from a natural language description, execute it, and return the generated image.  This is essentially what Diagify does.<p>More specifically, Diagify uses the OpenAI API to generate code for the Mingrammer library, which is made specifically for technical diagrams.<p>The workflow: first OpenAI generates the Mingrammer Python code from the natural language description, then Diagify does some basic error checking.  I found that the generated Mingrammer code often had incorrect imports, so these are checked.  If incorrect imports are found, OpenAI is called again for a correction, with the incorrect imports identified and some suggested replacements.  Then the Mingrammer code is executed.  If it executes successfully, the corresponding image is generated.  If there is a runtime error, OpenAI is called again with the identified error in an effort to correct it.  It's become somewhat reliable at this point.<p>My reasons for creating Diagify are twofold: 1) to see if it would even work, and 2) creating technical diagrams by hand, or even with diagram-as-code tools, can be tedious, so a simple natural language interface could be helpful.</p>
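The generate-check-execute-retry loop described above can be sketched roughly as follows. This is not Diagify's actual implementation: `generate_code` and `fix_code` are hypothetical stand-ins for the OpenAI calls, and the real tool also validates Mingrammer imports before executing.

```python
def generate_diagram_code(description, generate_code, fix_code, max_retries=3):
    """Generate-execute-retry loop for LLM-written diagram code.

    `generate_code(description) -> str` and `fix_code(code, error) -> str`
    are hypothetical stand-ins for LLM calls.  On any compile or runtime
    error, the error message is fed back to the model for a correction.
    """
    code = generate_code(description)
    for _ in range(max_retries):
        try:
            # Executing the code is what renders the diagram image.
            exec(compile(code, "<diagram>", "exec"), {})
            return code  # executed cleanly
        except Exception as err:
            code = fix_code(code, str(err))
    raise RuntimeError("could not produce runnable diagram code")
```

Feeding the exact error string back to the model tends to fix shallow failures (bad imports, typos) in one round, which matches the "somewhat reliable" behavior described.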
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42420049">https://news.ycombinator.com/item?id=42420049</a></p>
<p>Points: 5</p>
<p># Comments: 0</p>
]]></description><pubDate>Sat, 14 Dec 2024 22:50:46 +0000</pubDate><link>https://github.com/alexminnaar/Diagify</link><dc:creator>iknownthing</dc:creator><comments>https://news.ycombinator.com/item?id=42420049</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42420049</guid></item><item><title><![CDATA[New comment by iknownthing in "Show HN: Quantus – LeetCode for Financial Modeling"]]></title><description><![CDATA[
<p>People generally use LeetCode to prepare for interviews, and learning is kind of a side effect, so I'm curious: are these questions similar to those you would find in interviews (i.e. like LeetCode), or is it more for general learning purposes?</p>
]]></description><pubDate>Thu, 12 Dec 2024 15:41:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=42400078</link><dc:creator>iknownthing</dc:creator><comments>https://news.ycombinator.com/item?id=42400078</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42400078</guid></item><item><title><![CDATA[New comment by iknownthing in "Stanley Kubrick's the Shining Maps of the Overlook"]]></title><description><![CDATA[
<p>> with layers upon layers upon layers of meaning<p>What were the layers of meaning? To me, The Shining is Kubrick's most straightforward movie.</p>
]]></description><pubDate>Wed, 11 Dec 2024 14:10:00 +0000</pubDate><link>https://news.ycombinator.com/item?id=42387817</link><dc:creator>iknownthing</dc:creator><comments>https://news.ycombinator.com/item?id=42387817</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42387817</guid></item><item><title><![CDATA[New comment by iknownthing in "Task-Specific LLM Evals That Do and Don't Work"]]></title><description><![CDATA[
<p>interesting</p>
]]></description><pubDate>Mon, 09 Dec 2024 16:34:25 +0000</pubDate><link>https://news.ycombinator.com/item?id=42367704</link><dc:creator>iknownthing</dc:creator><comments>https://news.ycombinator.com/item?id=42367704</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42367704</guid></item><item><title><![CDATA[New comment by iknownthing in "Show HN: I combined spaced repetition with emails so you can remember anything"]]></title><description><![CDATA[
<p>Wish this existed when I was in school</p>
]]></description><pubDate>Wed, 04 Dec 2024 18:28:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=42320405</link><dc:creator>iknownthing</dc:creator><comments>https://news.ycombinator.com/item?id=42320405</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42320405</guid></item><item><title><![CDATA[Ask HN: Are there any best practices for getting LLMs to return *only* code?]]></title><description><![CDATA[
<p>When you ask some of the popular LLMs to generate code, by default they seem to return something like "Sure! Here is the code below: python```<some_python_code>```", but what if you only want the <some_python_code> part?  I managed to make it work by including some example outputs in the prompt, but I was wondering if there is a best practice for this?</p>
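Whatever prompting you use, a robust fallback is to strip the markdown fence in post-processing, since models often reintroduce the preamble anyway. A small sketch:

```python
import re

def extract_code(response: str) -> str:
    """Return the contents of the first fenced code block in `response`,
    or the whole response (stripped) if no fence is present, i.e. the
    model already returned bare code."""
    match = re.search(r"```(?:\w+)?\n(.*?)```", response, re.DOTALL)
    return match.group(1).strip() if match else response.strip()
```

Combining a prompt instruction ("respond with only a fenced code block") with this extraction step covers both the compliant and non-compliant cases.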
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=42196626">https://news.ycombinator.com/item?id=42196626</a></p>
<p>Points: 1</p>
<p># Comments: 1</p>
]]></description><pubDate>Wed, 20 Nov 2024 18:10:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=42196626</link><dc:creator>iknownthing</dc:creator><comments>https://news.ycombinator.com/item?id=42196626</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=42196626</guid></item></channel></rss>