<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: ticktockten</title><link>https://news.ycombinator.com/user?id=ticktockten</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Sat, 25 Apr 2026 10:51:02 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=ticktockten" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[Show HN: Fast-Axolotl – Rust extensions that make Axolotl fine-tuning 77x faster]]></title><description><![CDATA[
<p>I built Rust extensions for Axolotl that dramatically speed up data loading and preprocessing for LLM fine-tuning.<p>The problem: Python data pipelines become the bottleneck when fine-tuning large models. Your GPUs sit idle waiting for data.<p>The solution: Drop-in Rust acceleration. One import line, zero config changes.<p>Results on 50k rows:
<p>- Streaming data loading: 0.009s vs 0.724s (77x faster)<p>- Parallel SHA256 hashing: 0.027s vs 0.052s (1.9x faster)<p>Works with Parquet, Arrow, JSON, JSONL, CSV. Supports compression. Cross-platform.<p>Install:<p><pre><code>pip install fast-axolotl
</code></pre>Usage:<p><pre><code>import fast_axolotl
import axolotl  # now accelerated
</code></pre>Built with PyO3 and maturin. MIT licensed. Happy to answer questions about the Rust/Python interop or benchmark methodology.</p>
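<p>The "one import line" claim above relies on a standard Python pattern: an accelerator package patches functions on the target module at import time. Here is a minimal, self-contained sketch of that pattern using a toy module (fast_axolotl's real internals are not shown here; `toy_axolotl` and `load_rows` are made-up names for illustration):</p>

```python
import sys
import types

# Toy stand-in for the library being accelerated; the real axolotl
# package is not needed for this sketch.
slow_mod = types.ModuleType("toy_axolotl")
slow_mod.load_rows = lambda n: [i * 2 for i in range(n)]  # "slow" pure-Python path
sys.modules["toy_axolotl"] = slow_mod

def accelerate() -> None:
    """What an accelerator can do at import time: swap a hot function
    on the target module for a faster implementation, in place."""
    target = sys.modules["toy_axolotl"]
    target.load_rows = lambda n: list(range(0, 2 * n, 2))  # "fast" path

accelerate()  # analogous in spirit to `import fast_axolotl`

import toy_axolotl  # resolves to the patched module
print(toy_axolotl.load_rows(5))  # → [0, 2, 4, 6, 8]
```

<p>Callers keep their existing `import axolotl` line; only the implementation behind it changes.</p>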
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=47338327">https://news.ycombinator.com/item?id=47338327</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 11 Mar 2026 17:13:41 +0000</pubDate><link>https://github.com/neul-labs/fast-axolotl</link><dc:creator>ticktockten</dc:creator><comments>https://news.ycombinator.com/item?id=47338327</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47338327</guid></item><item><title><![CDATA[Show HN: SolScript – Write Solidity, compile to Solana programs]]></title><description><![CDATA[
<p>Hey HN,<p>
I built SolScript, a compiler that lets you write smart contracts in Solidity syntax and deploy them to Solana.<p>The problem: Solana has mass dev interest (17k+ active developers in 2025), but the Rust learning curve remains a 3-6 month barrier. Anchor helps, but you still need to grok ownership, lifetimes, and borrowing. Meanwhile, there are 30k+ Solidity developers who already know how to write smart contracts.<p>SolScript bridges that gap. You write this:<p><pre><code>  contract Token {
      mapping(address => uint256) public balanceOf;

      function transfer(address to, uint256 amount) public {
          balanceOf[msg.sender] -= amount;
          balanceOf[to] += amount;
          emit Transfer(msg.sender, to, amount);
      }
  }
</code></pre>
And it compiles to a native Solana program with automatic PDA derivation, account validation, and full Anchor compatibility.
How it works:<p>- Parser turns Solidity-like source into an AST<p>- Type checker validates and annotates<p>- Two codegen backends: (1) Anchor/Rust output that goes through cargo build-sbf, or (2) direct LLVM-to-BPF compilation<p>- Mappings become PDAs automatically, account structs are derived from your type system<p>What's supported:<p>- State variables, structs, arrays, nested mappings<p>- Events and custom errors<p>- Modifiers (inlined)<p>- Cross-program invocation (CPI)<p>- SPL Token operations<p>- msg.sender, block.timestamp equivalents<p>Current limitations:<p>- No msg.value for incoming SOL (use wrapped SOL or explicit transfers)<p>- No Token 2022 support yet (planned for v0.4)<p>- Modifiers are inlined, so keep them small<p>The output is standard Anchor/Rust code. You can eject anytime and continue in pure Rust. It's a launchpad, not a lock-in.<p>Written in Rust. Ships with a VS Code extension (LSP, syntax highlighting, go-to-definition, autocomplete).<p>Install: cargo install solscript-cli<p>I'd love feedback on the language design, the compilation approach, or use cases I haven't thought of. Happy to answer questions about the internals.</p>
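<p>For readers curious what "mappings become PDAs automatically" means in practice: the general idea is that each mapping entry gets a deterministic per-key account address derived by hashing stable seeds. A rough conceptual sketch in Python (this is NOT Solana's actual PDA algorithm, which additionally searches for a bump seed that puts the address off the ed25519 curve, and `derive_mapping_address` is an illustrative name, not SolScript's API):</p>

```python
import hashlib

def derive_mapping_address(mapping_name: str, key: bytes, program_id: bytes) -> bytes:
    """Hash stable seeds (mapping name, key, owning program) into a
    deterministic 32-byte account address, mimicking the spirit of
    PDA derivation. Real Solana PDAs also include a bump-seed search."""
    h = hashlib.sha256()
    h.update(mapping_name.encode())
    h.update(key)
    h.update(program_id)
    return h.digest()

program_id = b"\x01" * 32  # placeholder 32-byte program id
alice = b"\xaa" * 32       # placeholder 32-byte address key

addr = derive_mapping_address("balanceOf", alice, program_id)
# Same seeds always yield the same address, so the compiler (and any
# client) can locate the balanceOf[alice] account without storing a map.
assert addr == derive_mapping_address("balanceOf", alice, program_id)
print(len(addr))  # → 32
```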
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46693924">https://news.ycombinator.com/item?id=46693924</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 20 Jan 2026 16:37:19 +0000</pubDate><link>https://github.com/cryptuon/solscript</link><dc:creator>ticktockten</dc:creator><comments>https://news.ycombinator.com/item?id=46693924</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46693924</guid></item><item><title><![CDATA[Show HN: SolScript – Write Solidity, compile to Solana programs]]></title><description><![CDATA[
<p>Hey HN,<p>I built SolScript, a compiler that lets you write smart contracts in Solidity syntax and deploy them to Solana.<p>The problem: Solana has mass dev interest (17k+ active developers in 2025), but the Rust learning curve remains a 3-6 month barrier. Anchor helps, but you still need to grok ownership, lifetimes, and borrowing. Meanwhile, there are 30k+ Solidity developers who already know how to write smart contracts.<p>SolScript bridges that gap. You write this:<p><pre><code>  contract Token {
      mapping(address => uint256) public balanceOf;

      function transfer(address to, uint256 amount) public {
          balanceOf[msg.sender] -= amount;
          balanceOf[to] += amount;
          emit Transfer(msg.sender, to, amount);
      }
  }
</code></pre>
And it compiles to a native Solana program with automatic PDA derivation, account validation, and full Anchor compatibility.<p>How it works:<p>- Parser turns Solidity-like source into an AST
- Type checker validates and annotates
- Two codegen backends: (1) Anchor/Rust output that goes through cargo build-sbf, or (2) direct LLVM-to-BPF compilation
- Mappings become PDAs automatically, account structs are derived from your type system<p>What's supported:<p>- State variables, structs, arrays, nested mappings
- Events and custom errors  
- Modifiers (inlined)
- Cross-program invocation (CPI)
- SPL Token operations
- msg.sender, block.timestamp equivalents<p>Current limitations:<p>- No msg.value for incoming SOL (use wrapped SOL or explicit transfers)
- No Token 2022 support yet (planned for v0.4)
- Modifiers are inlined, so keep them small<p>The output is standard Anchor/Rust code. You can eject anytime and continue in pure Rust. It's a launchpad, not a lock-in.<p>Written in Rust. Ships with a VS Code extension (LSP, syntax highlighting, go-to-definition, autocomplete).<p>Install: cargo install solscript-cli<p>Repo: <a href="https://github.com/cryptuon/solscript" rel="nofollow">https://github.com/cryptuon/solscript</a><p>I'd love feedback on the language design, the compilation approach, or use cases I haven't thought of. Happy to answer questions about the internals.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46692514">https://news.ycombinator.com/item?id=46692514</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Tue, 20 Jan 2026 15:04:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=46692514</link><dc:creator>ticktockten</dc:creator><comments>https://news.ycombinator.com/item?id=46692514</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46692514</guid></item><item><title><![CDATA[New comment by ticktockten in "[dead]"]]></title><description><![CDATA[
<p>Looks cool :).<p>What does BJH stand for though?</p>
]]></description><pubDate>Wed, 10 Dec 2025 11:50:21 +0000</pubDate><link>https://news.ycombinator.com/item?id=46216697</link><dc:creator>ticktockten</dc:creator><comments>https://news.ycombinator.com/item?id=46216697</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46216697</guid></item><item><title><![CDATA[Show HN: LangGraph profiling – 737x Faster Checkpoints via Rust (PyO3)]]></title><description><![CDATA[
<p>Building AI agents with LangGraph, I noticed graph invocations were slow even before hitting the LLM. Dug into the Pregel execution engine to find out why.<p>THE PROBLEM<p>Profiled my LangGraph agents. 50-100ms per invocation, most of it not the LLM. Found two culprits:<p>1. ThreadPoolExecutor created fresh every invoke() — 20ms overhead<p>2. Checkpointing uses deepcopy() — 52ms for 35KB state, 206ms for 250KB<p>THE FIX<p>Rewrote hot paths in Rust via PyO3:<p>Checkpoint serialization (serde vs deepcopy):<p><pre><code>35KB state:   0.29ms vs 52ms    = 178x faster
250KB state:  0.28ms vs 206ms   = 737x faster
</code></pre>E2E with checkpointing: 2-3x faster<p>Drop-in usage:<p><pre><code>export FAST_LANGGRAPH_AUTO_PATCH=1

# or explicit
from fast_langgraph import RustSQLiteCheckpointer
checkpointer = RustSQLiteCheckpointer("state.db")
</code></pre>KEY INSIGHT<p>PyO3 boundary costs ~1-2μs per call. Rust only wins when you:<p>- Avoid intermediate Python objects (checkpoint serialization)<p>- Batch operations (channel updates)<p>- Handle large data (state > 10KB)<p>For simple dict ops, Python's C-dict still wins.<p>Architecture: Python orchestration (compatibility) + Rust hot paths (performance).<p>Runs regular compatibility checks!<p>MIT licensed. Feedback welcome.</p>
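<p>The deepcopy-vs-serialization comparison is easy to reproduce in pure Python. Below is a hedged sketch of the measurement with a toy state and stdlib only; absolute numbers will differ from the Rust/serde results above, but it shows the methodology:</p>

```python
import copy
import json
import timeit

# Representative agent state: a nested dict a few tens of KB in size.
state = {
    "messages": [{"role": "user", "content": "x" * 200} for _ in range(100)],
    "step": 42,
    "scratch": {str(i): i for i in range(500)},
}

def checkpoint_deepcopy():
    # What naive checkpointing pays for: a full recursive Python copy.
    return copy.deepcopy(state)

def checkpoint_serialize():
    # Snapshot via a serialization round-trip (serde plays this role in Rust).
    return json.loads(json.dumps(state))

t_copy = timeit.timeit(checkpoint_deepcopy, number=200)
t_ser = timeit.timeit(checkpoint_serialize, number=200)
print(f"deepcopy: {t_copy:.3f}s  serialize: {t_ser:.3f}s")

# Both produce an equivalent, independent snapshot.
assert checkpoint_deepcopy() == checkpoint_serialize() == state
```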
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=46216644">https://news.ycombinator.com/item?id=46216644</a></p>
<p>Points: 2</p>
<p># Comments: 0</p>
]]></description><pubDate>Wed, 10 Dec 2025 11:42:31 +0000</pubDate><link>https://github.com/neul-labs/fast-langgraph</link><dc:creator>ticktockten</dc:creator><comments>https://news.ycombinator.com/item?id=46216644</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46216644</guid></item><item><title><![CDATA[New comment by ticktockten in "Show HN: Optimizing LiteLLM with Rust – When Expectations Meet Reality"]]></title><description><![CDATA[
<p>Well, several times faster, but not interesting enough to say "use this". For me it was personally an exploratory project to review LiteLLM and its internals.<p>The LLM docgen (in this case, Claude) has been over-enthusiastic due to my incessant prodding :D.</p>
]]></description><pubDate>Tue, 18 Nov 2025 19:38:27 +0000</pubDate><link>https://news.ycombinator.com/item?id=45970978</link><dc:creator>ticktockten</dc:creator><comments>https://news.ycombinator.com/item?id=45970978</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45970978</guid></item><item><title><![CDATA[New comment by ticktockten in "Show HN: Optimizing LiteLLM with Rust – When Expectations Meet Reality"]]></title><description><![CDATA[
<p>Well, I would counter that by saying most code has been autocompleted for a while. At this point in software development history, the size of commits is a moot point :).</p>
]]></description><pubDate>Tue, 18 Nov 2025 19:33:57 +0000</pubDate><link>https://news.ycombinator.com/item?id=45970899</link><dc:creator>ticktockten</dc:creator><comments>https://news.ycombinator.com/item?id=45970899</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45970899</guid></item><item><title><![CDATA[New comment by ticktockten in "Show HN: Optimizing LiteLLM with Rust – When Expectations Meet Reality"]]></title><description><![CDATA[
<p>The core is real; the rest of the narrative is nudging LLMs to behave :). If you strip out the noise and just run the benchmark, that's proof enough.<p>The interesting bit was that the binding overheads dominate, which makes this shim not much of a performance bump.</p>
]]></description><pubDate>Tue, 18 Nov 2025 19:31:43 +0000</pubDate><link>https://news.ycombinator.com/item?id=45970873</link><dc:creator>ticktockten</dc:creator><comments>https://news.ycombinator.com/item?id=45970873</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45970873</guid></item><item><title><![CDATA[Show HN: Optimizing LiteLLM with Rust – When Expectations Meet Reality]]></title><description><![CDATA[
<p>I've been working on Fast LiteLLM - a Rust acceleration layer for the popular LiteLLM library - and I had some interesting learnings that might resonate with other developers trying to squeeze performance out of existing systems.<p>My assumption was that LiteLLM, being a Python library, would have plenty of low-hanging fruit for optimization. I set out to create a Rust layer using PyO3 to accelerate the performance-critical parts: token counting, routing, rate limiting, and connection pooling.<p>The Approach<p>- Built Rust implementations for token counting using tiktoken-rs<p>- Added lock-free data structures with DashMap for concurrent operations<p>- Implemented async-friendly rate limiting<p>- Created monkeypatch shims to replace Python functions transparently<p>- Added comprehensive feature flags for safe, gradual rollouts<p>- Developed performance monitoring to track improvements in real-time<p>After building out all the Rust acceleration, I ran my comprehensive benchmark comparing baseline LiteLLM vs. the shimmed version:<p><pre><code>Function             Baseline Time   Shimmed Time   Speedup   Improvement   Status
token_counter        0.000035s       0.000036s      0.99x     -0.6%
count_tokens_batch   0.000001s       0.000001s      1.10x     +9.1%
router               0.001309s       0.001299s      1.01x     +0.7%
rate_limiter         0.000000s       0.000000s      1.85x     +45.9%
connection_pool      0.000000s       0.000000s      1.63x     +38.7%
</code></pre>Turns out LiteLLM is already quite well-optimized! The core token counting was essentially unchanged (0.6% slower, likely within measurement noise), and the most significant gains came from the more complex operations like rate limiting and connection pooling where Rust's concurrent primitives made a real difference.<p>Key Takeaways<p>1. Don't assume existing libraries are under-optimized - The maintainers likely know their domain well
<p>2. Focus on algorithmic improvements over reimplementation - Sometimes a better approach beats a faster language<p>3. Micro-benchmarks can be misleading - Real-world performance impact varies significantly<p>4. The biggest gains often come from the complex parts, not the simple operations<p>5. Even "modest" improvements can matter at scale - 45% improvements in rate limiting are meaningful for high-throughput applications<p>While the core token counting saw minimal improvement, the rate limiting and connection pooling gains still provide value for high-volume use cases. The infrastructure I built (feature flags, performance monitoring, safe fallbacks) creates a solid foundation for future optimizations.<p>The project continues as Fast LiteLLM on GitHub for anyone interested in the Rust-Python integration patterns, even if the performance gains were humbling.<p>Edit: To clarify - the slight regression for token_counter is likely within measurement noise, suggesting that LiteLLM's token counting is already well-optimized. The 45%+ gains in rate limiting and connection pooling still provide value for high-throughput applications.</p>
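<p>For anyone wanting to reproduce the table's methodology, here is a minimal benchmark harness along these lines; the functions below are toy stand-ins, not LiteLLM internals:</p>

```python
import timeit

def benchmark(name, baseline, shimmed, number=10000):
    """Time a baseline vs a shimmed implementation and report the
    speedup ratio, the way the table above presents results."""
    t_base = timeit.timeit(baseline, number=number)
    t_shim = timeit.timeit(shimmed, number=number)
    speedup = t_base / t_shim
    print(f"{name:<20} {t_base:.6f}s  {t_shim:.6f}s  {speedup:.2f}x  {100 * (1 - 1 / speedup):+.1f}%")
    return speedup

# Toy stand-ins for a hot path and its "accelerated" replacement.
data = list(range(1000))
baseline = lambda: sum(x * x for x in data)
shimmed = lambda: sum(map(lambda x: x * x, data))

benchmark("toy_hot_path", baseline, shimmed, number=2000)
```

<p>The improvement column follows from the speedup as 1 - 1/speedup, which is why 1.85x shows up as +45.9%.</p>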
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45968461">https://news.ycombinator.com/item?id=45968461</a></p>
<p>Points: 27</p>
<p># Comments: 9</p>
]]></description><pubDate>Tue, 18 Nov 2025 16:32:16 +0000</pubDate><link>https://github.com/neul-labs/fast-litellm</link><dc:creator>ticktockten</dc:creator><comments>https://news.ycombinator.com/item?id=45968461</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45968461</guid></item><item><title><![CDATA[Show HN: FastWorker – Task queue for Python with no external dependencies]]></title><description><![CDATA[
<p>I built FastWorker after getting tired of deploying Celery + Redis for simple background tasks in FastAPI apps. Every time I needed to offload work from API requests, I had to manage 4-6 separate services. For small projects, this felt like overkill.<p>FastWorker is a brokerless task queue requiring only Python processes. No Redis, no RabbitMQ – just 2-3 Python services instead of 4-6+.<p>---<p>Quick example:<p><pre><code># tasks.py
from fastworker import task

@task
def send_email(to: str, subject: str):
    return {"sent": True}
</code></pre><p><pre><code># FastAPI app
from fastapi import FastAPI
from fastworker import Client

app = FastAPI()
client = Client()

@app.post("/send/")
async def send_notification(email: str):
    task_id = await client.delay("send_email", email, "Welcome!")
    return {"task_id": task_id}
</code></pre>Start workers:<p><pre><code>fastworker control-plane --task-modules tasks
fastworker subworker --task-modules tasks  # optional
</code></pre>---<p>Architecture: Uses NNG messaging for direct peer-to-peer communication. Control plane coordinates task distribution via priority heap and tracks worker load. Workers auto-discover via discovery socket. Results cached in-memory with LRU/TTL.<p>Designed for: Moderate-scale Python apps (1K-10K tasks/min) doing background processing – image resizing, report generation, emails, webhooks. Great for FastAPI/Flask/Django.<p>NOT for: Extreme scale (100K+ tasks/min), multi-language stacks, or systems requiring persistent task storage. For those, use Celery/RabbitMQ/Kafka.<p>Try it:<p>pip install fastworker<p>Repo: <a href="https://github.com/neul-labs/fastworker" rel="nofollow">https://github.com/neul-labs/fastworker</a><p>FastAPI integration docs: <a href="https://github.com/neul-labs/fastworker/blob/main/docs/fastapi.md" rel="nofollow">https://github.com/neul-labs/fastworker/blob/main/docs/fasta...</a><p>Would love feedback on whether this fills a useful niche or if the limitations make it too narrow.</p>
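<p>A toy sketch of the control-plane scheduling described above (priority heap plus a bounded LRU result cache); the NNG transport and worker discovery are not modeled, and `ControlPlane` is an illustrative class, not FastWorker's API:</p>

```python
import heapq
from collections import OrderedDict

class ControlPlane:
    """Minimal in-process model: tasks in a priority heap, results in
    a bounded LRU cache. Lower priority number runs first."""

    def __init__(self, cache_size=128):
        self._heap = []
        self._seq = 0  # insertion counter breaks priority ties FIFO
        self._results = OrderedDict()
        self._cache_size = cache_size

    def submit(self, task_id, fn, priority=10):
        heapq.heappush(self._heap, (priority, self._seq, task_id, fn))
        self._seq += 1

    def run_next(self):
        _, _, task_id, fn = heapq.heappop(self._heap)
        self._results[task_id] = fn()
        self._results.move_to_end(task_id)
        while len(self._results) > self._cache_size:  # LRU eviction
            self._results.popitem(last=False)
        return task_id

cp = ControlPlane()
cp.submit("low", lambda: "later", priority=20)
cp.submit("high", lambda: "first", priority=1)
print(cp.run_next())  # → high
```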
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=45952809">https://news.ycombinator.com/item?id=45952809</a></p>
<p>Points: 1</p>
<p># Comments: 0</p>
]]></description><pubDate>Mon, 17 Nov 2025 11:51:01 +0000</pubDate><link>https://github.com/neul-labs/fastworker</link><dc:creator>ticktockten</dc:creator><comments>https://news.ycombinator.com/item?id=45952809</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=45952809</guid></item><item><title><![CDATA[New comment by ticktockten in "Show HN: Zyler – AI agent for marketing data that doesn't hallucinate"]]></title><description><![CDATA[
<p>Yes! We are looking to add Meta, LinkedIn, and TikTok. There is a lot of demand for cross-channel analytics.</p>
]]></description><pubDate>Tue, 24 Jun 2025 11:52:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=44365155</link><dc:creator>ticktockten</dc:creator><comments>https://news.ycombinator.com/item?id=44365155</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44365155</guid></item><item><title><![CDATA[New comment by ticktockten in "Show HN: Zyler – AI agent for marketing data that doesn't hallucinate"]]></title><description><![CDATA[
<p>Thanks for checking us out!<p>We are looking to integrate more data connectors.<p>Do you have any suggestions? We are focused on the Google ecosystem to begin with.</p>
]]></description><pubDate>Tue, 24 Jun 2025 10:39:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=44364698</link><dc:creator>ticktockten</dc:creator><comments>https://news.ycombinator.com/item?id=44364698</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44364698</guid></item><item><title><![CDATA[Show HN: Zyler – AI agent for marketing data that doesn't hallucinate]]></title><description><![CDATA[
<p>My co-founder Suryansh (product marketing expert) and I were burning way too many nights debugging why our startup's conversion rates were tanking. I'd spent years building adtech and martech software, but we were still juggling between Google Analytics, Google Ads, YouTube Analytics, and SEO tools – each with its own dashboard that felt like an archaeological expedition. After losing <i>6+ hours weekly</i> just to connect the dots between platforms, we realized we needed something fundamentally different.<p>So we built Zyler – an AI agent that connects to all your marketing channels (Google Analytics, Ads, SEO, YouTube) and generates unified insights through natural language, without the hallucination problem that plagues most AI analytics tools.<p><i>The core problem:</i> Marketing teams are drowning in fragmented data across multiple platforms right as Google deprecates cookies in 2025. You need one dashboard for GA, another for Google Ads, a third for SEO metrics, YouTube analytics somewhere else – and none of them talk to each other. Most AI solutions hallucinate when trying to connect these dots.<p>What makes this technically interesting:<p><i>Zero-hallucination architecture:</i> We've focused heavily on accuracy over creativity – the AI only makes claims it can back up with your actual data across platforms<p><i>Natural language to analytics translation:</i> No more switching between 4+ dashboards to understand campaign performance<p><i>Mobile-first analytics:</i> First mobile-compatible multi-platform analytics AI agent<p><i>Real-time processing:</i> Instant insights from large datasets across GA, Ads, SEO, and YouTube without the usual loading screens<p><i>Cross-platform insights:</i> Ask "Why did my YouTube ads perform better than Google Ads last month?" and get unified analysis across all your channels<p>The drag-and-drop interface makes anyone a "marketing data expert" in about 5 minutes.
We're seeing customers reduce reporting time by 80% while getting unified insights across all their marketing channels.<p><i>Early traction:</i> Hit Product Hunt top 10, users across 6 continents, and some customers seeing 300% ROI increases. All organic growth so far – no funding raised yet, just Suryansh and me bootstrapping this.<p><i>Pricing:</i> $50/month (down from our original $99 pricing) vs the $1000+/month enterprise alternatives, which makes unified marketing analytics accessible for startups and small teams.<p>It's still rough around the edges, and we're working on expanding to Meta Ads and LinkedIn next.<p>I'd love feedback, especially on:
<p>- What other marketing platforms should we integrate next? (Meta, LinkedIn, TikTok?)<p>- How do you currently handle cross-platform marketing attribution?<p>- Any interest in an API for custom integrations?<p>Try it out at <a href="https://www.zyler.ai" rel="nofollow">https://www.zyler.ai</a> – there's a free tier to test with your marketing data.<p>Happy to answer any technical questions about the AI architecture or product direction – Suryansh can speak to the marketing side and I can dive deep on the engineering!</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=44364508">https://news.ycombinator.com/item?id=44364508</a></p>
<p>Points: 7</p>
<p># Comments: 4</p>
]]></description><pubDate>Tue, 24 Jun 2025 10:04:45 +0000</pubDate><link>https://www.zyler.ai</link><dc:creator>ticktockten</dc:creator><comments>https://news.ycombinator.com/item?id=44364508</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44364508</guid></item><item><title><![CDATA[New comment by ticktockten in "Show HN: Unzoi – Research tool that minimizes information misrepresentation"]]></title><description><![CDATA[
<p>Hey, can you try again?<p>Thanks!</p>
]]></description><pubDate>Tue, 01 Apr 2025 09:57:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=43544878</link><dc:creator>ticktockten</dc:creator><comments>https://news.ycombinator.com/item?id=43544878</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43544878</guid></item><item><title><![CDATA[New comment by ticktockten in "Show HN: Unlob – A search engine that reads websites and gives you direct answer"]]></title><description><![CDATA[
<p>That makes sense, specifically for technical guides.<p>It's not well designed for those cases, for sure.<p>Would you be up for a chat if I drop you an email? More than happy to learn your general views :)</p>
]]></description><pubDate>Mon, 31 Mar 2025 18:40:24 +0000</pubDate><link>https://news.ycombinator.com/item?id=43538247</link><dc:creator>ticktockten</dc:creator><comments>https://news.ycombinator.com/item?id=43538247</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43538247</guid></item><item><title><![CDATA[New comment by ticktockten in "Ask HN: What are you working on? (March 2025)"]]></title><description><![CDATA[
<p>I have been working towards building and understanding the shift in search / information retrieval.<p>To that end, I just did a Show HN on a couple of my projects:<p>What if Google did not focus on hyper-commercialisation? 10 blue links, but sorted by how well they answer your query - <a href="https://www.unlob.com" rel="nofollow">https://www.unlob.com</a><p>Can we answer questions with fewer hallucinations? A snippet-cited answer engine which only picks links that focus on answering your query - <a href="https://www.unzoi.com" rel="nofollow">https://www.unzoi.com</a></p>
]]></description><pubDate>Mon, 31 Mar 2025 15:29:18 +0000</pubDate><link>https://news.ycombinator.com/item?id=43536171</link><dc:creator>ticktockten</dc:creator><comments>https://news.ycombinator.com/item?id=43536171</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43536171</guid></item><item><title><![CDATA[New comment by ticktockten in "Show HN: Unlob – A search engine that reads websites and gives you direct answer"]]></title><description><![CDATA[
<p>Here is a follow-up from the other tool I am working on:<p><a href="https://www.unzoi.com/query/set-up-model-context-protocol-server-postgresql" rel="nofollow">https://www.unzoi.com/query/set-up-model-context-protocol-se...</a></p>
]]></description><pubDate>Mon, 31 Mar 2025 15:25:01 +0000</pubDate><link>https://news.ycombinator.com/item?id=43536119</link><dc:creator>ticktockten</dc:creator><comments>https://news.ycombinator.com/item?id=43536119</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43536119</guid></item><item><title><![CDATA[Show HN: Unzoi – Research tool that minimizes information misrepresentation]]></title><description><![CDATA[
<p>I built Unzoi, a research tool designed to address critical issues I've observed with existing AI assistants and search tools.<p>Current research tools often struggle with accurately representing source material. They frequently introduce factual errors, misattribute quotes, present opinions as facts, and fail to provide sufficient context.<p>Many AI assistants also add their own interpretations or editorializing that wasn't present in the original sources.<p>Unzoi takes a different approach by:<p>- Extracting only information that directly answers the query<p>- Maintaining critical context to prevent misunderstandings<p>- Clearly differentiating between facts and opinions from sources<p>- Avoiding the introduction of unsourced claims or commentary<p>- Preserving the integrity of quotes without alteration<p>The tool is particularly useful for researching complex topics where accuracy is essential and misrepresentation could be harmful.<p>Some example queries:<p>- Video games with highest learning curves for new players: <a href="https://www.unzoi.com/query/video-games-have-highest-learning-curves-new-players" rel="nofollow">https://www.unzoi.com/query/video-games-have-highest-learnin...</a><p>- Setting up model context protocol servers with PostgreSQL: <a href="https://www.unzoi.com/query/set-up-model-context-protocol-server-postgresql" rel="nofollow">https://www.unzoi.com/query/set-up-model-context-protocol-se...</a><p>- Eligibility criteria for assisted dying in the UK: <a href="https://www.unzoi.com/query/eligibility-criteria-assisted-dying-uk" rel="nofollow">https://www.unzoi.com/query/eligibility-criteria-assisted-dy...</a><p>I'd appreciate feedback from the HN community on both the concept and implementation.<p>How do you currently handle these challenges in your research workflows?</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43536101">https://news.ycombinator.com/item?id=43536101</a></p>
<p>Points: 2</p>
<p># Comments: 2</p>
]]></description><pubDate>Mon, 31 Mar 2025 15:23:46 +0000</pubDate><link>https://news.ycombinator.com/item?id=43536101</link><dc:creator>ticktockten</dc:creator><comments>https://news.ycombinator.com/item?id=43536101</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43536101</guid></item><item><title><![CDATA[New comment by ticktockten in "Show HN: Unlob – A search engine that reads websites and gives you direct answer"]]></title><description><![CDATA[
<p>Well, the goal is not to take anyone's job. Ideally, we make our present tools a lot better and more trustworthy.<p>Thanks for trying it out!</p>
]]></description><pubDate>Mon, 31 Mar 2025 10:59:42 +0000</pubDate><link>https://news.ycombinator.com/item?id=43533501</link><dc:creator>ticktockten</dc:creator><comments>https://news.ycombinator.com/item?id=43533501</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43533501</guid></item><item><title><![CDATA[Show HN: Unlob – A search engine that reads websites and gives you direct answer]]></title><description><![CDATA[
<p>I built Unlob because I was tired of search engines that only show links and AI tools that hallucinate answers. Unlob actually reads websites for you and extracts specific information directly from reliable sources.<p>Instead of:<p>
- Wading through pages of SEO-optimized garbage<p>- Getting vague AI-generated text with no citations<p>- Opening 10+ tabs to find one piece of information<p>Unlob gives you precise answers with direct attribution to real sources. No AI hallucinations, no sponsored results pushing products, just factual information extracted from the web.<p>Try these examples:<p>- When is John Wick 5 coming out - <a href="https://www.unlob.com/lib/john-wick-5" rel="nofollow">https://www.unlob.com/lib/john-wick-5</a><p>- What's the best telescope under $500? - <a href="https://www.unlob.com/lib/whats-best-telescope-under-500" rel="nofollow">https://www.unlob.com/lib/whats-best-telescope-under-500</a><p>- Best low-cost robotic arms available today - <a href="https://www.unlob.com/lib/best-low-cost-robotic-arms-available" rel="nofollow">https://www.unlob.com/lib/best-low-cost-robotic-arms-availab...</a><p>I'd love your feedback, especially on search accuracy and the types of queries where this approach works best.<p>What kind of searches would you try with a tool like this?</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43528648">https://news.ycombinator.com/item?id=43528648</a></p>
<p>Points: 3</p>
<p># Comments: 5</p>
]]></description><pubDate>Sun, 30 Mar 2025 23:02:32 +0000</pubDate><link>https://www.unlob.com</link><dc:creator>ticktockten</dc:creator><comments>https://news.ycombinator.com/item?id=43528648</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43528648</guid></item></channel></rss>