<?xml version="1.0" encoding="UTF-8"?>
  <rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
      <title>Karan Prasad - Founder, Obvix Labs</title>
      <link>https://karanprasad.com/blog</link>
      <description>Founder of Obvix Labs, an AI research lab building systems people can rely on. I write about what we&#39;re learning - AI safety, backend architecture, and what it takes to ship real software.</description>
      <language>en-us</language>
      <managingEditor>hello@karanprasad.com (Karan Prasad)</managingEditor>
      <webMaster>hello@karanprasad.com (Karan Prasad)</webMaster>
      <lastBuildDate>Thu, 02 Apr 2026 00:00:00 GMT</lastBuildDate>
      <atom:link href="https://karanprasad.com/feed.xml" rel="self" type="application/rss+xml"/>
      
  <item>
    <guid>https://karanprasad.com/blog/how-claude-code-actually-works-reverse-engineering-512k-lines</guid>
    <title>How Claude Code Actually Works: Reverse-Engineering 512K Lines of Production AI Agent</title>
    <link>https://karanprasad.com/blog/how-claude-code-actually-works-reverse-engineering-512k-lines</link>
    <description>Everyone found the Easter eggs. We mapped the engineering. 82 documents, 112K lines of original analysis, and 16 architectural diagrams: the complete architectural reconstruction of how Anthropic builds a production AI agent.</description>
    <pubDate>Thu, 02 Apr 2026 00:00:00 GMT</pubDate>
    <author>hello@karanprasad.com (Karan Prasad)</author>
    <category>ai</category><category>reverse-engineering</category><category>architecture</category><category>security</category>
  </item>

  <item>
    <guid>https://karanprasad.com/blog/80433-trials-llm-sycophancy</guid>
    <title>I Ran 80,433 Trials to Measure LLM Sycophancy. Here&#39;s What Actually Drives It.</title>
    <link>https://karanprasad.com/blog/80433-trials-llm-sycophancy</link>
    <description>Does filling up a model&#39;s context window make it more likely to agree with you when you&#39;re wrong? I ran 80,433 trials across 6 models to find out. Context length matters less than you&#39;d think - the conversational pattern is what really drives sycophancy.</description>
    <pubDate>Sat, 21 Mar 2026 00:00:00 GMT</pubDate>
    <author>hello@karanprasad.com (Karan Prasad)</author>
    <category>ai</category><category>research</category>
  </item>

  <item>
    <guid>https://karanprasad.com/blog/perplexity-pplx-embed-context-aware-embeddings-rag</guid>
    <title>Your RAG Pipeline Has a Context Problem. Perplexity Just Open-Sourced the Fix.</title>
    <link>https://karanprasad.com/blog/perplexity-pplx-embed-context-aware-embeddings-rag</link>
    <description>A deep technical breakdown of Perplexity&#39;s pplx-embed-v1 and pplx-embed-context-v1 - diffusion-pretrained, INT8-native embedding models that fix RAG&#39;s oldest problem: context loss at chunk boundaries. Architecture, benchmarks, tradeoffs, and what changes for developers.</description>
    <pubDate>Sat, 07 Mar 2026 00:00:00 GMT</pubDate>
    <author>hello@karanprasad.com (Karan Prasad)</author>
    <category>ai</category><category>research</category><category>rag</category>
  </item>

  <item>
    <guid>https://karanprasad.com/blog/vl-jepa-embedding-prediction-vision-language-models</guid>
    <title>Building Real-Time Vision Models: How VL-JEPA Achieves 2.85× Faster Inference</title>
    <link>https://karanprasad.com/blog/vl-jepa-embedding-prediction-vision-language-models</link>
    <description>Meta&#39;s VL-JEPA predicts embeddings instead of tokens, achieving a 50% parameter reduction and 2.85× faster inference. Here&#39;s how embedding prediction changes vision-language models.</description>
    <pubDate>Thu, 01 Jan 2026 00:00:00 GMT</pubDate>
    <author>hello@karanprasad.com (Karan Prasad)</author>
    <category>ai</category><category>research</category>
  </item>

  <item>
    <guid>https://karanprasad.com/blog/modern-customer-support-architecture-deep-dive</guid>
    <title>Modern Customer Support Architecture: An Engineer&#39;s Deep Dive</title>
    <link>https://karanprasad.com/blog/modern-customer-support-architecture-deep-dive</link>
    <description>A technical blueprint for architects and engineers building the next generation of customer support. Learn how to leverage LLMs, vector databases, and event-driven architecture to move from reactive queues to predictive, intelligent, and scalable support systems.</description>
    <pubDate>Wed, 03 Dec 2025 00:00:00 GMT</pubDate>
    <author>hello@karanprasad.com (Karan Prasad)</author>
    <category>research</category><category>automation</category>
  </item>

  <item>
    <guid>https://karanprasad.com/blog/macbook-screen-glitch-software-bug</guid>
    <title>How I Beat a Persistent MacBook Screen Glitch with a Hidden macOS Command</title>
    <link>https://karanprasad.com/blog/macbook-screen-glitch-software-bug</link>
    <description>Solved a persistent MacBook M1 screen glitch by resetting the LaunchServices database using lsregister -kill, fixing system UI and flicker issues instantly after all else failed.</description>
    <pubDate>Sun, 05 Oct 2025 00:00:00 GMT</pubDate>
    <author>hello@karanprasad.com (Karan Prasad)</author>
    <category>macOS</category><category>how to</category>
  </item>

  <item>
    <guid>https://karanprasad.com/blog/the-dark-side-of-ai-validation</guid>
    <title>When the Mirror Always Says “Yes”: Why Today’s AI Chatbots Can Hurt More Than Help</title>
    <link>https://karanprasad.com/blog/the-dark-side-of-ai-validation</link>
    <description>AI chatbots that always agree can feel supportive, but they risk reinforcing self-doubt, overthinking, and emotional dependence, especially in vulnerable people. They often miss warning signs like crisis or delusional thinking. Safer designs need more grounding, challenge, memory, and human oversight. Use AI as a companion, not a counselor.</description>
    <pubDate>Mon, 22 Sep 2025 00:00:00 GMT</pubDate>
    <author>hello@karanprasad.com (Karan Prasad)</author>
    <category>ai</category><category>research</category>
  </item>

  <item>
    <guid>https://karanprasad.com/blog/self-hosted-llm-conversational-ai-stack</guid>
    <title>Building a Full‑Stack, Policy‑Agnostic Conversational Commerce AI with FSM &amp; ML-Driven CTA</title>
    <link>https://karanprasad.com/blog/self-hosted-llm-conversational-ai-stack</link>
    <description>How I architected a modular, self‑hosted persona engine using FSMs, microservices, and ML for conversion-focused chat systems</description>
    <pubDate>Tue, 05 Aug 2025 00:00:00 GMT</pubDate>
    <author>hello@karanprasad.com (Karan Prasad)</author>
    <category>ai</category><category>learning</category>
  </item>

  <item>
    <guid>https://karanprasad.com/blog/ethereum-holder-scraper-pipeline-twitter-linkedin</guid>
    <title>End-to-End Ethereum Holder Scraper: Token to Twitter &amp; LinkedIn</title>
    <link>https://karanprasad.com/blog/ethereum-holder-scraper-pipeline-twitter-linkedin</link>
    <description>Learn how a single Puppeteer token hack powered a lightweight Python pipeline that maps top Ethereum holders to their Twitter &amp; LinkedIn profiles.</description>
    <pubDate>Fri, 01 Aug 2025 00:00:00 GMT</pubDate>
    <author>hello@karanprasad.com (Karan Prasad)</author>
    <category>ai</category><category>learning</category>
  </item>

  <item>
    <guid>https://karanprasad.com/blog/control-your-stack-dump-the-black-boxes</guid>
    <title>Control Your Stack: Dump the Black Boxes</title>
    <link>https://karanprasad.com/blog/control-your-stack-dump-the-black-boxes</link>
    <description>How we won a 36-hour hackathon by ditching complex AI orchestration frameworks for a bare-bones RAG implementation. A practical guide to building real-time Google Drive knowledge bases, debugging dependency hell, and why simple microservices beat opaque abstractions in high-pressure environments.</description>
    <pubDate>Sat, 26 Jul 2025 00:00:00 GMT</pubDate>
    <author>hello@karanprasad.com (Karan Prasad)</author>
    <category>ai</category><category>automation</category>
  </item>

    </channel>
  </rss>
