<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>AI on Yonk-Labs</title><link>https://yonk.dev/tags/ai/</link><description>Recent content in AI on Yonk-Labs</description><generator>Hugo -- gohugo.io</generator><language>en</language><copyright>© 2026 Yonk-Labs</copyright><lastBuildDate>Thu, 16 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://yonk.dev/tags/ai/index.xml" rel="self" type="application/rss+xml"/><item><title>Building the GraphRAG Demo: 391 SCOTUS Cases, Four Retrieval Strategies</title><link>https://yonk.dev/blog/graphrag-part3-scotus-showdown/</link><pubDate>Thu, 16 Apr 2026 00:00:00 +0000</pubDate><guid>https://yonk.dev/blog/graphrag-part3-scotus-showdown/</guid><description>Part 3 of 3. 391 real SCOTUS cases, four retrieval strategies running side by side, multi-hop Cypher that no hybrid search can match, and the production-ready 3-stage architecture you should actually ship.</description><media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://yonk.dev/blog/graphrag-part3-scotus-showdown/feature.jpg"/></item><item><title>GraphRAG in Postgres: Vector+BM25 Is the Floor. Graph Is the Multiplier.</title><link>https://yonk.dev/blog/graphrag-part1-vector-vs-graph/</link><pubDate>Tue, 14 Apr 2026 00:00:00 +0000</pubDate><guid>https://yonk.dev/blog/graphrag-part1-vector-vs-graph/</guid><description>Part 1 of 3. Users ask three shapes of questions and only one of them needs a graph. 
Honest benchmarks, a 3-stage retrieval architecture, and why graph is a multiplier — not a replacement.</description><media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://yonk.dev/blog/graphrag-part1-vector-vs-graph/feature.jpg"/></item><item><title>Training Sets for LoRA: How to Teach a 4B Model to Write Postgres SQL Without Crying</title><link>https://yonk.dev/blog/training-sets-for-lora-nl2sql/</link><pubDate>Mon, 13 Apr 2026 00:00:00 +0000</pubDate><guid>https://yonk.dev/blog/training-sets-for-lora-nl2sql/</guid><description>A layered training corpus — domain pairs, public ballast, and a reusable Postgres syntax corpus — is 80% of the work for a NL2SQL LoRA. The training config is YAML and patience.</description><media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://yonk.dev/blog/training-sets-for-lora-nl2sql/feature.jpg"/></item><item><title>Your LLM Doesn't Know What 'Revenue' Means. Here's How to Fix That.</title><link>https://yonk.dev/blog/building-your-semantic-layer/</link><pubDate>Sun, 05 Apr 2026 00:00:00 +0000</pubDate><guid>https://yonk.dev/blog/building-your-semantic-layer/</guid><description>A step-by-step walkthrough of building a PostgreSQL semantic layer in pg_agents — crawl the schema, enrich it, define your vocabulary, lock down categoricals, and promote the queries that work.</description><media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://yonk.dev/blog/building-your-semantic-layer/feature.jpg"/></item><item><title>Why LLMs Alone Fail at NL2SQL (And What Actually Fixes It)</title><link>https://yonk.dev/blog/why-llms-fail-at-nl2sql/</link><pubDate>Fri, 03 Apr 2026 00:00:00 +0000</pubDate><guid>https://yonk.dev/blog/why-llms-fail-at-nl2sql/</guid><description>Raw LLMs hit 10-20% accuracy on real enterprise schemas with cryptic column names and tribal-knowledge joins. 
Here&amp;rsquo;s why, and the semantic-layer fix that takes you from toy to production.</description><media:content xmlns:media="http://search.yahoo.com/mrss/" url="https://yonk.dev/blog/why-llms-fail-at-nl2sql/feature.jpg"/></item></channel></rss>