
Single Page Applications (SPAs): Why Do They Need an llms.txt File?


The Invisible Website: Why Client-Side Rendering Kills AI Discovery, and Why llms.txt Fixes It

If your website is built with a Client-Side Rendering (CSR) framework, such as React, Vue, or Angular, without Server-Side Rendering (SSR), your content is likely invisible to AI agents.

Many website creation tools, such as Lovable, produce Single Page Applications (SPAs). Lovable is excellent at building SPAs from AI prompts, generating code (such as React with Vite) and handling dynamic sections. That same SPA nature, however, means Lovable is designed more for web apps than for traditional SEO- or AEO-focused sites.

    When a bot hits a Lovable/React SPA, it sees this:

    <!DOCTYPE html>
    <html>
    <head>
    <title>My App</title>
    </head>
    <body>
    <div id="root"></div>
    <script src="/bundle.js"></script>
    </body>
    </html>

    Here is the technical reality of why this happens and why llms.txt is the mandatory patch.

    1. The "View Source" Trap

    When a human opens your SPA, their browser downloads a bundle of JavaScript, executes it, and populates the DOM. The content appears.

    When an AI bot (like GPTBot or a custom RAG scraper) hits your URL, it typically performs a simple GET request. It does not spin up a headless browser (like Puppeteer) to execute JavaScript because that is computationally expensive and slow.

    Result: Zero content tokens. To an AI, your website is an empty shell.
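The gap is easy to demonstrate: strip the tags from the shell above and see what a text-only scraper actually gets. A minimal sketch using only the Python standard library (the HTML is the example shell from above, not a live site):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text the way a simple, non-rendering scraper would."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = False  # ignore <script>/<style> contents

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

# The SPA shell shown above: all real content lives in bundle.js
spa_shell = """<!DOCTYPE html>
<html><head><title>My App</title></head>
<body><div id="root"></div><script src="/bundle.js"></script></body></html>"""

parser = TextExtractor()
parser.feed(spa_shell)
print(parser.chunks)  # ['My App'] -- the <title> is the only text a bot sees
```

Everything the page actually says is locked inside the JavaScript bundle, which this extractor never executes.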

    2. The Compute Economy

Googlebot has a vast rendering infrastructure dedicated to executing JavaScript. AI agents do not.

AI search engines (such as Perplexity and ChatGPT search) and RAG agents optimize for speed and token efficiency. Spinning up a JavaScript engine for every crawled URL would increase their compute costs by orders of magnitude.

    Therefore, they default to raw text extraction. If your content requires hydration to exist, it gets skipped.
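That trade-off is typically implemented as a cheap gate: fetch the raw HTML, and if it carries almost no visible text, drop the page rather than pay for a headless browser. A hypothetical sketch of that decision (the 200-character threshold is an illustrative assumption, not any engine's documented cutoff):

```python
import re

def worth_indexing(raw_html: str, min_chars: int = 200) -> bool:
    """Cheap gate: keep a page only if its raw HTML carries enough visible text.

    Rendering JavaScript costs orders of magnitude more than this check,
    so a budget-conscious crawler simply skips hydration-only shells.
    """
    # Drop script/style blocks, then all remaining tags.
    text = re.sub(r"(?is)<(script|style)\b.*?</\1>", " ", raw_html)
    text = re.sub(r"(?s)<[^>]+>", " ", text)
    text = " ".join(text.split())
    return len(text) >= min_chars

spa_shell = '<html><body><div id="root"></div><script src="/bundle.js"></script></body></html>'
print(worth_indexing(spa_shell))  # False -- the empty shell gets skipped
```

A server-rendered page with real paragraphs passes the same gate immediately; the SPA shell never does.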

    3. The llms.txt Bypass

    An llms.txt file is a static Markdown file sitting in your root directory. It requires no rendering, no hydration, and no JavaScript execution.

    It functions as a pre-rendered snapshot of your core data.

    • SPA approach: Bot requests URL → Receives empty HTML → Fails to render → Leaves.
    • llms.txt approach: Bot requests /llms.txt → Receives raw text → Ingests data immediately.
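For reference, a minimal llms.txt follows the proposed convention: an H1 with the site name, a blockquote summary, and H2 sections listing key links. The names and URLs below are placeholders:

```markdown
# My App

> One-sentence summary of what the site does and who it is for.

## Docs

- [Getting started](https://example.com/docs/start.md): installation and first steps
- [API reference](https://example.com/docs/api.md): endpoints and parameters

## Optional

- [Changelog](https://example.com/changelog.md): release history
```

A bot can ingest this in a single GET request, with zero rendering cost.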

    The Conclusion

    If you are running a CSR application, you have two choices:

    1. Implement Server-Side Rendering (SSR) or Static Site Generation (SSG).
    2. Host a simple llms.txt file.
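For a Vite-based SPA (the stack Lovable generates), option 2 takes one step: anything placed in public/ is copied verbatim to the build output root, so it is served as plain static text with no rendering involved. A sketch with placeholder content:

```shell
# In a Vite project, files in public/ are copied verbatim to the build
# output root, so public/llms.txt ends up served at /llms.txt after deploy.
mkdir -p public
printf '# My App\n\n> One-line summary for AI crawlers.\n' > public/llms.txt
cat public/llms.txt
```

After `npm run build`, the file appears unchanged at `dist/llms.txt`; no SSR, hydration, or routing changes are needed.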

    If you don't do either, it's harder for LLMs to cite you.

The patch is simple: create an llms.txt file and place it in your root directory.

    Here's a short blog post on "Why your LLMs.txt file needs engineering" and how EZY scored 94% for LLMs.txt Answer Engine Optimization (AEO) performance: https://ezy.ai/blog/rankmath-vs-ezy-ai-why-your-llms-txt-needs-engineering

    Ready to Revolutionize Your AI Visibility?

    Join the AI SEO revolution with EZY.ai and get your business found on ChatGPT and AI search platforms.

    Get Started Free