AI-Ready by Design

How Telaio's fixed stack and phantom types reduce surface area for LLM coding agents.

Modern frameworks optimize for human flexibility: plug in any ORM, swap any cache layer, wire up any auth provider. That flexibility is valuable when a human is making deliberate architectural choices. But when an LLM coding agent is working on your codebase, flexibility becomes a liability. Every option is a decision the agent can get wrong. Every adapter is a configuration it can misconfigure.

Telaio was not built for AI. But its design choices -- a fixed stack, phantom types, a builder pattern -- happen to be exactly what makes AI-assisted development effective. Constraints, determinism, and tight feedback loops are what LLM agents need to produce correct code reliably.

The Surface Area Problem

When an LLM works on your codebase, every file it reads consumes context window budget. Infrastructure files -- config, module wiring, dependency injection setup -- are noise. They are not your business logic, but the LLM must understand them to modify anything safely.

A Telaio app's entire infrastructure story fits in a single builder chain. One file. One pattern. Compare that to what other frameworks require for the same information:

| Aspect | NestJS | Sails.js | Raw Fastify | Telaio |
| --- | --- | --- | --- | --- |
| Config files in fresh project | ~8-10 | ~40+ | 2-5 (varies per project) | 1-2 |
| Files per CRUD resource | 6-9 | 1-3 | 1-2 | 1-2 |
| Module/wiring ceremony | 1 file per feature | Convention-based | Manual (decorators/plugins) | None (builder chain) |
| Files to understand full app structure | 15-25 | 20-40 | 5-15 (varies wildly) | 2-3 |
| Standardized across projects | Yes (conventions) | Yes (conventions) | No (every project differs) | Yes (builder pattern) |

The fewer files an LLM must read, the more of its context window is available for the code that actually matters: your domain logic, your route handlers, your queries.

Telaio vs. Raw Fastify

The comparison above covers NestJS and Sails.js, but the most natural question is: why not just use Fastify directly? Telaio is Fastify underneath, so what does the builder layer add for AI-assisted development?

Route-level code is identical. Telaio does not wrap or hide Fastify. fastify.get(), plugins, hooks, TypeBox schemas -- all standard Fastify. An LLM's existing Fastify knowledge transfers directly. There is no translation layer, no custom request/response abstraction, no framework-specific routing DSL.

The difference is everything around routes. In a raw Fastify project, database pools, cache clients, queue setup, config loading, and graceful shutdown are all hand-wired. Every project is a bespoke arrangement of imports, decorators, and plugin registrations. Two experienced Fastify developers will wire the same features in two completely different ways.

Consider setting up a Fastify app with a database pool and Redis cache:

```typescript
// Raw Fastify -- manual wiring (simplified, real projects are worse)
import Fastify from 'fastify';
import pg from 'pg';
import Redis from 'ioredis';

const pool = new pg.Pool({
  host: process.env.DB_HOST,
  port: Number(process.env.DB_PORT),
  database: process.env.DB_NAME,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  max: Number(process.env.DB_POOL_MAX ?? 10),
});

const redis = new Redis({
  host: process.env.REDIS_HOST,
  port: Number(process.env.REDIS_PORT),
});

const fastify = Fastify({ logger: true });
fastify.decorate('pool', pool);
fastify.decorate('redis', redis);

fastify.addHook('onClose', async () => {
  await pool.end();
  redis.disconnect();
});

// Where are the types? Which decorators exist?
// An LLM has to read every file to find out.
```

```typescript
// Telaio -- the builder chain IS the documentation
const app = await createApp(config)
  .withDatabase()
  .withCache()
  .build();

// app.db, app.cache -- typed, available, discoverable.
// Graceful shutdown handled. Config validated at startup.
```

The raw Fastify version scatters wiring across environment variable parsing, constructor calls, decorator registrations, and shutdown hooks. An LLM reading this project must trace through all of it to understand what is available and how it is configured. The Telaio version communicates the same information in a single builder chain -- and the phantom types guarantee that app.db and app.cache exist and are correctly typed.

For an LLM, every custom Fastify project is an unknown framework. The wiring in Project A -- its specific decorators, its config loading pattern, its error handling, its shutdown logic -- teaches the LLM nothing about Project B's wiring. Telaio standardizes that wiring, so one project's patterns apply to every project.

Why Inflexibility Helps AI

A fixed stack eliminates an entire class of decisions an LLM can get wrong.

There is no adapter pattern, so there is no wrong adapter choice. There is no ORM abstraction layer, so there is no ORM mismatch. The LLM cannot decide to use Prisma when the project uses Kysely, or reach for Memcached when the cache is Redis. These decisions are not available to it. They were made once, by the framework, and encoded into the types.

This is not a limitation for the agent -- it is a guardrail. The agent's decision space is constrained to the things that actually require decisions: your business logic, your data model, your API contracts. Not infrastructure plumbing.

Compile-Time Guardrails for AI Agents

Phantom types act as an automated code reviewer for LLM output.

If an agent generates code that calls app.cache on an app that was not built with .withCache(), the compiler rejects it. Not at runtime. Not after a deploy. Immediately, in the same feedback loop the agent uses to iterate.

```typescript
// Agent writes this...
const cached = await app.cache.get('key');
//                        ^^^^^
// Property 'cache' does not exist on type
// 'TelaioApp<{ database: true; cache: false; ... }>'
```

The agent sees the error, understands the constraint, and either adds .withCache() to the builder chain or restructures the code. No deploy-and-pray cycle. No runtime null pointer two hours later.
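Nothing exotic is needed to get this behavior. A toy sketch of how a phantom-typed builder can be encoded in TypeScript (hypothetical stand-in types; not Telaio's actual implementation):

```typescript
// Stand-ins for real clients so the sketch is self-contained.
interface DbStub { query(sql: string): string }
interface CacheStub { get(key: string): unknown; set(key: string, value: unknown): void }

// The type parameter F accumulates enabled features; it exists only at compile time.
class AppBuilder<F extends object = {}> {
  private parts: Record<string, unknown> = {};

  withDatabase(): AppBuilder<F & { db: DbStub }> {
    this.parts.db = { query: (sql: string) => `ran: ${sql}` };
    return this as unknown as AppBuilder<F & { db: DbStub }>;
  }

  withCache(): AppBuilder<F & { cache: CacheStub }> {
    const store = new Map<string, unknown>();
    this.parts.cache = {
      get: (key: string) => store.get(key),
      set: (key: string, value: unknown) => { store.set(key, value); },
    };
    return this as unknown as AppBuilder<F & { cache: CacheStub }>;
  }

  build(): F {
    return this.parts as unknown as F;
  }
}

const app = new AppBuilder().withDatabase().build();
console.log(app.db.query('select 1')); // "ran: select 1"
// app.cache.get('key');  // compile error: Property 'cache' does not exist
```

The feature set lives only in the type parameter `F`; at runtime the builder just accumulates plain objects, so the guardrail costs nothing.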

Zod config validation catches the other half of the problem. If the agent sets up environment variables incorrectly, the app fails at startup with a clear validation error -- not silently in production when the wrong config value causes a subtle bug.

This creates a tight loop: generate, compile, fix, compile. The kind of loop that LLM agents are good at.

What This Means in Practice

Consider a common task: "Add a new API endpoint that queries the database and caches the result."

In a NestJS project, the agent must understand modules, dependency injection, entity decorators, the DTO pattern, cache interceptors, service injection, and how they all wire together. It needs to read and understand 6-10 files before writing a single line of business logic.

In a Telaio project, the agent reads the builder chain to see what features exist, then writes a route handler that uses app.db and app.cache. The types guide it -- if the feature is enabled, the property exists and is fully typed. If it is not enabled, the property does not exist and the compiler says so.

```typescript
// The builder chain tells the agent everything about the app's capabilities
const app = await createApp(config)
  .withDatabase()
  .withCache()
  .build();

// The agent writes a route -- types guide every property access
fastify.get<{ Params: { id: string } }>('/users/:id', async (request) => {
  const { id } = request.params;
  const cached = await app.cache.get(`user:${id}`);
  if (cached) return cached;

  const user = await app.db
    .selectFrom('users')
    .where('id', '=', id)
    .selectAll()
    .executeTakeFirst();

  if (user) await app.cache.set(`user:${id}`, user, 300);
  return user;
});
```

The agent did not need to understand a module system, configure a dependency injection container, or find the right decorator. It read the builder chain, saw what was available, and used it. The types ensured correctness at every step.

But LLMs Don't Know Telaio

This is the fair objection, and it deserves an honest answer.

LLMs have been trained on millions of Express, Fastify, and NestJS examples. They have seen zero Telaio code in training data. When you ask an LLM to write NestJS code, it can draw on thousands of examples. When you ask it to write Telaio code, it cannot. This is a real gap.

But the gap is smaller than it appears, and the trade-off favors Telaio.

Telaio is Fastify. Route handlers, plugins, hooks, schemas, decorators -- all standard Fastify. The LLM already knows roughly 80% of what it needs to write correct code in a Telaio project. The remaining ~20% is a thin builder layer on top.

The builder pattern is self-documenting. A single createApp(config).withDatabase().withCache().build() chain communicates the app's capabilities in one line. An LLM does not need training data to understand a fluent builder -- it reads the chain, sees what is available, and the types confirm it. This is not a complex abstraction. It is a method chain that says exactly what it does.

Framework knowledge does not equal correct code. An LLM "knowing" NestJS does not stop it from misconfiguring dependency injection, wiring the wrong provider, or using a stale decorator pattern from an older version. Familiarity with a framework's API surface is not the same as producing correct code within it. Phantom types provide a correctness guarantee that training-data familiarity never will. The compiler does not care whether the LLM has seen the pattern before -- it cares whether the code is type-safe.

Project instructions close the gap. Modern AI coding tools -- Claude Code, Cursor, Windsurf, Copilot -- read project-level instructions (CLAUDE.md, .cursorrules, .github/copilot-instructions.md). Telaio projects carry their conventions in these files. The LLM learns the thin Telaio layer from the project itself, not from pre-training. A well-written CLAUDE.md with a few Telaio-specific notes is more reliable than hoping the LLM's training data included the right version of the right framework's documentation.
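What those instructions might contain (an illustrative sketch -- the file path `src/app.ts` and the conventions themselves are hypothetical):

```markdown
# CLAUDE.md (sketch)

## Stack
- Telaio on Fastify. Route handlers are standard Fastify; use TypeBox schemas.
- Database: Kysely (`app.db`). Cache: Redis (`app.cache`).

## Conventions
- All infrastructure is declared in the builder chain in `src/app.ts`.
- If a feature is missing (e.g. `app.cache` does not type-check), add the
  corresponding `.withX()` call to the builder chain; do not hand-wire clients.
- Config comes from the Zod-validated `config` object; never read
  `process.env` directly.
```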

Every bespoke Fastify project is equally "unknown". Your custom Fastify project's wiring -- its specific decorators, its config loading pattern, its error handling, its graceful shutdown logic -- is just as unknown to the LLM as Telaio. The difference is that Telaio's conventions are documented and consistent across projects. Your bespoke wiring is documented only if you documented it. Between "an unknown but consistent pattern" and "an unknown and unique-to-this-project pattern," the LLM will perform better with the former every time.
