All posts
Engineering · April 6, 2026 · 9 min read

Why We Built Property Intelligence MCP Tools With Persona-Aware Prompts (And Why ATTOM Won't)

Tool descriptions are the new SEO. Persona is an input, not a side effect. Here's how a 2-person team shipped a persona-aware MCP server in 16 days — and why enterprise data vendors are structurally unable to copy it.


Landon Reid

Founder, ReadyPermit


TL;DR: Most property data MCP servers expose raw data. We built one that exposes opinionated intelligence — with a single persona parameter that turns one tool into four, a multi-model router that sends the hard queries to Claude Sonnet 4, and tool descriptions written for agent selection, not for humans. Here's why it matters and how we did it in 16 days.


The Claim

Tool descriptions are the new SEO.

When Anthropic released MCP in November 2024, every property data vendor with a weekend looked at it the same way they looked at REST APIs: as a new distribution channel for the same raw data they already shipped. ATTOM launched a large-scale property data MCP in January 2026. Cotality, Yardi, BatchData — each of them either shipped or is shipping a version of "our data, now for agents."

They are all going to lose this fight. Not because their data is bad. Because they are treating MCP like an API when it is actually an interface layer where tool design becomes prompt design. The vendors who ship "raw data for LLMs" are going to watch the vendors who ship "opinionated intelligence" eat the consumer market, and then the SMB market, and then the market they thought they owned.

This is a story about a decision we made in 16 days and why we think it's the right one.


What Most MCP Servers Get Wrong

Here is the default shape of a property data MCP tool, which is roughly what ATTOM, Cotality, and BatchData all ship:

{
  "name": "get_property_details",
  "description": "Returns property details for a given address.",
  "inputSchema": {
    "address": { "type": "string" }
  }
}

It returns zone code, FAR, setbacks, lot size, flood zone, parcel number. Raw fields. The calling agent is expected to know what to do with them.

This works, technically. It does not work, strategically. Here's why.

Problem 1: Agents pick tools by description. When Claude has twelve tools available and a user asks "can I build an ADU on my lot?", Claude reads the tool descriptions in order and picks the one that best matches the query. A description that says "Returns property details for a given address" will lose every time to a description that says "USE WHEN: user asks 'what can I build', 'can I build an ADU', 'is this buildable', or provides any U.S. street address."

Problem 2: Raw data creates work for the agent. When an investor asks a good question — "is this a good deal?" — and the tool returns { zone: "R-2", lot_size: 6400, far: 0.5, flood_zone: "X" }, the agent has to decide what a good deal means, in the investor's voice, with the right framing. Most agents do a mediocre job of that because they don't have the domain judgment. A tool that returns a numeric score and a go/no-go recommendation gives the agent something to hand the user directly.
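To make the contrast concrete, here is a hypothetical sketch of the transformation a raw-data tool leaves to the agent: turning parcel fields into a score and a go/no-go verdict. The field names match the example above, but the thresholds, weights, and interface names are illustrative, not ReadyPermit's actual scoring model.

```typescript
// Hypothetical sketch: turning raw parcel fields into an opinionated
// verdict the agent can hand to the user directly. Thresholds and
// weights are illustrative, not ReadyPermit's actual scoring.
interface ParcelFacts {
  zone: string;
  lot_size: number;   // square feet
  far: number;        // floor-area ratio
  flood_zone: string; // FEMA designation
}

interface Verdict {
  score: number;          // 0-100
  recommendation: "go" | "caution" | "no-go";
  reasons: string[];
}

function scoreParcel(p: ParcelFacts): Verdict {
  let score = 50;
  const reasons: string[] = [];

  if (p.lot_size >= 6000) { score += 20; reasons.push("lot supports an ADU footprint"); }
  if (p.far >= 0.5)       { score += 15; reasons.push("FAR leaves buildable envelope"); }
  if (p.flood_zone === "X") { score += 15; reasons.push("minimal flood exposure"); }
  else { score -= 25; reasons.push(`flood zone ${p.flood_zone} adds insurance cost`); }

  const recommendation = score >= 70 ? "go" : score >= 45 ? "caution" : "no-go";
  return { score: Math.max(0, Math.min(100, score)), recommendation, reasons };
}

// The example parcel from the text:
const verdict = scoreParcel({ zone: "R-2", lot_size: 6400, far: 0.5, flood_zone: "X" });
// verdict.recommendation → "go", with reasons the agent can quote verbatim
```

The point is not the weights; it is that the judgment lives in the tool, so the agent relays an opinion instead of inventing one.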

Problem 3: The output shape is fixed. A homeowner asking about an ADU and an institutional lender underwriting collateral need completely different framings of the same underlying data. A raw-data tool forces the agent to reformat the response from scratch every time. A persona-aware tool lets the caller pass persona: "homeowner" or persona: "lender" and get the right shape back by default.

These are not nitpicks. They are structural. Every one of them compounds as the market matures.



The Fix: Three Design Moves

Here is what we did differently when we shipped the ReadyPermit MCP server in 603 lines of TypeScript two weeks ago.

Move 1: Tool descriptions written for agent selection

Every tool description in our server follows the same template:

  1. What it does, action-verb first
  2. USE WHEN clause with the natural-language phrases users actually say
  3. RETURNS clause with the output shape hints
  4. Cost/speed context so the agent picks efficiently

Here's the real description for analyze_property, trimmed for length:

Get complete property intelligence for any U.S. address — zoning,
buildability, flood risk, environmental hazards, and lot data from
20+ government sources.

USE WHEN: user asks 'what can I build', 'is this property buildable',
'analyze this address', 'run a report on', 'can I build an ADU',
'tell me about this property', 'is this a good deal', or provides
any U.S. street address.

RETURNS: Buildability Score (0-100), zoning code, permitted uses,
FEMA flood zone, setbacks, FAR, environmental risks, lot size,
structure info, owner, and AI recommendation. Takes ~20 seconds.
Replaces $2,000-$4,500 zoning consultant work.

That description is 658 characters. The old one-line version was 210. The difference is that Claude can now semantically match "what can I build on 123 Main Street" directly to this tool on the first pass, without wandering through the other six options.
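A small helper can enforce the four-part template so every tool description in the server has the same shape. This is a sketch under assumed names; the real server may simply inline the strings.

```typescript
// Hypothetical helper enforcing the four-part description template:
// action first, USE WHEN triggers, RETURNS hints, cost/speed context.
interface ToolDescriptionParts {
  action: string;      // what it does, action-verb first
  useWhen: string[];   // natural-language phrases users actually say
  returns: string;     // output shape hints
  costSpeed?: string;  // optional cost/speed context
}

function buildDescription(d: ToolDescriptionParts): string {
  const parts = [
    d.action,
    `USE WHEN: user asks ${d.useWhen.map(p => `'${p}'`).join(", ")}.`,
    `RETURNS: ${d.returns}`,
  ];
  if (d.costSpeed) parts.push(d.costSpeed);
  return parts.join("\n\n");
}

const desc = buildDescription({
  action: "Get complete property intelligence for any U.S. address.",
  useWhen: ["what can I build", "is this property buildable"],
  returns: "Buildability Score (0-100), zoning code, FEMA flood zone.",
  costSpeed: "Takes ~20 seconds.",
});
```

A template like this also makes it easy to A/B test trigger phrases across tools without rewriting prose by hand.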

Move 2: Persona as a single input parameter

This is the highest-leverage design decision in the whole server. Instead of shipping four different tools — one for investors, one for homeowners, one for developers, one for lenders — we shipped one tool with a persona parameter:

{
  name: "analyze_property",
  inputSchema: {
    type: "object",
    properties: {
      address: { type: "string" },
      persona: {
        type: "string",
        enum: ["investor", "developer", "homeowner", "lender", "broker"],
        description: "User persona for tailored analysis (optional). " +
          "Changes tone and emphasis: investor=deal metrics, " +
          "developer=feasibility, homeowner=plain English, " +
          "lender=collateral risk, broker=disclosure items."
      }
    },
    required: ["address"]
  }
}

One tool. Five output shapes. The schema surface stays small, the agent's decision space stays clean, and the output changes behavior based on a single optional parameter. Any time you catch yourself about to add a second tool that differs only in framing, add a persona parameter to the first tool instead.
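On the handler side, the dispatch can be a single lookup table keyed by persona. This sketch uses the enum values from the schema above; the summary strings and the `Analysis` shape are illustrative, not the production output.

```typescript
// Sketch: one analysis result, five persona-specific framings.
// Persona names match the schema enum; wording is illustrative.
type Persona = "investor" | "developer" | "homeowner" | "lender" | "broker";

interface Analysis { score: number; zone: string; floodZone: string; }

const framers: Record<Persona, (a: Analysis) => string> = {
  investor:  a => `Deal screen: score ${a.score}/100, zoned ${a.zone}.`,
  developer: a => `Feasibility: ${a.zone} envelope, score ${a.score}/100.`,
  homeowner: a => `In plain English: your lot scores ${a.score} out of 100.`,
  lender:    a => `Collateral: flood zone ${a.floodZone}, score ${a.score}/100.`,
  broker:    a => `Disclosure check: flood zone ${a.floodZone}, zoned ${a.zone}.`,
};

function frame(a: Analysis, persona: Persona = "homeowner"): string {
  return framers[persona](a);
}
```

Because the parameter is optional, agents that omit it still get a sensible default, and adding a sixth persona is one new entry in the table rather than a new tool.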

Move 3: Persona prompts as a first-class concept

MCP has a second primitive beyond tools: prompts. Most MCP servers don't use them. We ship four:

  • investor_analysis — cap rate, comps, risks, go/no-go
  • developer_feasibility — FAR, envelope, entitlement path, timeline
  • homeowner_guide — plain English, ADU eligibility, permit steps
  • lender_risk_assessment — collateral grade, flood exposure, compliance flags

These aren't tools. They're reasoning templates that package a tool call with the right context, framing, and output voice for a specific audience. When an agent wants to do an investor analysis, it doesn't just call analyze_property — it invokes the investor_analysis prompt, which gives it the tool call, the reasoning flow, and the output shape all at once.

Prompts are the MCP primitive that nobody is using. They're the leverage point.
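To show what "packaging a tool call with framing" looks like, here is a sketch of the message a prompt like investor_analysis might return. The structure follows MCP's prompt result shape (a list of role/content messages); the wording and function name are illustrative, not the server's actual prompt text.

```typescript
// Sketch of an MCP-style prompt: a reasoning template that names the
// tool to call, fixes the persona, and pins the output shape.
// Structure mirrors MCP's prompts/get result; wording is illustrative.
interface PromptMessage {
  role: "user" | "assistant";
  content: { type: "text"; text: string };
}

function investorAnalysisPrompt(address: string): { messages: PromptMessage[] } {
  return {
    messages: [{
      role: "user",
      content: {
        type: "text",
        text: [
          `Call analyze_property with address "${address}" and persona "investor".`,
          "Then report: cap rate assumptions, comparable framing, the top 3 risks,",
          "and a one-line go/no-go recommendation.",
        ].join("\n"),
      },
    }],
  };
}
```

The agent that invokes this prompt gets the tool call, the reasoning flow, and the output contract in one step, which is exactly the leverage the tools-only servers leave on the table.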


The Twist: Multi-Model Routing Under the Hood

The MCP server is the front door. Behind it, the same pipeline that powers our consumer product (readypermit.ai) routes every query across four different models based on complexity and type. The logic lives in supabase/functions/geo-chat/_modelRouter.ts:

  • Claude Sonnet 4 handles anything with a complexity score above 70, all investment and deal analysis, and all queries where persona voice matters. This is the top of the routing tree.
  • GPT-4o handles mixed calculation-and-reasoning at standard complexity.
  • GPT-4o-mini handles simple lookups and fast screens.
  • Gemini 2.0 Flash handles conversational edges that don't need tools.
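The routing rules above reduce to a short decision function. This is a minimal sketch: the complexity scorer, query typing, and model identifiers are assumptions, and the real _modelRouter.ts will differ.

```typescript
// Minimal sketch of the routing tree described above. The complexity
// score, query types, and model ids are assumptions for illustration.
type QueryType = "investment" | "lookup" | "mixed" | "chat";

interface RoutedQuery {
  complexity: number;        // 0-100
  type: QueryType;
  needsPersonaVoice: boolean;
}

function pickModel(q: RoutedQuery): string {
  // Top of the tree: hard queries, deal analysis, persona voice.
  if (q.complexity > 70 || q.type === "investment" || q.needsPersonaVoice)
    return "claude-sonnet-4";
  if (q.type === "mixed")  return "gpt-4o";        // calc + reasoning
  if (q.type === "lookup") return "gpt-4o-mini";   // simple lookups, fast screens
  return "gemini-2.0-flash";                       // conversational, no tools
}
```

Note the order: the expensive model is the first branch, not the fallback, so nothing that needs it can leak down to a cheaper model.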

This matters for the case we're making in this post for two reasons. First, the persona-aware prompts only work if the model on the other end of them actually understands persona. Claude Sonnet 4 is the best model we've tested for matching an investor's voice or a homeowner's plain-English register. Second, running every query through Claude would be expensive. Running every query through Gemini would be wrong. The router lets us use Claude for the queries where it earns its keep and cheaper models for the queries where it doesn't.

Prompt caching via cache_control: { type: 'ephemeral' } on Claude requests cuts the marginal cost of the system prompt dramatically once the cache is warm, which is the only reason the economics work for a $29 consumer report.
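For readers who haven't used Anthropic's prompt caching: the large static system prompt goes in as a content block tagged with cache_control, so repeat requests reuse the cached prefix. This sketch builds the request body only; the model id and prompt text are placeholders.

```typescript
// Sketch of the request body shape for Anthropic prompt caching: the
// static system prompt is a content block tagged cache_control ephemeral
// so repeat calls hit the cache. Model id and text are placeholders.
function buildCachedRequest(systemPrompt: string, userMessage: string) {
  return {
    model: "claude-sonnet-4",
    max_tokens: 1024,
    system: [
      {
        type: "text",
        text: systemPrompt,                     // large, static, cache-friendly
        cache_control: { type: "ephemeral" },   // cache this prefix
      },
    ],
    messages: [{ role: "user", content: userMessage }],
  };
}
```

Only the stable prefix should carry the cache marker; anything that changes per request (the address, the persona) belongs in the messages array, after the cached block.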


Why ATTOM Won't Do This

The last question: why won't the incumbents just copy this in a month?

They can't. Not because the engineering is hard (it's 603 lines of TypeScript), but for three structural reasons.

Reason 1: ATTOM sells data feeds. Their entire revenue model is licensing raw property data to enterprise customers who pay six and seven figures per year for access to it. If ATTOM started shipping opinionated intelligence — "here's our recommendation on this deal" — they would be competing with their own customers, the banks and insurers and analytics firms whose whole job is to take ATTOM data and turn it into opinions. Their sales team would kill the product before it launched.

Reason 2: Cotality has enterprise gravity. CoreLogic-now-Cotality is built for insurance carriers, institutional lenders, and government agencies. The consumer wedge simply doesn't fit their org chart. They can see the 40 million SMB investors and homeowners who need property intelligence; they can't price to them, they can't sell to them, and their entire go-to-market infrastructure is pointed the other way.

Reason 3: Persona-aware prompts require product surface area. You cannot write a good investor_analysis prompt without a strong opinion about what makes a good investor analysis. That opinion has to come from a product — a web app, a dashboard, a real thing users interact with — that has been refining that opinion for months or years. We had readypermit.ai before we had the MCP server. We had the Buildability Score, the persona-specific landing pages, the Geo AI copilot, the 94 city-specific pages. The MCP is a thin adapter on top of all of that. ATTOM and Cotality don't have a consumer product. They'd have to build one before they could write the prompts.

This is why we're not worried about being copied. The copy would have to start with "build a consumer product" and work backward, which puts them about two years behind.


The Takeaway for Builders

If you are about to ship your first MCP server, five concrete things:

  1. Write your tool descriptions like SEO. USE WHEN clauses. Natural-language triggers. Output shape hints. If you can't semantically match a user's actual question to your tool description, the agent won't either.

  2. Add a persona parameter instead of splitting tools. Every time you're tempted to ship two tools that differ only in output framing, ship one tool with a persona input.

  3. Use MCP prompts, not just MCP tools. Prompts are reasoning templates. They're the load-bearing primitive for opinionated intelligence. Use them.

  4. Don't build a parallel stack. Your MCP server should be a thin adapter on top of the product you already have. If you don't have a product yet, build the product first.

  5. Route models by query, not by vendor loyalty. Use Claude Sonnet 4 for the queries where voice and reasoning matter. Use cheaper models for the rest. If you pick one model for everything, you're either overpaying or under-serving.


Try the ReadyPermit MCP server. Free tier, no credit card, ten calls per month. Paste the snippet into your Claude Desktop config and ask it what you can build on any U.S. address.

Get the MCP setup → readypermit.ai/api

