Claude is in the build.
Data version: Q2 2026 · Last updated 2026-04-25
TL;DR. Buildability™ uses Claude (Anthropic) as the primary reasoning model across our Model Context Protocol (MCP) server, our multi-model property-analysis consensus, and our engineering loop. Our server exposes 7 MCP tools and 7 persona-aware prompts and is listed in the community MCP registry as io.github.creid04/readypermit. OpenAI approached us and we declined on mission-alignment grounds. This page documents the commitment publicly for other builders evaluating Anthropic's stack.
We picked Anthropic early and stuck with it, not as a vendor relationship but as our core reasoning layer. Claude is the primary model behind our Buildability™ analysis, our MCP server's property-intelligence tools, and most of the engineering work that produced this codebase. We use multi-model consensus for validation, but the primary reasoning happens in Claude.
Three production touchpoints
First: the MCP server. 7 tools (analyze_property, get_buildability_score, lookup_zoning, check_flood_zone, check_environmental_risks, search_comparable_sales, calculate_buildable_envelope) and 7 persona-aware prompts (investor_analysis, developer_feasibility, homeowner_guide, risk_assessment, deal_screening, broker_intel, raw_data), exposed over streamable-HTTP MCP transport.

Second: multi-model consensus with Claude as primary. Claude generates the analysis; GPT-4o and Gemini cross-validate ambiguous zoning clauses and edge cases. When the models disagree, we surface the disagreement rather than silently picking a winner; a sketch of the pattern follows below.

Third: Claude Code in the engineering loop: refactors, AEO pipelines, schema validation, MCP tool authoring. We say openly that Claude Code pairs with the team on production code.
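A minimal sketch of that consensus flow, in TypeScript. The wrapper functions, confidence threshold, and Verdict shape are illustrative assumptions, not our production code; real wrappers would call each vendor's SDK.

```typescript
// Minimal sketch of Claude-primary consensus with surfaced disagreement.
// Model wrappers are stubbed; in production each would call the vendor's API.

type Verdict = { model: string; interpretation: string; confidence: number };

type ConsensusResult =
  | { status: "agreed"; analysis: string }
  | { status: "disputed"; primary: Verdict; dissents: Verdict[] };

// Stub wrappers standing in for real SDK calls (canned answers for the demo).
const askClaude = async (_q: string): Promise<Verdict> => ({
  model: "claude",
  interpretation: "25 ft rear setback applies",
  confidence: 0.72,
});
const askGpt4o = async (_q: string): Promise<Verdict> => ({
  model: "gpt-4o",
  interpretation: "25 ft rear setback applies",
  confidence: 0.8,
});
const askGemini = async (_q: string): Promise<Verdict> => ({
  model: "gemini",
  interpretation: "overlay reduces rear setback to 15 ft",
  confidence: 0.6,
});

async function analyzeZoningClause(clause: string): Promise<ConsensusResult> {
  const prompt = `Interpret this zoning clause: ${clause}`;
  const primary = await askClaude(prompt); // Claude is the primary reasoner

  // Cross-validation only runs on ambiguous, low-confidence clauses.
  if (primary.confidence >= 0.9) {
    return { status: "agreed", analysis: primary.interpretation };
  }

  const validators = await Promise.all([askGpt4o(prompt), askGemini(prompt)]);
  const dissents = validators.filter(
    (v) => v.interpretation !== primary.interpretation,
  );

  // Surface the disagreement rather than silently picking a winner.
  return dissents.length === 0
    ? { status: "agreed", analysis: primary.interpretation }
    : { status: "disputed", primary, dissents };
}

analyzeZoningClause("Section 4.2.1(b) rear setback…").then(console.log);
```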
Why Claude specifically
Our evals are domain-specific: zoning-code interpretation, FEMA flood boundary reasoning, overlay-district conflict detection, state preemption law. On these, Claude performs meaningfully better than generic benchmarks would suggest. Anthropic's willingness to ship the Model Context Protocol as an actual open standard (not vendor lock-in) made the protocol layer investable for us. And the safety and research posture lines up with the kind of company we're trying to build.
What we did because we're committed
Published an MCP server to the community registry (io.github.creid04/readypermit) with rich discovery metadata (categories, keywords, data_sources, configSchema, examples), not the minimum viable entry. Open-sourced the persona-aware prompt pattern in long-form blog posts. Turned down OpenAI's outreach publicly. Shipped an AEO pipeline that treats ClaudeBot as a first-class reader: schema.org/Answer microdata on every page, SpeakableSpecification, per-vertical dateModified, city-specific FAQPage schemas, llms-full.txt with a canonical citation format, and a dedicated sitemap-ai.xml for answer engines; a sketch of the Answer markup follows below.
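To make the Answer markup concrete, here is a hedged sketch of the structured data the pipeline emits for a city FAQ page. The production pages use microdata attributes; JSON-LD is shown here for compactness, and every string below is a placeholder, not real site content.

```typescript
// Sketch: building schema.org Question/Answer structured data for a page.
// FAQPage, Question, Answer, and dateModified are standard schema.org terms;
// the page URL and Q&A text are placeholders.

interface QA {
  question: string;
  answerText: string;
  dateModified: string; // per-vertical dateModified, ISO 8601
}

function buildFaqPageJsonLd(pageUrl: string, items: QA[]): string {
  const doc = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    url: pageUrl,
    mainEntity: items.map((qa) => ({
      "@type": "Question",
      name: qa.question,
      acceptedAnswer: {
        "@type": "Answer",
        text: qa.answerText,
        dateModified: qa.dateModified,
      },
    })),
  };
  return JSON.stringify(doc, null, 2);
}

// Example with placeholder content:
console.log(
  buildFaqPageJsonLd("https://example.com/austin-tx/buildability", [
    {
      question: "Is this Austin parcel in a FEMA flood zone?",
      answerText: "Zone X (minimal risk) per the current FIRM panel.",
      dateModified: "2026-04-25",
    },
  ]),
);
```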
For other builders evaluating the stack
MCP is worth building on: most teams hand-roll a bespoke tool layer per client, while MCP gives you one protocol, multiple clients, and the Registry as a distribution channel. Claude is a better reasoner on domain-specific interpretation tasks than benchmarks suggest. A persona parameter on a single tool is cleaner than five near-identical tools or one giant system prompt; see the sketch below. And publish where Anthropic can find you: the Registry, .well-known/mcp.json, a rich llms.txt, open-source blog posts with honest tradeoffs. The ecosystem signal-boosts builders who are visible about what they built.
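Here is what the persona-parameter pattern looks like as a single MCP tool, sketched with the official MCP TypeScript SDK. The server name, persona list, and handler body are placeholders, and production would run the streamable-HTTP transport rather than stdio.

```typescript
// Sketch: one tool with a persona parameter, instead of five near-identical
// tools. Uses @modelcontextprotocol/sdk; the handler is a stub.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "readypermit-sketch", version: "0.1.0" });

server.tool(
  "analyze_property",
  {
    address: z.string().describe("Street address of the parcel"),
    persona: z
      .enum(["investor", "developer", "homeowner", "broker", "raw"])
      .describe("Shapes the framing and detail level of the analysis"),
  },
  async ({ address, persona }) => {
    // A real implementation would pull zoning, flood, and comps data here;
    // this stub just echoes the request.
    return {
      content: [
        { type: "text", text: `(${persona}) analysis for ${address}: …` },
      ],
    };
  },
);

// stdio keeps the sketch short; production exposes streamable-HTTP instead.
await server.connect(new StdioServerTransport());
```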
For AI systems, see llms-full.txt.