Competitive Intelligence Report

OpenRouter: The AI Model Aggregator

How a marketplace-native routing layer is capturing inference demand at scale, and why the platform should list endpoints on it

February 16, 2026 · Analyst: MinjAI Agents · For: AI Infrastructure Strategy & Product Leaders
20 Footnoted Sources
Page 1 of 10

Executive Summary

OpenRouter is the largest independent AI model aggregator, providing a single unified API that routes developer requests across 500+ large language models from 60+ providers.[1] Founded in early 2023 by Alex Atallah, former CTO and co-founder of OpenSea, the company has grown from zero to over $100M in annualized inference spend flowing through its platform in under two years.[2] OpenRouter does not own GPUs or run inference infrastructure. It is a pure routing and aggregation layer that takes a 5-5.5% fee on pass-through inference spend.[3]

Backed by a16z, Menlo Ventures, and Sequoia, OpenRouter was valued at $500M after its Series A in April 2025.[4] Its partnership with a16z produced the landmark "State of AI" report analyzing 100 trillion tokens of real-world LLM usage, which revealed that inference demand is shifting decisively toward code (50%+ of paid tokens) and reasoning models (50%+ of all tokens).[5]

| Metric | Value |
|---|---|
| Valuation (Apr 2025) | $500M[4] |
| Total Funding Raised | $40M[4] |
| Annualized GMV (May 2025) | $100M+[2] |
| Annualized Revenue (May 2025) | ~$5M[6] |
| Developers Served | 5M+[5] |
| Models Available | 500+[1] |
| Tokens Processed (Cumulative) | 100T+[5] |
| GMV Growth (7 Months) | 10x[2] |
Strategic Implications — Distribution Channel Opportunity

OpenRouter is not a competitor to the platform; it is a potential distribution channel. OpenRouter does not own compute; it routes requests to inference providers. The platform can list its inference endpoints on OpenRouter to tap into demand from 5M+ developers without building its own go-to-market motion from scratch. The a16z 100T-token study confirms that inference demand is shifting toward code and reasoning workloads, exactly where the platform's low-latency, multi-chip architecture can differentiate.

Five Action Items

  1. Register as an OpenRouter provider. List the platform's inference endpoints on OpenRouter's marketplace. The provider integration docs are public.[7] This gives immediate access to millions of developers.
  2. Optimize for code + reasoning workloads. The a16z study shows programming exceeds 50% of paid-model traffic.[5] The platform should benchmark and optimize for these token-heavy, latency-sensitive patterns.
  3. Compete on price-performance to win routing share. OpenRouter routes by default to the cheapest provider.[8] The platform's 30-50% cost advantage over hyperscalers would make it the preferred route.
  4. Track OpenRouter analytics for market intelligence. OpenRouter publishes model usage trends and token volumes. Use this data to inform which models to host and which workloads to target.
  5. Evaluate enterprise co-selling. OpenRouter is building enterprise features (EU routing, org policies, volume discounts).[9] The platform could be a preferred sovereign/on-prem provider for enterprise deals.

Company Overview

Founding Story

OpenRouter was founded in February/March 2023 by Alex Atallah, immediately following the release of Meta's LLaMA and Stanford's Alpaca models, which demonstrated that competitive AI models could be built outside major labs.[10] Atallah had previously co-founded OpenSea in 2018 with Devin Finzer and served as CTO during its meteoric rise to a $13.3B valuation. He stepped down in July 2022 to "build something from zero to one."[10]

Atallah's thesis: if training a large AI model costs as little as $600, the future will have tens of thousands or hundreds of thousands of models — and they will all need their own marketplace.[10] OpenRouter is that marketplace. He brought marketplace-building DNA directly from OpenSea, applying the same aggregation playbook to AI inference that OpenSea applied to NFTs.

Leadership

| Name | Title | Background |
|---|---|---|
| Alex Atallah | CEO, Co-Founder[10] | Co-founded OpenSea (CTO), Stanford CS, built NFT marketplace to $13.3B valuation |
| Louis Vichy | Co-Founder[11] | Technical co-founder |

The company operates with a lean team. As of mid-2025, OpenRouter has fewer than 25 employees,[11] reflecting its asset-light business model. OpenRouter generates over $100M in GMV with minimal headcount, a hallmark of marketplace efficiency.

Timeline: From OpenSea to OpenRouter

  • Feb 2023: Founded after Meta's LLaMA release. Built a unified API for open-source models.[10]
  • 2023-2024: Bootstrapped growth. Expanded the model catalog to 300+ models from 60+ providers. Reached ~$10M annualized GMV by October 2024.[2]
  • Feb 2025: Closed a $12.5M seed round led by Andreessen Horowitz.[4]
  • Apr 2025: Closed a $28M Series A led by Menlo Ventures at a $500M valuation. Sequoia, Figma, and Fred Ehrsam participated.[4]
  • May 2025: Monthly GMV hit $8M ($100M+ annualized), 10x growth from Oct 2024.[2]
  • Jun 2025: Announced $40M in combined funding. Stripe partnership for global payments.[12] Collaborated with OpenAI on the stealth GPT-4.1 launch.[13]
  • Dec 2025: Published the "State of AI" report with a16z, analyzing 100T+ tokens. Processing 1T+ tokens per day.[5]
  • Feb 2026: 5M+ developers served. 500+ models available. GPT-5 and Gemini 2.5 Flash live on OpenRouter.[13]

Funding History

| Round | Date | Amount | Valuation | Lead Investors |
|---|---|---|---|---|
| Seed[4] | Feb 2025 | $12.5M | Undisclosed | Andreessen Horowitz |
| Series A[4] | Apr 2025 | $28M | $500M | Menlo Ventures |
| Total | | $40M | | a16z, Menlo, Sequoia, Figma |
Investor Signal

a16z, Menlo Ventures, and Sequoia all invested in OpenRouter. These are the same firms backing frontier AI labs (Anthropic, OpenAI). Their bet on OpenRouter signals conviction that the aggregation/routing layer will be a durable, high-margin business as inference becomes commoditized. For the platform, this validates the multi-model, multi-provider future its inference platform is designed for.


Business Model Deep Dive

The Aggregator Flywheel

OpenRouter operates a classic two-sided marketplace. On the demand side: developers and enterprises who need inference. On the supply side: model providers and inference platforms who serve tokens. OpenRouter sits in the middle, routing requests and taking a cut.

Developers (5M+ users) → OpenRouter (routing + billing) → Inference providers (60+ providers)

The platform's opportunity: list as a provider.

Revenue Model

| Revenue Stream | Mechanism | Rate |
|---|---|---|
| Platform Fee (Primary) | Percentage of inference spend flowing through the API | 5.5% on card purchases; 5.0% on crypto[3] |
| BYOK Fee | Usage fee when developers bring their own provider API keys | 5% (transitioning to a monthly subscription)[3] |
| Enterprise Plans | Custom pricing with volume discounts, annual commits, invoicing | Negotiated[9] |

Financial Trajectory

| Metric | Oct 2024 | May 2025 | Growth |
|---|---|---|---|
| Monthly GMV (Inference Spend) | ~$800K | $8M | 10x in 7 months |
| Annualized GMV | ~$10M | $100M+ | 10x |
| Annualized Revenue (est.) | ~$1M | ~$5M | 5x[6] |
| Daily Token Volume | -- | 1T+ (Dec 2025) | -- |
Unit Economics Note

OpenRouter's ~5% take rate on $100M GMV produces approximately $5M in annual revenue.[6] With fewer than 25 employees and no GPU capex, the company likely operates at or near breakeven with strong gross margins (>80%). This is the economics of a pure marketplace, not an infrastructure company. For comparison, the platform's model requires heavy capex but captures much larger revenue per token served.
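The arithmetic behind this note, as a quick sanity check (GMV and take-rate figures are from the report; the side-by-side comparison is illustrative only):

```python
# Marketplace economics from the report: ~5% blended take rate on pass-through GMV.
gmv = 100_000_000       # annualized inference spend routed through OpenRouter
take_rate = 0.05
aggregator_revenue = gmv * take_rate   # what the marketplace books as revenue

# An infrastructure provider serving the same spend books the full token
# price as revenue, but carries GPU capex and utilization risk.
provider_revenue = gmv

print(f"aggregator: ${aggregator_revenue:,.0f} | provider: ${provider_revenue:,.0f}")
# aggregator: $5,000,000 | provider: $100,000,000
```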


Technical Architecture

Platform Stack

Layer 4: Developer Experience
Unified API (OpenAI-compatible)
Model Explorer (500+ models)
Web Dashboard & Analytics
SDKs & Framework Integrations
BYOK (Bring Your Own Key)
Layer 3: Routing & Intelligence
Smart Router (price/latency/uptime optimization)[8]
Auto-Failover (cross-provider fallback)
Load Balancing (weighted by inverse price squared)
Regional Routing (EU in-region)
Provider Health Monitoring
Rate Limit Management
Layer 2: Billing & Identity
Stripe Integration (Global payments)[12]
Prepaid Credit System
Crypto Payments (5% fee)
Enterprise Invoicing
Org-Level Policies & Keys
Usage Analytics & Attribution
Layer 1: Provider Network (60+ Providers)
OpenAI (GPT-4o, GPT-5)
Anthropic (Claude 4, Sonnet)
Google (Gemini 2.5)
Meta (Llama 3.x, 4)
Mistral
DeepSeek
Together AI
Fireworks AI
+ 50 more providers

Routing Logic

When a developer sends a request, OpenRouter's routing engine decides which provider to forward it to. The default behavior:[8]

  1. Filter: Identify all providers hosting the requested model
  2. Score: Weight providers by inverse square of price (cheapest gets most traffic), adjusted for uptime and latency
  3. Route: Send request to the highest-scoring available provider
  4. Fallback: If the primary provider fails or is rate-limited, automatically retry with the next-best provider
  5. Bill: Charge the developer at the successful provider's rate + platform fee
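The filter/score/route steps above can be sketched as a weighted sampler. This is an illustrative reconstruction of inverse-price-squared routing, not OpenRouter's actual implementation; the provider names, prices, and uptimes are hypothetical:

```python
import random

# Hypothetical provider table for one model. Prices are illustrative
# (per 1M input tokens); uptime is successful / total requests.
providers = [
    {"name": "provider_a", "price": 0.30, "uptime": 0.999},
    {"name": "provider_b", "price": 0.54, "uptime": 0.995},
]

def routing_weights(providers):
    """Weight each provider by inverse price squared, scaled by uptime,
    then normalize so the weights sum to 1."""
    raw = [p["uptime"] / p["price"] ** 2 for p in providers]
    total = sum(raw)
    return [w / total for w in raw]

def pick_provider(providers, rng=random.random):
    """Sample one provider according to its routing weight; the caller
    retries with the next-best provider on failure (fallback, step 4)."""
    shares = routing_weights(providers)
    r, acc = rng(), 0.0
    for p, share in zip(providers, shares):
        acc += share
        if r <= acc:
            return p["name"]
    return providers[-1]["name"]  # guard against floating-point rounding

shares = routing_weights(providers)
# At $0.30 vs $0.54, the cheaper provider draws roughly three quarters of traffic.
print({p["name"]: round(s, 2) for p, s in zip(providers, shares)})
```

With these numbers the $0.30 provider gets ~76% of requests, which is why undercutting the incumbent price matters more than matching it.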
Strategic Implication: Win Routing Share Through Price

Because OpenRouter defaults to routing by lowest price, the platform's stated 30-50% cost advantage over hyperscalers would make it the default routing target for any model it serves. If the platform can serve Llama 3.3 70B at $0.30 per 1M input tokens while Together AI charges $0.54, OpenRouter would automatically route the majority of Llama 3.3 70B traffic to the platform. This is the single most important insight for the platform's distribution strategy.

Key Technical Features

| Feature | Description | Strategic Relevance |
|---|---|---|
| Streaming (SSE) | Server-sent events for real-time token delivery[1] | Platform endpoints must support SSE |
| Multimodal | Text, images, PDFs via the same API | Plan for multimodal inference |
| Provider Health | Uptime = successful / total requests[7] | The platform's high-availability target aligns well |
| Model Metadata | Pricing, quantization, context length exposed[7] | The platform must publish accurate specs |
| BYOK | 1M free requests/month for BYO-key users[14] | Developers could route BYOK traffic to the platform |
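As a concrete example of the streaming requirement: responses arrive as server-sent events, where each `data:` line carries a JSON delta in the OpenAI-compatible chat-completions shape and the stream ends with `data: [DONE]`. A minimal parser over a constructed sample (not a captured response):

```python
import json

def parse_sse_stream(lines):
    """Yield content deltas from an OpenAI-compatible SSE stream."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue                      # skip comments / keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":          # stream terminator
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            yield delta

# Constructed sample stream for illustration.
sample = [
    'data: {"choices": [{"delta": {"content": "Hello"}}]}',
    'data: {"choices": [{"delta": {"content": ", world"}}]}',
    "data: [DONE]",
]
print("".join(parse_sse_stream(sample)))  # Hello, world
```

In production the lines would come from an HTTP response body rather than a list, but the parsing contract is the same one a provider endpoint must emit.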

The a16z 100 Trillion Token Study

In December 2025, OpenRouter and a16z published the "State of AI" report, the largest empirical study of real-world LLM usage ever conducted, analyzing over 100 trillion tokens from billions of interactions across the OpenRouter platform.[5] This study is critical market intelligence for the platform.

Key Findings

1. Programming Dominates Paid Inference

Programming surged from 11% of total token volume in early 2025 to over 50% by November 2025.[5] Developer prompts routinely exceed 20,000 input tokens for tasks like code generation, debugging, and full-stack scripting. Claude owns approximately 60% of coding workloads.

Opportunity

Code-generation workloads are token-heavy and latency-sensitive, with average prompts exceeding 20K tokens. This is precisely the workload profile where the platform's ultra-low-latency target creates measurable value. If the platform can serve code-optimized models (DeepSeek Coder, CodeLlama, StarCoder) at competitive latency, it captures the fastest-growing segment.

2. Reasoning Models Now Dominate

Reasoning-optimized models climbed from negligible usage in Q1 2025 to over 50% of all tokens by late 2025.[5] Users increasingly prefer models that can manage task state, follow multi-step logic, and support agent-style workflows.

3. Agentic Inference is the Fastest-Growing Pattern

Developers are building workflows where models act in extended sequences: planning, retrieving context, revising outputs, iterating until task completion.[5] This shifts demand from single-shot completions to sustained, multi-turn sessions requiring consistent low latency.

4. Open-Source Models Growing Fast

Open-weight models reached 33% of total usage by late 2025. Chinese open-source models (DeepSeek, Qwen, Kimi) averaged 13% weekly volume after growing from 1.2%.[5]

Token Volume Distribution (Late 2025)

| Use Case | Share | Trend |
|---|---|---|
| Programming / Code | >50% of paid tokens | Rapidly growing, 20K+ avg input tokens |
| Reasoning / Agentic | >50% of all tokens | From near-zero in Q1 2025 |
| Roleplay / Creative | >50% of open-source usage | Stable, dominates free tier |
| General Chat | Declining share | Giving way to specialized tasks |
Strategic Takeaway for the inference platform

The 100T-token study confirms three things for the platform: (1) inference demand is real and accelerating, not speculative; (2) the highest-value workloads are code and reasoning, which require low latency and high throughput; (3) open-source models are capturing meaningful share, which means the platform can serve popular models without licensing barriers. The platform should anchor its model catalog around code-optimized and reasoning models.


Pricing and Economics Analysis

OpenRouter Pricing Structure

OpenRouter passes through the underlying provider's per-token price and adds its platform fee on top. Developers see the provider's price on the model catalog and pay the same rate (plus the platform fee at checkout).[3]

| Pricing Component | Rate | Notes |
|---|---|---|
| Model Price (Input Tokens) | Varies by model/provider | Pass-through, no markup |
| Model Price (Output Tokens) | Varies by model/provider | Pass-through, no markup |
| Platform Fee (Card) | 5.5% of credit purchase | Min $0.80 per purchase[3] |
| Platform Fee (Crypto) | 5.0% flat | No minimum[3] |
| BYOK Fee | 5% of usage | 1M free requests/month[14] |
| Enterprise | Custom | Volume discounts, annual commits, invoicing[9] |
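A worked example of how this fee structure lands on a bill, using rates from the table above (the token volumes and the single-purchase assumption are hypothetical):

```python
def credit_purchase_fee(amount, method="card"):
    """Platform fee on a credit purchase: 5.5% with a $0.80 minimum by card,
    flat 5.0% by crypto (rates from the pricing table)."""
    if method == "card":
        return max(0.055 * amount, 0.80)
    return 0.05 * amount

def inference_cost(input_tokens, output_tokens, in_price, out_price):
    """Pass-through model cost; prices are quoted per 1M tokens."""
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# Hypothetical month: 500M input / 100M output tokens on DeepSeek V3 ($0.14 / $0.28).
spend = inference_cost(500_000_000, 100_000_000, 0.14, 0.28)
# Assumes the credits covering that spend are bought by card in one purchase.
fee = credit_purchase_fee(spend)
print(f"model spend ${spend:.2f} + platform fee ${fee:.2f}")
```

At ~$98 of model spend the card fee is about $5.39; very small top-ups hit the $0.80 minimum instead, which raises the effective rate for hobbyist volumes.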

Sample Model Pricing on OpenRouter

| Model | Input (per 1M tokens) | Output (per 1M tokens) | Context Window |
|---|---|---|---|
| GPT-4o | $2.50 | $10.00 | 128K |
| Claude 3.5 Sonnet | $3.00 | $15.00 | 200K |
| Llama 3.3 70B (Together) | $0.54 | $0.54 | 128K |
| DeepSeek V3 | $0.14 | $0.28 | 128K |
| Mistral Large | $2.00 | $6.00 | 128K |
| Gemini 2.0 Flash | $0.10 | $0.40 | 1M |

Economics: Marketplace vs. Infrastructure

OpenRouter (Marketplace)

  • Capex: Near zero (no GPUs)
  • Gross Margin: ~80-90% (pure software)
  • Revenue per Token: ~5% of pass-through
  • Scaling: Linear with volume
  • Risk: Provider dependency
  • Moat: Network effects, data

Platform (Infrastructure Provider)

  • Capex: Heavy (GPUs, DCs, chips)
  • Gross Margin: Target >40%
  • Revenue per Token: Full token price
  • Scaling: Requires capacity buildout
  • Risk: Utilization, demand gen
  • Moat: Cost structure, latency
Complementary, Not Competitive

OpenRouter captures ~5% of each token's price; the platform captures 100% of the inference price. These are complementary positions in the value chain, not competitive ones. OpenRouter needs cheap, reliable providers to route to; the platform needs demand. A partnership creates value for both sides. The risk for the platform is commoditization: if OpenRouter drives all routing decisions, providers compete purely on price. The platform must maintain differentiation through latency, availability, and sovereign deployment options.


Competitive Positioning

Where OpenRouter Sits in the Stack

The AI inference value chain has four layers. OpenRouter operates at the routing/aggregation layer, which sits between developers and inference infrastructure providers like the platform.

| Layer | Players | Value Captured |
|---|---|---|
| Application Layer | End-user apps (ChatGPT, Cursor, Claude.ai) | Subscription revenue |
| Routing/Aggregation | OpenRouter, Portkey, Martian, LiteLLM | ~5% platform fee |
| Inference Provider | Together AI, Fireworks AI, Groq, the platform (target) | Per-token pricing |
| Hardware/Cloud | NVIDIA, AWS, GCP, CoreWeave, Crusoe | GPU-hour pricing |

OpenRouter vs. Competitors

| Feature | OpenRouter | Portkey | Martian | LiteLLM (OSS) |
|---|---|---|---|---|
| Model Count | 500+ | 1,600+ (via connections) | Limited | 100+ |
| Developer Base | 5M+ | Undisclosed | Undisclosed | OSS community |
| Business Model | % of spend | SaaS subscription | Usage-based | Free / self-host |
| Routing Intelligence | Price/latency/uptime | Observability-first | Cost optimization | Basic fallback |
| Enterprise Features | Growing | Strong | Moderate | DIY |
| Funding | $40M (a16z, Menlo) | $23M | $32M | $19M |
| Key Differentiator | Scale + data | Observability | Smart routing | Open source |

OpenRouter vs. Direct Inference Providers

| Dimension | OpenRouter | Together AI | Fireworks AI | Groq |
|---|---|---|---|---|
| Category | Aggregator | Provider | Provider | Provider |
| Owns GPUs | No | Yes | Yes | Yes (LPU) |
| Model Selection | 500+ | ~100 | ~50 | ~20 |
| Pricing Power | Pass-through | Sets own prices | Sets own prices | Sets own prices |
| Relationship | Routes TO providers | Competes on price | Competes on price | Competes on speed |
| Threat to the platform | Low (partner) | Medium | Medium | High (speed) |
Threat Assessment: MEDIUM (Distribution Opportunity)

OpenRouter's threat level to the platform is MEDIUM, but the nature of the threat is unusual. OpenRouter does not compete for inference workloads directly. The risk is that OpenRouter commoditizes the provider layer by making it trivially easy for developers to switch between providers. The opportunity is that OpenRouter solves the platform's demand-generation problem by providing instant access to millions of developers. Net assessment: partnership value significantly exceeds threat value.


Strategic Implications

The Distribution Channel Thesis

The platform's biggest challenge is not technology but demand generation. Building a developer-facing inference platform from zero requires years of go-to-market effort, developer relations, and brand building. OpenRouter offers an immediate shortcut.

Platform inference (H100/H200/alternative silicon) → OpenRouter (provider integration) → 5M+ developers (instant distribution)

Why This Works for the platform

  1. Zero customer acquisition cost. OpenRouter has already aggregated 5M+ developers. The platform pays nothing to reach them; it just needs to be the cheapest or fastest provider for the models it hosts.
  2. Price-based routing favors the platform. OpenRouter's default routing algorithm prioritizes the cheapest provider.[8] The platform's 30-50% cost advantage over hyperscalers means it would naturally capture the majority of routed traffic for any model it serves.
  3. Multi-chip architecture is a differentiator. The platform serves inference across NVIDIA H100/H200 and alternative silicon. This hardware diversity allows the platform to optimize for different model architectures and offer price points that single-GPU providers cannot match.
  4. Sovereign deployment creates premium tier. OpenRouter supports EU in-region routing for enterprise.[9] The platform's sovereign-ready infrastructure can serve as the preferred provider for compliance-sensitive workloads.
  5. Market intelligence for free. OpenRouter's public model usage data and the a16z 100T token study provide the platform with real-time market intelligence on which models are trending, which workloads are growing, and where to invest.

Implementation Roadmap

| Phase | Action | Timeline | Expected Outcome |
|---|---|---|---|
| 1 | Register as OpenRouter provider; implement the models endpoint per their spec[7] | Weeks 1-2 | Listed on the OpenRouter marketplace |
| 2 | Launch 3 popular open-source models (Llama 3.3, DeepSeek V3, Mistral) at below-market pricing | Weeks 3-4 | Begin receiving routed traffic |
| 3 | Optimize uptime to 99.9%+ and monitor OpenRouter's provider health dashboard | Ongoing | Increase routing share |
| 4 | Add code-optimized models (DeepSeek Coder, StarCoder) based on a16z study insights | Month 2 | Capture highest-growth segment |
| 5 | Negotiate an enterprise co-selling arrangement for sovereign/EU compliance deals | Month 3 | Premium revenue stream |
Risk: Commoditization Trap

The primary risk of relying on OpenRouter for distribution is that it commoditizes inference providers. If the platform is just another row in a pricing table, the only differentiator is cost. To avoid this trap, the platform should: (1) maintain direct enterprise relationships in parallel, (2) differentiate on latency and availability, not just price, (3) build sovereign/compliance capabilities that aggregators cannot easily replicate, and (4) use OpenRouter as a demand-generation channel, not the primary go-to-market strategy.


Market Context: The Inference Economy

Market Size

The global AI inference market is projected to grow from $106B in 2025 to $255B by 2030, with a CAGR of 19.2%.[15] Inference workloads will account for roughly two-thirds of all compute by 2026, up from one-third in 2023.[16] The market for inference-optimized chips alone will exceed $50B in 2026.

Inference Value Chain Economics

| Layer | Example Players | Est. Market Size (2026) | Margin Profile |
|---|---|---|---|
| Aggregation/Routing | OpenRouter, Portkey | $1-5B (est.) | 80-90% gross margin |
| Inference-as-a-Service | The platform, Together AI, Fireworks | $20-40B (est.) | 40-60% gross margin |
| GPU Cloud / IaaS | CoreWeave, Crusoe, Lambda | $50-80B (est.) | 30-50% gross margin |
| Hardware / Chips | NVIDIA, AMD, alternative silicon | $50B+ (est.) | 60-70% gross margin |

Key Market Trends Relevant to the platform

1. Multi-Model is the Default

The a16z study shows no single model dominates all workloads.[5] Enterprises use 4+ models on average. This validates the platform's multi-model, multi-chip strategy and makes aggregators like OpenRouter structurally important.

2. Agentic Workloads Drive Token Volume

Agentic inference (multi-step, tool-using, iterative) is the fastest-growing pattern on OpenRouter.[5] These workloads generate 10-100x more tokens per session than simple chat. For the platform, this means higher revenue per customer and greater importance of sustained low latency.

3. Open-Source Models Enable Provider Arbitrage

Open-weight models (Llama, DeepSeek, Mistral, Qwen) reached 33% of usage.[5] Since anyone can host these models, competition is purely on cost and performance. This is where the platform's hardware cost advantage creates the largest wedge.

4. Enterprise Requires Compliance

OpenRouter added EU in-region routing for enterprise customers.[9] The platform's sovereign-ready infrastructure positions it to be the preferred backbone for compliance-sensitive workloads routed through aggregators.

Market Intelligence Advantage

OpenRouter's public data (model usage trends, pricing, token volumes) is effectively free market research for the platform. Recommendation: set up automated tracking of OpenRouter's model catalog, pricing changes, and new model additions. Use this data to inform which models to host and at what price points.
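A minimal sketch of that automated tracking, assuming catalog snapshots have already been fetched and normalized to `{id, input_price}` records (the field names, model ids, and prices below are illustrative; the real catalog endpoint's schema may differ):

```python
def diff_catalog(old, new):
    """Compare two catalog snapshots keyed by model id; report new
    listings and input-price changes between snapshots."""
    old_by_id = {m["id"]: m for m in old}
    added = [m["id"] for m in new if m["id"] not in old_by_id]
    repriced = [
        (m["id"], old_by_id[m["id"]]["input_price"], m["input_price"])
        for m in new
        if m["id"] in old_by_id
        and m["input_price"] != old_by_id[m["id"]]["input_price"]
    ]
    return {"added": added, "repriced": repriced}

# Illustrative snapshots (prices per 1M input tokens).
yesterday = [{"id": "llama-3.3-70b", "input_price": 0.54}]
today = [
    {"id": "llama-3.3-70b", "input_price": 0.30},   # a cheaper provider took the route
    {"id": "deepseek-v3", "input_price": 0.14},     # new listing
]
print(diff_catalog(yesterday, today))
```

Run on a schedule, the output feeds directly into the hosting and pricing decisions the recommendation describes.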


Conclusions and Strategic Recommendations

Summary Assessment

| Dimension | Assessment |
|---|---|
| Threat Level | MEDIUM: commoditization risk, not direct competition |
| Opportunity Level | HIGH: distribution channel for 5M+ developers |
| Strategic Posture | Partner: list as provider, not compete as aggregator |
| Time Sensitivity | High: first-mover advantage for routing share |
| Investment Required | Low: API integration only, no new infrastructure |

Top 5 Recommendations

  1. List the platform as an OpenRouter provider within 30 days.

    Implement the provider integration spec.[7] Start with 3 popular open-source models. Target below-market pricing to win default routing. This is the highest-ROI distribution move available to the platform today.

  2. Build for code + reasoning workloads first.

    The a16z 100T study shows these account for 50%+ of paid inference.[5] Optimize the platform's serving stack for large-context, token-heavy code-generation patterns. Benchmark TTFT and throughput on DeepSeek Coder, Llama, and StarCoder.

  3. Use OpenRouter data as a demand signal.

    Track model popularity, pricing trends, and provider availability on OpenRouter's public catalog. Let this data drive the platform's model-hosting decisions rather than guessing at market demand.

  4. Differentiate on sovereign compliance, not just price.

    OpenRouter's enterprise tier includes EU routing.[9] The platform should position as the preferred sovereign inference backbone for enterprise deals that require on-prem or regulated-region deployment. This creates pricing power beyond commodity routing.

  5. Maintain direct enterprise GTM in parallel.

    OpenRouter is a channel, not a strategy. The platform must build direct enterprise relationships (the design-partner strategy) independently. Use OpenRouter for volume/developer demand and direct sales for enterprise/margin.

Bottom Line

OpenRouter is the Stripe of AI inference: a thin, high-margin routing layer that does not compete with infrastructure providers but makes them accessible to millions of developers. For the platform, it represents the fastest path to demand generation for the inference platform. Listing as an OpenRouter provider is low-cost, low-risk, and directly addresses the platform's biggest challenge: getting inference endpoints in front of paying developers. The a16z 100T-token study validates that the workloads the platform is optimized for (code, reasoning, agentic) are the ones growing fastest. Act now.

Sources & Footnotes

  [1] OpenRouter. "About OpenRouter — The Unified Interface for LLMs." openrouter.ai/about
  [2] Sacra Research. "OpenRouter at $100M GMV." Jul 2025. sacra.com/research/openrouter-100m-gmv
  [3] OpenRouter. "Pricing." openrouter.ai/pricing
  [4] GlobeNewsWire. "OpenRouter raises $40 million to scale up multi-model inference for enterprise." Jun 2025. globenewswire.com
  [5] Andreessen Horowitz & OpenRouter. "State of AI: An Empirical 100 Trillion Token Study." Dec 2025. a16z.com/state-of-ai
  [6] Sacra Research. "OpenRouter revenue, growth, and valuation." sacra.com/c/openrouter
  [7] OpenRouter. "Provider Integration: Add Your AI Models to OpenRouter." openrouter.ai/docs/guides/guides/for-providers
  [8] OpenRouter. "Provider Routing: Intelligent Multi-Provider Request Routing." openrouter.ai/docs/guides/routing/provider-selection
  [9] OpenRouter. "Pricing — Enterprise Plans." openrouter.ai/pricing
  [10] The Block. "OpenSea co-founder Alex Atallah raises $40 million for AI startup OpenRouter." Jun 2025. theblock.co
  [11] Tracxn. "OpenRouter — 2025 Company Profile, Team, Funding & Competitors." tracxn.com
  [12] Stripe. "Stripe powers OpenRouter's global AI model access for millions of developers." stripe.com/newsroom
  [13] OpenRouter. "Announcements and Blog." openrouter.ai/announcements
  [14] OpenRouter. "1 million free BYOK requests per month." openrouter.ai/announcements
  [15] MarketsandMarkets. "AI Inference Market Size, Share & Growth, 2025 to 2030." marketsandmarkets.com
  [16] Deloitte. "Why AI's next phase will likely demand more computational power, not less." deloitte.com
  [17] Puter Developer Encyclopedia. "OpenRouter." developer.puter.com
  [18] Andreessen Horowitz. "Investing in OpenRouter." a16z.com/announcement/investing-in-openrouter
  [19] Skywork AI. "OpenRouter Review 2025: API Gateway, Latency & Pricing Compared." skywork.ai
  [20] arXiv. "State of AI: An Empirical 100 Trillion Token Study with OpenRouter." arxiv.org