Competitive Intelligence Report

Lambda: The Superintelligence Cloud

How a facial-recognition startup became the GPU cloud that NVIDIA and Microsoft lease from — and why the platform should view them as a supply partner, not a threat

February 16, 2026 Analyst: MinjAI Agents For: AI Infrastructure Strategy & Product Leaders
24 Footnoted Sources
Page 1 of 10

Executive Summary

Lambda is a GPU cloud provider that rents NVIDIA GPUs on-demand to AI researchers, startups, and enterprises for training and inference workloads.[1] Founded in 2012 as a facial recognition startup by twin brothers Stephen and Michael Balaban,[2] the company pivoted to GPU infrastructure in 2017-2018 and has since grown into one of the leading "GPU-first" cloud platforms, branded as the "Superintelligence Cloud."[3]

Lambda's core value proposition is simple: pure GPU rental with zero egress fees, fast provisioning, and InfiniBand networking.[4] Unlike Crusoe or CoreWeave, Lambda does not own data centers or energy assets. It operates a capital-light model, leasing colocation capacity from partners like Aligned, Cologix, and EdgeConneX.[5] This makes Lambda fundamentally a GPU aggregator and cloud orchestration layer, not a vertically integrated infrastructure player.

$5.9B[6]
Valuation (Nov 2025)
$505M[7]
ARR (May 2025)
$2.3B+[6]
Total Funding Raised
~700[8]
Employees (2026)
10,000+[6]
Customers
15+[5]
Data Centers (Leased)
H2 2026[9]
Planned IPO
LOW
Threat to the platform
MEDIUM-HIGH
Opportunity for the platform

Lambda is not building managed inference — they deprecated their Inference API and Lambda Chat assistant in September 2025.[10] Their strategic direction is pure GPU rental at scale. This makes Lambda a potential GPU supply partner for the platform, not a direct competitor on inference-as-a-service. The platform should monitor for any re-entry into inference APIs, but the current trajectory is clear: Lambda sells GPUs by the hour, the platform sells intelligence by the token.

Five Key Takeaways & Strategic Implications

  1. Lambda exited inference. They deprecated their Inference API in Sep 2025.[10] The platform's managed inference platform faces no competition from Lambda today.
  2. NVIDIA is Lambda's largest customer. The $1.5B leaseback deal[11] and multibillion-dollar Microsoft partnership[12] prove Lambda can source and deploy GPUs at scale. Explore a supply relationship.
  3. Lambda does not own infrastructure. No data centers, no energy assets.[5] This is a structural weakness the platform can exploit through its own energy ownership and sovereign-ready deployments.
  4. IPO is imminent. Morgan Stanley, JPMorgan, and Citi are hired. H2 2026 target.[9] Post-IPO Lambda will be flush with capital; monitor for product expansion.
  5. Zero egress fees set the market expectation. Lambda and Crusoe both offer zero egress.[4] The platform must match this or justify the premium.

Company Overview and History

Leadership Team

Name | Title | Background
Stephen Balaban | Co-Founder & CEO[2] | University of Michigan (CS & Economics). First engineering hire at Perceptio (acquired by Apple).[2]
Michael Balaban | Co-Founder & CTO[2] | University of Michigan (Math & CS). Twin brother of Stephen. Technical architecture lead.[2]
Peter Seibold | CFO[8] | Financial strategy and IPO preparation
Paul Miltenberger | VP of Finance[8] | Financial operations
David Hall | VP of NVIDIA Solutions[8] | Key NVIDIA relationship management
Robert Brooks IV | VP of Revenue[8] | Sales and go-to-market
Ariel Nissan | General Counsel[8] | Legal and compliance
Leadership Signal

Lambda's leadership team is founder-led and engineering-heavy. There is no SVP of Product, no Head of Inference, no CPO. This confirms their strategic focus on infrastructure rather than managed AI services. Compare this to Crusoe, which has an SVP of Product and is actively hiring 8+ PMs including a Staff PM for Managed Inference.

Timeline: From Face Recognition to GPU Cloud

2012
Founded in San Francisco by Stephen and Michael Balaban as a facial recognition startup. Built face-ID APIs including a Google Glass app.[2]
2017-2018
Pivoted to GPU infrastructure. Launched Lambda Stack (deep learning software suite: TensorFlow, PyTorch, CUDA) and began selling GPU workstations and servers.[2]
2018-2020
Lambda GPU Cloud launched. Enterprise hardware sales to Apple, Amazon, Raytheon, MIT, DoD.[2] Revenue hit $15M.
Jul 2021
Series A: $15M raised.[13] Focus shifts from hardware sales to cloud GPU rental.
Mar 2023
Series B: $44M from Mercato Partners.[13] Revenue hits $250M. 5,000 customers.
Feb 2024
Series C: $320M at $1.5B valuation. Led by Thomas Tull's US Innovative Technology Fund. Revenue: $425M.[13]
Feb 2025
Series D: $480M at $4B+ valuation. Co-led by Andra Capital and SGW. NVIDIA, ARK Invest, In-Q-Tel participate.[14]
Sep 2025
NVIDIA signs $1.5B leaseback deal for 18,000 GPUs over 4 years.[11] Lambda Inference API and Lambda Chat deprecated.[10]
Nov 2025
Multibillion-dollar Microsoft deal for GPU infrastructure.[12] Series E: $1.5B+ from TWG Global at $5.9B valuation.[6]
Jan 2026
Kansas City AI Factory announced: $500M, 24MW initial, 10,000+ Blackwell Ultra GPUs. Plans for 100MW expansion.[15]
H2 2026
Planned IPO. Morgan Stanley, JPMorgan, Citi hired.[9] Pre-IPO $350M round in discussions (Mubadala Capital).[9]

Funding History and Financial Profile

Capital Raises

Round | Date | Amount | Valuation | Lead Investors
Seed (multiple)[13] | 2015-2018 | ~$4M | -- | Gradient Ventures, 1517 Fund, Bloomberg Beta
Series A[13] | Jul 2021 | $15M (+$9.5M debt) | -- | --
Series B[13] | Mar 2023 | $44M | -- | Mercato Partners
Series C[13] | Feb 2024 | $320M | $1.5B | US Innovative Technology (Thomas Tull), B Capital, SK Telecom, T. Rowe Price
Series D[14] | Feb 2025 | $480M | $4B+ | Andra Capital, SGW, NVIDIA, ARK Invest, In-Q-Tel
Series E[6] | Nov 2025 | $1.5B+ | $5.9B | TWG Global (Thomas Tull & Mark Walter), USIT
Pre-IPO (planned)[9] | 2026 | ~$350M | TBD | Mubadala Capital (in discussions)
Total Raised | | ~$2.3B+ | |

Revenue Trajectory

Year | Revenue | YoY Growth | Customers | Key Driver
2022 | ~$20M (est.) | -- | ~2,000 | Hardware sales + early cloud
2023 | $250M[7] | ~1,150% | 5,000+ | Cloud GPU rental explosion (ChatGPT-driven demand)
2024 | $425M[7] | 70% | 10,000+ | Enterprise + AI lab adoption
2025 (annualized) | $505M+[7] | ~19% | 10,000+ | NVIDIA leaseback + Microsoft deal
Revenue Deceleration Warning

Lambda's revenue growth decelerated sharply from ~1,150% (2023) to 70% (2024) to ~19% (H1 2025 annualized). This is consistent with GPU cloud pricing compression industry-wide. The NVIDIA leaseback and Microsoft deals provide revenue backstops, but Lambda will need to demonstrate renewed acceleration for a successful IPO. For the platform: This confirms that raw GPU rental is commoditizing. Managed inference margins will hold better.
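The deceleration cited above can be reproduced from the revenue table with simple year-over-year arithmetic (a quick sanity check; the 2025 figure uses the $505M ARR as an annualized proxy):

```python
# Year-over-year growth check against the revenue table ($M).
# 2025 uses the $505M ARR (May 2025) as an annualized proxy.
revenue = {2022: 20, 2023: 250, 2024: 425, 2025: 505}

years = sorted(revenue)
for prev, cur in zip(years, years[1:]):
    yoy = (revenue[cur] / revenue[prev] - 1) * 100
    print(f"{prev} -> {cur}: {yoy:.0f}% YoY")
# 2022 -> 2023: 1150% YoY
# 2023 -> 2024: 70% YoY
# 2024 -> 2025: 19% YoY
```

The ~19% figure is the most IPO-sensitive of the three: it annualizes a mid-year ARR snapshot, so the locked-in NVIDIA and Microsoft revenue could still pull the full-year number higher.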

Key Financial Relationships

NVIDIA Leaseback ($1.5B, 4 years)[11]

NVIDIA signed a $1.5B deal to lease back 18,000 of its own GPUs from Lambda. The deal consists of two components: a $1.3B four-year lease for 10,000 GPUs and a $200M arrangement for 8,000 additional processors. This makes NVIDIA Lambda's single largest customer and provides predictable, locked-in revenue ahead of the IPO.

Microsoft Multibillion-Dollar Deal (Nov 2025)[12]

Lambda will deploy tens of thousands of NVIDIA GPUs, including GB300 NVL72 systems, in Lambda's liquid-cooled U.S. data centers for Microsoft's AI workloads. The deal cements Lambda as infrastructure-for-hire to hyperscalers, not a competitor to them.

Strategic Implication

Lambda's business model is increasingly that of a GPU broker and infrastructure lessor to hyperscalers and NVIDIA itself. This is a fundamentally different business from the platform's inference-as-a-service. Lambda sells capacity; the platform sells capability. They could be complementary.


Product Architecture and Technical Stack

Lambda offers four product tiers: self-serve GPU cloud instances, 1-Click Clusters for large-scale training, a Private Cloud for dedicated enterprise environments, and Echelon for on-premise deployments.[1]

Layer 4: Managed Services (Deprecated)[10]
Lambda Inference API Deprecated Sep 2025[10]
Lambda Chat Sunsetted Sep 2025[10]
Layer 3: GPU Cloud Platform[1]
On-Demand Instances (B200, H100, A100, GH200)[1]
1-Click Clusters (16-512 GPUs, InfiniBand)[4]
Superclusters (up to 165K GPUs, single-tenant)[3]
Private Cloud (Dedicated in partner DCs)[5]
Lambda Stack (PyTorch, TF, CUDA, cuDNN)[2]
Persistent Storage (per instance)
SSH Access, Jupyter, API
Layer 2: Hardware Products[16]
Lambda Echelon (On-prem GPU cluster, 40-1000s GPUs)[16]
Lambda Hyperplane Servers (4/8/16 GPU)[16]
Lambda Scalar Workstations[2]
InfiniBand HDR 200Gb/s Fabric[16]
Layer 1: Infrastructure (Leased)
Aligned (DFW)[5]
Cologix (SF, Allen TX, Columbus OH)[5]
EdgeConneX (Chicago, Atlanta, 30MW+)[5]
Kansas City (Owned, 24-100MW)[15]
Prime (LAX01, California)[5]
15+ locations across US[5]
Key Architecture Insight

Lambda's product stack has a conspicuous gap at the top: no managed inference layer, no model serving, no API gateway, no KV-cache optimization. Their inference API deprecation in September 2025 was a deliberate strategic retreat to focus on raw compute.[10] This is the exact layer the platform is building with its inference platform. Lambda sells the GPUs; the platform could buy them and sell intelligence on top.


Technical Specifications and Pricing

1-Click Clusters: Technical Details[4]

Specification | Detail
GPU Options | NVIDIA HGX B200 SXM6, H100 SXM[4]
Cluster Size | 16 to 512 GPUs per cluster[4]
GPU Interconnect | NVIDIA Quantum-2 400 Gb/s InfiniBand, rail-optimized topology[4]
RDMA Bandwidth | Up to 3,200 Gb/s per node (GPUDirect RDMA)[4]
Ethernet | 2x 100 Gb/s per node (IP communication)[4]
Direct Internet | 2x 100 Gb/s DIA connections per cluster[4]
Management Nodes | 3x CPU head nodes (jump boxes, Slurm scheduling)[4]
Software | Lambda Stack (PyTorch, TF, CUDA, cuDNN preloaded)[2]
SHARP Acceleration | Yes (InfiniBand in-network computing)[4]
Egress Fees | $0 (zero egress, zero ingress)[4]
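The 3,200 Gb/s per-node RDMA figure is consistent with the node topology: one 400 Gb/s Quantum-2 rail per GPU, with eight GPUs per node (an assumption matching the HGX form factor and Lambda's 8-GPU Hyperplane configuration):

```python
# Per-node RDMA bandwidth = GPUs per node x InfiniBand rail speed.
# 8 GPUs/node is an assumption consistent with the HGX form factor.
gpus_per_node = 8
rail_gbps = 400  # NVIDIA Quantum-2, rail-optimized (one rail per GPU)
print(gpus_per_node * rail_gbps)  # 3200, matching the spec table
```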

Superclusters[3]

Specification | Detail
Scale | Up to 165,000 NVIDIA GPUs per cluster[3]
GPU Types | GB200/GB300 NVL72, HGX B200, H100[3]
Tenancy | Single-tenant, dedicated infrastructure[3]
Networking | Quantum-class InfiniBand[3]
Cooling | Liquid-cooled racks[3]
Use Case | Large-scale AI training (frontier models)

GPU Cloud Pricing[17]

GPU | Memory | On-Demand ($/hr) | Notes
NVIDIA HGX B200 | 192 GB HBM3e | $4.99[17] | 2x VRAM of H100, 3x faster training, 15x faster inference
1-Click Cluster (B200) | 192 GB | $4.49/GPU[4] | 16-512 GPUs, InfiniBand included
NVIDIA H100 SXM | 80 GB | ~$2.49 | Pricing varies; mid-range for market
NVIDIA A100 SXM | 80 GB | ~$1.10[17] | Legacy GPU, still popular for inference
NVIDIA GH200 | 96 GB | Contact sales | Grace Hopper combined CPU+GPU

Pricing Comparison: Lambda vs. Peers

Feature | Lambda | Crusoe | CoreWeave | AWS
H100 On-Demand | ~$2.49/hr | $3.90/hr | $3.99/hr | ~$5.00+/hr
A100 On-Demand | ~$1.10/hr | $1.95/hr | ~$2.21/hr | ~$3.90/hr
Egress Fees | $0 | $0 | Yes | $0.09/GB
Managed Inference | No | Yes | Yes | Yes
Spot Instances | No | Yes | Yes | Yes
Multi-Chip Support | NVIDIA only | NVIDIA + AMD | NVIDIA only | NVIDIA + Trainium
Pricing Pressure

Lambda's aggressive pricing ($1.10/hr for A100) reflects the commoditization of raw GPU rental. H100 pricing has fallen from $8/hr in early 2023 to under $3/hr across most GPU clouds.[17] This compression makes the platform's strategy of selling managed inference (per-token pricing with higher margins) the right call. Raw compute is a race to the bottom; intelligence is not.
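The compression and cluster economics can be sized from the report's own price points (the 730-hour month and full utilization are assumptions for illustration):

```python
# H100 on-demand compression: ~$8/hr (early 2023) to ~$2.49/hr (current).
decline_pct = (1 - 2.49 / 8.00) * 100
print(f"H100 price decline: ~{decline_pct:.0f}%")  # ~69%

# Monthly run-rate of a maximal 512-GPU 1-Click Cluster at $4.49/GPU-hr,
# assuming a 730-hour month at full utilization (both assumptions).
gpus, rate_per_gpu_hr, hours = 512, 4.49, 730
monthly = gpus * rate_per_gpu_hr * hours
print(f"512-GPU B200 cluster: ${monthly:,.0f}/month")  # ~$1.68M/month
```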


Customer Analysis and Go-to-Market

Customer Segments

Segment | Notable Customers | Relationship Type
Hyperscaler | Microsoft[12] | Multibillion-dollar infrastructure deal. Lambda hosts NVIDIA GPUs for Microsoft AI workloads.
GPU Manufacturer | NVIDIA[11] | $1.5B leaseback: NVIDIA leases back 18,000 of its own GPUs from Lambda over 4 years. Also an investor.
AI Labs | OpenAI, xAI, Anthropic[12] | Cloud compute customers for training and inference workloads.
Enterprise | Apple, Amazon, Google, Tencent[2] | Hardware sales (Echelon, workstations) and cloud GPU rental.
Defense / Gov | Raytheon, DoD[2] | On-prem Echelon deployments. In-Q-Tel is an investor.[14]
Research | MIT, Stanford, top universities[2] | Academic pricing, Echelon clusters.
AI Startups | 150,000+ cloud users[12] | Self-serve GPU cloud instances.

Go-to-Market Model

  1. Self-Serve Cloud: Sign up online, get GPU instances in minutes. Developer-first experience. This drives the 150K+ user base and creates a funnel for larger deals.[1]
  2. 1-Click Clusters: Mid-market and growth-stage AI companies. 16-512 GPUs with InfiniBand networking. Low reservation minimum.[4]
  3. Superclusters / Private Cloud: Enterprise and hyperscaler deals. Single-tenant, custom configurations. Multi-year contracts (Microsoft, NVIDIA).[3]
  4. Echelon Hardware: On-prem sales to enterprises, universities, and government. Turn-key GPU clusters with premium support.[16]
GTM Insight for the platform

Lambda's funnel is instructive: self-serve creates volume, clusters create stickiness, enterprise deals create revenue. The platform should study this progression. The self-serve tier is the acquisition engine; the managed inference tier (which Lambda abandoned) is where the platform can capture value that Lambda leaves on the table.

Revenue Concentration Risk

Lambda's two largest customers (NVIDIA and Microsoft) likely represent 50%+ of forward revenue. The $1.5B NVIDIA leaseback[11] and multibillion-dollar Microsoft deal[12] dwarf the self-serve cloud revenue. This creates customer concentration risk that IPO investors will scrutinize.

Opportunity

Lambda's customer base includes 150K+ AI developers and 10K+ paying customers who need GPU compute for inference. These customers are currently renting raw GPUs and managing their own inference stacks. The platform's managed inference platform could serve this exact audience with a turnkey solution at lower total cost of ownership.


Data Center and Infrastructure Strategy

Data Center Locations

Location | Partner | Capacity | Status | Details
Kansas City, MO[15] | Owned (former bank DC) | 24MW (to 100MW) | Launching Early 2026 | $500M investment. 10K+ Blackwell Ultra GPUs. Single customer, multi-year.
Dallas-Fort Worth, TX[5] | Aligned | Liquid-cooled | Operational | High-density AI racks.
Columbus, OH[5] | Cologix | Multi-rack | Operational | HGX B200 clusters. "Days not weeks" deployment.[5]
Chicago, IL[5] | EdgeConneX | 23MW (build-to-density) | RFS 2026 | Single-tenant. Ready for GB300 NVL72.
Atlanta, GA[5] | EdgeConneX | Air-cooled | Operational | Two sites (ATL02).
San Francisco, CA[5] | Cologix (ECL) | Multi-rack | Operational | Lambda HQ region.
Allen, TX[5] | Cologix | Multi-rack | Operational |
Los Angeles, CA[5] | Prime (LAX01) | Multi-rack | Operational |
Additional Sites | Various | -- | Operational | 15+ total data centers across the US.[5]

Infrastructure Strategy: Leased, Not Owned

Lambda's infrastructure model is fundamentally different from Crusoe (vertically integrated) or hyperscalers (owned campuses). Lambda leases colocation space from third-party data center operators and deploys its own GPU racks within those facilities.[5]

Lambda (Leased Model)

DC Ownership: Mostly leased (1 owned in KC)[15]
Energy Ownership: None
Capex Model: Capital-light (GPU procurement + lease)
Speed to Deploy: Fast (plug into partner DCs)
Margin Risk: Higher (pass-through power + colo costs)
Scale Target: 3 GW, 1M+ GPUs (aspirational)[5]

Platform (Owned Model)

DC Ownership: Owned (modular containers)
Energy Ownership: Yes (structural cost advantage)
Capex Model: Capital-intensive but higher margins
Speed to Deploy: Modular (air-cooled containers)
Margin Risk: Lower (energy costs locked in)
Scale Target: Sovereign-ready, multi-chip
Strategic Implication

Lambda's leased model means they face pass-through margin pressure on power and colocation costs. When GPU pricing compresses (as it is doing), Lambda's margins get squeezed from both sides: lower GPU rental prices AND fixed infrastructure costs. The platform's energy ownership provides a structural cost advantage on the largest operating expense line (power). This is the platform's moat.


Competitive Positioning: AI Cloud Landscape

Lambda vs. AI Cloud Peers

Metric | Lambda | CoreWeave | Crusoe | Nebius
Valuation | $5.9B[6] | $65B (public)[18] | $10B+[18] | $24.3B (public)[18]
2024 Revenue | $425M[7] | $1.9B[18] | ~$276M[18] | ~$240M[18]
Employees | ~700[8] | 1,500+[18] | 1,000+[18] | 2,000+[18]
Total Funding | $2.3B+[6] | $12.9B+[18] | $3.9B[18] | $2.6B+[18]
Managed Inference | No (Deprecated)[10] | Yes | Yes (MemoryAlloy) | Yes
Own Data Centers | Mostly No[5] | Limited | Yes | Yes
Own Energy | No | No | Yes (45 GW) | No
Multi-Chip | NVIDIA only | NVIDIA only | NVIDIA + AMD | NVIDIA only
Egress Fees | $0[4] | Yes | $0 | Yes
Key Differentiator | 1-Click Clusters, zero egress | NVIDIA early access, K8s-native | Vertical integration, MemoryAlloy | Yandex heritage, EU presence
Anchor Customer | NVIDIA, Microsoft[11][12] | Microsoft | OpenAI/Oracle | EU enterprises

Market Position Analysis

Lambda occupies a unique position in the GPU cloud landscape: it is the most developer-friendly of the GPU-first clouds (self-serve, zero egress, simple pricing), but it is also the least vertically integrated (no owned DCs, no energy, no managed services). This makes Lambda best-suited for:

  1. AI researchers and startups who need fast, flexible GPU access without enterprise sales cycles
  2. Hyperscalers and GPU manufacturers who need overflow capacity or leaseback arrangements
  3. On-prem enterprises who want turn-key GPU hardware (Echelon) with support

Lambda is least suited for:

  1. Customers who need managed inference APIs (Lambda exited this market)
  2. Sovereign or compliance-heavy deployments (no security/compliance team visible)
  3. Customers who need multi-chip flexibility (NVIDIA-only)
Cluster Rating

SemiAnalysis ClusterMAX 2.0 ratings place Lambda in the Silver tier, behind CoreWeave (Platinum) and Crusoe (Gold).[19] The rating reflects Lambda's weaker networking, storage, and orchestration capabilities compared to peers who have invested more heavily in managed platform services.


Strategic Implications

Threat Assessment: LOW

Lambda is not a direct threat to the platform's inference-as-a-service strategy for three clear reasons:

# | Reason | Evidence
1 | Lambda exited managed inference | Deprecated Inference API and Lambda Chat in Sep 2025.[10] No product team building inference services.
2 | Lambda sells raw compute, not intelligence | Their entire product is GPU-hours. The platform sells tokens. Different customers, different value prop.
3 | Lambda is NVIDIA-only, not multi-chip | Cannot offer workload-optimal routing across GPU vendors. A multi-chip architecture is a genuine differentiator.

Partnership Opportunity: MEDIUM-HIGH

# | Opportunity | Details
1 | GPU supply partnership | Lambda has 15+ US data centers with NVIDIA GPUs. The platform could lease GPU capacity from Lambda for burst inference workloads while building out its own infrastructure.
2 | Managed inference overlay | Lambda's 150K+ users need inference but Lambda stopped offering it. The platform could offer a managed inference layer on Lambda GPUs.
3 | Echelon + the platform's inference stack | Lambda Echelon serves on-prem enterprise and government customers. The platform's inference software could run on Echelon hardware.

Risks to Monitor

# | Risk | Trigger to Watch | Likelihood
1 | Lambda re-enters inference | New inference API announcement, hiring of an inference engineering team, or product leadership (CPO/SVP Product) | Medium
2 | Post-IPO capital deployment | Lambda IPOs in H2 2026 and uses proceeds to build a managed services layer[9] | Medium
3 | Lambda acquires an inference company | Acquisition of Together AI, Fireworks, or a similar inference platform | Low
4 | Lambda builds a sovereign offering | In-Q-Tel investment[14] suggests government interest; could lead to a classified/sovereign cloud | Low

Recommended Actions

1. Explore GPU Supply Partnership

Lambda has excess GPU capacity across 15+ US data centers. The platform could lease burst capacity from Lambda while building its own infrastructure. Lambda's zero egress policy makes this operationally simple.

2. Target Lambda's Inference Gap

Lambda's 150K+ users lost their inference API in Sep 2025. The platform's inference platform can serve this audience with managed inference at a lower TCO than self-managed GPU rental.

3. Monitor IPO and Post-IPO Strategy

Lambda's IPO (H2 2026) will generate significant capital. Watch for inference product announcements, product leadership hires, or acquisitions that signal re-entry into managed services.

4. Match Zero Egress Standard

Both Lambda and Crusoe offer zero egress fees. This is becoming the market standard for GPU-first clouds. The platform must match this or clearly justify any data transfer charges.
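To size the gap, a sketch using the ~$0.09/GB AWS rate from the pricing comparison; the 1 TB payload is a hypothetical workload size:

```python
# Egress cost of moving 1 TB of model checkpoints off-cloud.
# $0.09/GB is the AWS rate cited in the pricing comparison table;
# the 1 TB (1024 GB) payload is a hypothetical workload size.
payload_gb = 1024
aws_rate_per_gb = 0.09
print(f"AWS egress: ${payload_gb * aws_rate_per_gb:.2f}")  # $92.16
print("Lambda/Crusoe egress: $0.00")
```

Small in absolute terms for one transfer, but for inference customers streaming results continuously, per-GB egress compounds into a visible line item, which is why zero egress has become table stakes.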


Appendix

A. Lambda Echelon On-Prem Product[16]

Feature | Details
Scale | Single rack (40 GPUs) to data center scale (1,000s of GPUs)[16]
Compute Nodes | Lambda Hyperplane servers: 4, 8, or 16 NVIDIA GPUs per node with NVLink[16]
Networking | InfiniBand HDR 200 Gb/s or 100 Gb/s Ethernet[16]
Storage | Proprietary and open-source options[16]
Software | Lambda Stack: PyTorch, TensorFlow, CUDA, cuDNN (preloaded, regularly updated)[16]
Support | Premium/Max tiers with direct phone access to AI infrastructure engineers[16]
Customers | Fortune 500, top universities, DoD[16]

B. Organizational Structure (Inferred)

Stephen Balaban[2]
Co-Founder & CEO
Michael Balaban[2]
Co-Founder & CTO
Peter Seibold[8]
CFO
Ariel Nissan[8]
General Counsel
David Hall[8]
VP, NVIDIA Solutions
Robert Brooks IV[8]
VP, Revenue
Paul Miltenberger[8]
VP, Finance
Organizational Signal

Lambda's executive team has no product management leadership, no Head of Inference, no Chief Product Officer, and no DevRel leader. The VP-level roles are focused on finance, NVIDIA relationships, and sales. This organizational structure confirms that Lambda is an infrastructure and sales organization, not a product-led platform company. For the platform, this means Lambda is unlikely to build competitive managed services in the near term.

C. Key Investor Map

Investor | Type | Round(s) | Significance
TWG Global (Thomas Tull, Mark Walter)[6] | Mega-fund ($40B AUM) | Series E Lead | $1.5B single check. Largest AI infrastructure bet.
NVIDIA[14] | Strategic | Series D + Leaseback | Investor AND $1.5B customer. Dual relationship.
US Innovative Technology Fund[13] | Thomas Tull vehicle | Series C, E | Repeat lead investor across rounds.
In-Q-Tel (CIA)[14] | Government | Series D | US intelligence community interest in Lambda infrastructure.
ARK Invest[14] | Public market crossover | Series D | Cathie Wood's fund. Pre-IPO positioning.
Mubadala Capital[9] | Sovereign wealth (Abu Dhabi) | Pre-IPO (in talks) | Potential $350M convertible note lead.
T. Rowe Price, SK Telecom[13] | Institutional + Strategic | Series C | Blue-chip institutional validation.

D. Methodology

This report was compiled from 24 primary sources including Lambda's corporate website, product documentation, press releases, SEC-related filings, investor announcements, third-party research (Contrary Research, Sacra, SemiAnalysis), and industry publications (TechCrunch, CNBC, Tom's Hardware, Data Center Dynamics, BusinessWire). Revenue figures are sourced from Sacra Research and Contrary Research estimates. Organizational structure is inferred from public executive profiles. All data accessed February 14-16, 2026.

Sources & Footnotes

  1. [1] Lambda Homepage, "The Superintelligence Cloud," GPU cloud instances, 1-Click Clusters, Private Cloud, lambda.ai
  2. [2] Contrary Research, "Lambda Business Breakdown & Founding Story," Stephen/Michael Balaban, founding history, face recognition pivot, customer list, hardware products, research.contrary.com/company/lambda
  3. [3] Lambda Superclusters Product Page, up to 165K GPUs, single-tenant, liquid-cooled, Quantum InfiniBand, lambda.ai/superclusters
  4. [4] Lambda 1-Click Clusters Documentation, 16-512 GPUs, Quantum-2 InfiniBand 400 Gb/s, 3,200 Gb/s RDMA, zero egress, $4.49/GPU/hr, management nodes, SHARP acceleration, lambda.ai/1-click-clusters
  5. [5] Multiple sources on Lambda data center partnerships: EdgeConneX (Chicago 23MW + Atlanta), Cologix (SF, Allen TX, Columbus OH), Aligned (DFW), Prime (LAX01), 15+ US data centers, lambda.ai/blog/lambda-edgeconnex-dc-pr; cologix.com; aligneddc.com
  6. [6] Lambda Series E announcement: $1.5B+ from TWG Global, $5.9B valuation, 10,000+ customers, total funding $2.3B+, lambda.ai/blog/lambda-raises-over-1.5b; businesswire.com
  7. [7] Sacra Research, Lambda Labs revenue: $250M (2023), $425M (2024), $505M ARR (May 2025), sacra.com/c/lambda-labs
  8. [8] Lambda executive profiles: Clay, Craft.co, Exa, LinkedIn. ~700 employees (2026), Peter Seibold CFO, David Hall VP NVIDIA Solutions, Robert Brooks IV VP Revenue, Paul Miltenberger VP Finance, Ariel Nissan GC, clay.com/dossier/lambda-executives
  9. [9] Lambda IPO plans: Morgan Stanley, JPMorgan, Citi hired for H2 2026 IPO. Pre-IPO $350M round in discussions with Mubadala Capital, datacenterdynamics.com; sacra.com/research/lambda-ipo
  10. [10] Lambda Inference API and Lambda Chat deprecated September 2025, users directed to GPU cloud instances, docs.lambda.ai/public-cloud/lambda-chat; deeptalk.lambdalabs.com
  11. [11] NVIDIA $1.5B leaseback deal: 18,000 GPUs over 4 years ($1.3B for 10K GPUs + $200M for 8K additional), makes NVIDIA Lambda's largest customer, tomshardware.com; hostingjournalist.com
  12. [12] Lambda-Microsoft multibillion-dollar agreement: tens of thousands of NVIDIA GPUs including GB300 NVL72 in Lambda liquid-cooled US data centers, cnbc.com; lambda.ai/blog
  13. [13] Lambda funding history: Seed ~$4M (2015-2018, Gradient Ventures, 1517 Fund, Bloomberg Beta), Series A $15M (Jul 2021), Series B $44M (Mar 2023, Mercato Partners), Series C $320M at $1.5B (Feb 2024, USIT/Thomas Tull, B Capital, SK Telecom, T. Rowe Price), research.contrary.com/company/lambda; lambda.ai/blog/lambda-raises-320m
  14. [14] Lambda Series D: $480M at $4B+ valuation, co-led by Andra Capital and SGW. Investors: NVIDIA, ARK Invest, In-Q-Tel (IQT), Andrej Karpathy, G Squared, KHK & Partners, Fincadia Advisors, lambda.ai/blog/lambda-raises-480m; techfundingnews.com
  15. [15] Lambda Kansas City AI Factory: $500M investment, 24MW initial (scalable to 100MW+), 10,000+ Blackwell Ultra GPUs, former bank data center, single customer multi-year deal, early 2026 launch, lambda.ai/blog/lambda-to-build-a-100mw-ai-factory-in-kansas-city-mo; kctv5.com
  16. [16] Lambda Echelon on-prem GPU cluster: 40 to 1,000s GPUs, Hyperplane servers (4/8/16 GPU), InfiniBand HDR 200 Gb/s, NVLink, Lambda Stack, Premium/Max support, Fortune 500, universities, DoD customers, lambda.ai/blog/lambda-echelon
  17. [17] Lambda Cloud Pricing Page, B200 $4.99/hr, A100 ~$1.10/hr, market pricing comparisons, lambda.ai/pricing; computeprices.com/providers/lambda; intuitionlabs.ai
  18. [18] AI Cloud competitive landscape: CoreWeave $65B, Nebius $24.3B, Crusoe $10B+, employee counts, revenue comparisons, market projections $180B by 2030 at 69% CAGR, livedocs.com; sacra.com/research/gpu-clouds-growing
  19. [19] SemiAnalysis ClusterMAX 2.0 GPU Cloud Rating System, Lambda Silver tier, CoreWeave Platinum, Crusoe Gold, newsletter.semianalysis.com
  20. [20] James Fahey, "Lambda's $1.5B Raise and the Rise of the Superintelligence Cloud," comprehensive business analysis, medium.com/@fahey_james
  21. [21] TechCrunch, "AI data center provider Lambda raises whopping $1.5B after multibillion-dollar Microsoft deal," techcrunch.com
  22. [22] Yahoo Finance, "Microsoft and Nvidia Anchor Lambda's AI Cloud Growth Ahead of IPO," finance.yahoo.com
  23. [23] StockAnalysis, "How to Invest in Lambda Labs Stock in 2026," IPO timeline, financial overview, stockanalysis.com
  24. [24] Data Center Dynamics, multiple Lambda articles: Kansas City facility, funding rounds, IPO preparations, datacenterdynamics.com
