Competitive Intelligence Report

Crusoe IaaS Strategy Analysis

How an energy-first AI cloud is building a full AI infrastructure platform — and what it means for the platform

February 16, 2026 | Analyst: MinjAI Agents | For: AI Infrastructure Strategy & Product Leaders
32 Footnoted Sources

Executive Summary

Crusoe is a vertically integrated "AI cloud"[1] that owns energy assets, builds data centers, manufactures its own equipment,[2] and sells GPU compute and managed AI inference as cloud services.[3] Founded in 2018 as a Bitcoin mining operation using stranded natural gas,[4] the company completed its full pivot to AI infrastructure by March 2025 when it divested its entire Bitcoin division to NYDIG.[5]

Valuation (Oct 2025): $10B+[6]
Total Funding Raised: $3.9B[5]
2025 Revenue (Projected): ~$1B[7]
Employees (Dec 2025): 1,000+[8]
DC Capacity Online: 3.4 GW[6]
Energy Pipeline: 45+ GW[6]
Open Positions: 100+[9]
YoY Revenue Growth: 262%[7]
Strategic Implications

Crusoe is 18-24 months ahead of the platform in cloud platform maturity and managed inference. They have shipped a full IaaS product suite,[3] built proprietary inference technology (MemoryAlloy),[10] and secured a $12B OpenAI data center contract.[5] Their hiring reveals they are now building enterprise storage,[11] security/compliance certifications,[12] and pricing optimization.[13] The platform should treat Crusoe as the primary competitive benchmark for its IaaS strategy.

Five Action Items

  1. Accelerate managed inference launch. Crusoe proved the market.[14] A multi-chip architecture is a genuine differentiator. Ship it.
  2. Study MemoryAlloy architecture. Their distributed KV-cache achieves 9.9x TTFT improvement.[10] Evaluate build vs. partner.
  3. Lead with compliance. Crusoe is hiring their first security PM now.[12] The platform can get ahead on SOC 2, HIPAA, FedRAMP.
  4. Build the product team. Crusoe has an SVP of Product,[15] multiple GPMs, Staff PMs.[9] The platform needs equivalent leadership.
  5. Consider a clean break from BTC positioning. Crusoe's valuation went from $2.8B to $10B+ in 7 months after divesting Bitcoin.[6][5]

Company Overview and Evolution

Leadership Team

Name | Title | Background
Chase Lochmiller | CEO, Co-Founder, Chairman[15] | MIT (math/physics), Stanford (CS/AI), Jump Trading[4]
Cully Cavness | President, CSO, Co-Founder[15] | Middlebury, Oxford MBA, energy investment banking[4]
Michael Gordon | COO & CFO[15] | Led MongoDB's 2017 IPO[16]
Nitin Perumbeti | CTO[15] | Technology leadership
Erwan Menard | SVP, Product Management[15] | Product strategy leadership
Nadav Eiron | SVP, Cloud Engineering[15] | Cloud platform engineering[17]
Chris Dolan | Chief Data Center Officer[15] | DC operations
Nick Sammut | SVP, Strategic Finance & Corp Dev[15] | Capital formation, M&A

Timeline: From Bitcoin Mining to AI Cloud

2018
Founded in Denver.[4] Deployed modular data centers at oil field sites for Bitcoin mining using flared natural gas.[18]
2019-2022
Scaled Digital Flare Mitigation (DFM) business. 400+ modular units deployed globally.[5] Acquired Easter-Owens (electrical manufacturing).[5]
2023
Strategic pivot to AI infrastructure. Secured $200M GPU procurement loan.[5] Began building cloud platform.
Mar 2025
Raised $600M Series D at $2.8B.[5] Divested entire Bitcoin/DFM division to NYDIG.[5] Launched Crusoe Cloud, Managed Inference, AutoClusters at NVIDIA GTC.[17]
Jun 2025
Launched Crusoe Spark modular AI data center product for edge deployments.[19]
Sep 2025
Abilene campus Phase 1 live (1.2 GW, first two buildings, 980K sq ft).[5] Built for OpenAI ($12B project).[5]
Oct 2025
Raised $1.375B Series E at $10B+ valuation.[6] 137 investors including NVIDIA, Founders Fund, Fidelity, Mubadala.[6]
Feb 2026
Energy Vault partnership for Spark deployment.[20] Starcloud partnership for space-based data center (launch late 2026).[21]

Funding History

Round | Date | Amount | Valuation | Lead Investors
Series A[5] | 2019 | $70M | -- | Valor Equity
Series B[5] | 2021 | $350M | -- | G2 Venture Partners
Series C[5] | 2022 | $505M | $2B+ | --
GPU Loan[5] | Late 2023 | $200M | -- | --
Series D[5] | Mar 2025 | $600M | $2.8B | Founders Fund
Series E[6] | Oct 2025 | $1.375B | $10B+ | Mubadala, Valor Equity
Total[5] | -- | ~$3.9B | -- | --

Product Architecture and Technical Stack

Crusoe has built a complete IaaS and PaaS offering.[3] Below is the full product stack from managed services down to physical infrastructure.

Layer 4: Managed AI Services[14]
Managed Inference (MemoryAlloy engine)[10]
Intelligence Foundry (Model catalog + API portal)[22]
Provisioned Throughput[23]
Batch API (coming soon)[22]
Layer 3: Platform Services[17]
Managed Kubernetes (CMK)[24]
AutoClusters (Fault-tolerant orchestration)[17]
Slurm Orchestration[17]
Container Registry[3]
NVIDIA Run:ai Integration[24]
Cluster Observability (NVIDIA DCGM)[17]
Console, CLI, APIs, Terraform, SDKs[3]
Layer 2: Core IaaS (Compute, Storage, Networking)[23]
GPU Instances (NVIDIA GB200, B200, H200, H100, A100, L40S, A40)[23]
GPU Instances (AMD MI355X, MI300X)[23]
CPU Instances (General + Storage-optimized)[23]
Block Storage (Persistent Disks)[23] (building)[11]
File Storage (Shared Disks)[23] (building)[11]
Object Storage (building)[11]
VPC Networking[3]
RDMA Networking[3]
Global Backbone (NA + Europe)[3]
Topology-Aware GPU Placement[3]
Layer 1: Physical Infrastructure
Hyperscale Campuses (Abilene 1.2 GW[6], Wyoming 1.8 GW[6])
Crusoe Spark (Modular AI Factory, 400+ units)[19]
In-House Manufacturing (Easter-Owens)[5]
Power: Gas, Solar, Wind, Hydro, Geothermal[5]
Norway, Iceland DCs[5]

GPU Pricing (On-Demand and Spot)[23]

GPU | Memory | On-Demand | Spot | Notes
NVIDIA GB200 NVL72 | 186 GB | Contact Sales | Contact Sales | Latest generation
NVIDIA B200 HGX | 180 GB | Contact Sales | Contact Sales | Blackwell
NVIDIA H200 HGX | 141 GB | $4.29/hr | Contact Sales | --
NVIDIA H100 HGX | 80 GB | $3.90/hr | $1.60/hr | 59% spot discount
AMD MI300X | 192 GB | $3.45/hr | $0.95/hr | 72% spot discount
NVIDIA A100 SXM | 80 GB | $1.95/hr | $1.30/hr | --
AMD MI355X | 288 GB | Contact Sales | Contact Sales | Coming Fall 2025
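
A quick sanity check of the table's spot-discount notes, plus a rough monthly cost for an 8-GPU H100 node. This is plain arithmetic on the report's own per-hour figures, assumed to be per GPU-hour; the per-minute billing noted below only changes rounding.

```python
# Sanity check on the spot-discount notes in the GPU pricing table above,
# plus a rough monthly cost for an 8-GPU H100 node. Rates are the per-hour
# figures quoted in this report, assumed to be per GPU-hour.
RATES = {  # gpu: (on_demand_per_gpu_hr, spot_per_gpu_hr)
    "H100 HGX": (3.90, 1.60),
    "MI300X": (3.45, 0.95),
    "A100 SXM": (1.95, 1.30),
}

for gpu, (on_demand, spot) in RATES.items():
    print(f"{gpu}: spot discount {1 - spot / on_demand:.0%}")  # 59%, 72%, 33%

gpus, hours = 8, 30 * 24  # one 8-GPU node for a 30-day month
print(f"8x H100, on-demand: ${gpus * hours * 3.90:,.0f}/month")  # $22,464
print(f"8x H100, spot:      ${gpus * hours * 1.60:,.0f}/month")  # $9,216
```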
Key Pricing Differentiators
  • No data transfer charges (ingress or egress)[23] — major advantage vs. hyperscalers
  • Per-minute billing, no upfront setup fees[23]
  • Spot instances up to 90% off hyperscaler on-demand pricing[25]
  • 99.98% uptime with automatic node swapping[3]

Managed Inference Deep Dive

Crusoe Managed Inference is a fully managed, API-driven inference service.[14] Customers call an OpenAI-compatible API endpoint. No infrastructure management required. The key technical differentiator is MemoryAlloy, their proprietary distributed KV-cache fabric.[10]
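As a concrete illustration of the "OpenAI-compatible endpoint, no infrastructure" claim, the sketch below calls such an endpoint with the standard openai Python client. The base_url is a placeholder rather than Crusoe's documented endpoint, and the model identifier is an assumption drawn from the supported-model table later in this section.

```python
# Minimal sketch of calling an OpenAI-compatible managed inference endpoint
# with the standard openai Python client. The base_url is a placeholder, not
# Crusoe's documented endpoint; the model identifier is also an assumption.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-inference-provider.com/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",  # assumed identifier for Llama 3.3 70B Instruct
    messages=[{"role": "user", "content": "Summarize the benefits of a shared, cluster-wide KV cache."}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```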

How MemoryAlloy Works

Architecture Overview

MemoryAlloy decouples KV-cache data from individual GPU processes and exposes them as shared cluster resources.[10] Each node runs a Unified Memory service connected via peer-to-peer discovery, forming a full mesh network.[10] Written in Rust with Python bindings and custom CUDA/ROCm kernels.[10]

Core Technical Components

  1. Cluster-Wide Cache: Instead of each GPU maintaining an isolated KV cache, MemoryAlloy creates a shared memory pool across all cluster nodes. An 8-node H100 cluster provides a multi-terabyte unified KV pool (roughly 8x the per-node capacity) vs. 640 GB-1.4 TB isolated per node.[10]
  2. Multi-Rail Data Movement: Distributes transfers across PCIe lanes, NVLink, and network adapters in parallel. Achieves 80-130 GB/s per GPU (vs. ~46 GB/s single link). Aggregate: 250+ GB/s for 8-GPU transfers.[10]
  3. KV-Aware Gateway: Routes requests to the node that already holds relevant prefix-cache data. Estimates the prefill cost of each request and picks the engine that can deliver the earliest first token (see the simplified routing sketch after this list).[10]
  4. Shadow Pools & Send Graph: Pre-allocated GPU memory staging. DAG-based pipelined data movement. Eliminates NIC registration overhead.[10]
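
To make the KV-aware gateway idea concrete, here is a deliberately simplified routing sketch, not Crusoe's implementation: each candidate node advertises how much of the request's prefix it already holds in cache and how much prefill work is queued, and the gateway picks the node with the lowest estimated time-to-first-token. The Node fields, names, and throughput figures are illustrative assumptions.

```python
# Illustrative KV-aware routing sketch (not Crusoe's implementation):
# prefer the node whose cached prefix leaves the least prefill work.
from dataclasses import dataclass


@dataclass
class Node:
    name: str
    cached_prefix_tokens: int    # tokens of this request's prefix already in the node's KV cache
    queued_prefill_tokens: int   # prefill work already waiting on this node
    prefill_tokens_per_sec: float


def estimate_ttft(node: Node, prompt_tokens: int) -> float:
    """Rough time-to-first-token: outstanding prefill work divided by prefill throughput."""
    remaining = max(prompt_tokens - node.cached_prefix_tokens, 0)
    return (node.queued_prefill_tokens + remaining) / node.prefill_tokens_per_sec


def route(nodes: list[Node], prompt_tokens: int) -> Node:
    """KV-aware choice: the node with the lowest estimated TTFT wins the request."""
    return min(nodes, key=lambda n: estimate_ttft(n, prompt_tokens))


if __name__ == "__main__":
    cluster = [
        Node("node-a", cached_prefix_tokens=90_000, queued_prefill_tokens=5_000,
             prefill_tokens_per_sec=40_000.0),
        Node("node-b", cached_prefix_tokens=0, queued_prefill_tokens=0,
             prefill_tokens_per_sec=40_000.0),
    ]
    best = route(cluster, prompt_tokens=110_000)
    print(best.name, f"{estimate_ttft(best, 110_000):.2f}s estimated to first token")
```

In MemoryAlloy the same decision is informed by the cluster-wide cache index and the multi-rail transfer costs described above; the sketch only captures the "route to the cheapest prefill" intuition.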

Performance Claims (Self-Reported)[10][22]

Metric | Improvement | Benchmark Context
Time-to-First-Token (TTFT) | 9.9x faster vs. vLLM[22] | Llama-3.3-70B, multi-node
Throughput (tokens/sec) | 5x higher[14] | Production workloads
Local Cache Hit TTFT | 38x faster[10] | 110K-token prompts
Remote Cache Hit TTFT | 34x faster[10] | Near-local performance
Chat Session TTFT | Sub-150ms[10] | 4-node, Llama-3.3-70B
Multi-Node Scaling | Near-linear[10] | Validated 1-8 nodes
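
Because these figures are self-reported, a buyer can re-measure TTFT independently against any OpenAI-compatible streaming endpoint. A minimal sketch, assuming a placeholder endpoint and model name, that records wall-clock time from request submission to the first streamed content chunk:

```python
# Hedged sketch for independently measuring time-to-first-token (TTFT)
# against any OpenAI-compatible streaming endpoint. The endpoint URL and
# model identifier are placeholders, not vendor-confirmed values.
import time

from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-inference-provider.com/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",
)


def measure_ttft(model: str, prompt: str) -> float:
    """Wall-clock seconds from request submission to the first streamed content chunk."""
    start = time.perf_counter()
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
        max_tokens=64,
    )
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            return time.perf_counter() - start
    return float("nan")


if __name__ == "__main__":
    ttft = measure_ttft("meta-llama/Llama-3.3-70B-Instruct", "Say hello in one word.")
    print(f"TTFT: {ttft:.3f}s")
```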

Supported Models and Pricing[22][23]

Model | Input ($/1M tokens) | Output ($/1M tokens) | Cached ($/1M tokens) | Max Context
Llama 3.3 70B Instruct | $0.25 | $0.75 | $0.13 | 131K
DeepSeek V3 0324 | $0.50 | $1.50 | $0.25 | 164K
DeepSeek R1 0528 | $1.35 | $5.40 | $0.68 | 164K
Qwen3 235B A22B | $0.22 | $0.80 | $0.11 | 262K
Kimi-K2 Thinking | $0.60 | $2.50 | $0.30 | 131K
GPT-OSS 120B | $0.15 | $0.60 | $0.08 | 131K
Gemma 3 12B | $0.08 | $0.30 | $0.04 | 131K
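
A worked example of how these rates compose for a single request, assuming the Cached column is the per-million price applied to input tokens served from prefix cache (a reasonable interpretation, not confirmed by the source). Model names and rates come from the table above.

```python
# Worked example of the managed-inference rates above. Assumes the "Cached"
# column is the per-million price applied to input tokens served from the
# prefix cache (an interpretation, not confirmed by the source).
PRICES = {  # model: ($/1M input, $/1M output, $/1M cached input)
    "Llama 3.3 70B Instruct": (0.25, 0.75, 0.13),
    "DeepSeek R1 0528": (1.35, 5.40, 0.68),
}


def request_cost(model: str, input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Dollar cost of one request, splitting input tokens into fresh vs. cached."""
    inp, out, cached = PRICES[model]
    fresh = input_tokens - cached_tokens
    return (fresh * inp + cached_tokens * cached + output_tokens * out) / 1_000_000


# 100K-token prompt, 80K of it hitting the prefix cache, 2K generated tokens.
print(f"${request_cost('Llama 3.3 70B Instruct', 100_000, 2_000, cached_tokens=80_000):.4f} per request")
```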

Crusoe Spark: The Edge and Modular Play

Crusoe Spark is a turnkey, prefabricated modular AI data center.[19] Self-contained: power, cooling, fire suppression, monitoring, GPU racks.[19] Delivered in as little as 3 months.[19]

Hyperscale Campus

Scale: 1+ GW[6]
Timeline: 12-24 months[5]
Use Case: Training, large-scale inference
Examples: Abilene (1.2 GW)[6], Wyoming (1.8 GW)[6]

Crusoe Spark (Modular)

Scale: 1-25 MW per site[20]
Timeline: 3 months[19]
Use Case: Edge inference, capacity expansion
Deployed: 400+ units globally[19]

Target Markets[19]

Recent Partnerships

Partner | Date | Details
Energy Vault | Feb 2026 | Framework agreement for phased Spark deployment in Snyder, TX. Scalable to 25 MW.[20]
Redwood Materials | 2025 | Joint solar/battery-powered Spark deployment[5]
Starcloud | Feb 2026 | Crusoe Cloud on satellite. Launch late 2026. First cloud operator in space.[21]
Tallgrass | 2025 | 1.8 GW campus in Wyoming, scalable to 10 GW[5]
Strategic Relevance

This is almost exactly what the platform's modular container infrastructure could deliver. Crusoe has a head start with 400+ deployed units,[19] but the platform's modular infrastructure approach is architecturally similar. The key difference: Crusoe has already wrapped its modular infrastructure in a cloud platform and managed inference service.[3]


Hiring Analysis: Size, Shape, and Signals

FTE (Dec 2025): 1,000+[8]
Contractors (Abilene): 5,000+[5]
Open Roles: 100+[9]
YoY Headcount Growth: ~73%[26]

Open Positions by Department[9]

Department | Est. Open Roles | Signal
Digital Infrastructure (Construction/Ops) | 30-40 | Massive physical buildout continues
Cloud Engineering | 10-15 | Platform scaling
Product & Design | 8-10 | Product expansion phase
Strategic Finance & Corp Dev | 7+ | IPO prep[16]
Manufacturing | 5-8 | In-house hardware production
Procurement & Sourcing | 5+ | Supply chain scaling
IT, Compliance, Security | 3-5 | Enterprise readiness
Marketing & GTM | 3-5 | Customer acquisition ramp
Power Infrastructure | 3-5 | Energy portfolio expansion

Product & Design Open Roles (Detailed)

Role | Location | Salary | What It Tells Us
Staff PM, Managed Inference[27] | SF / NYC | $204K-$247K + RSU[27] | Inference-as-a-Service is the flagship product. Senior IC owning the full lifecycle.
Group PM, Storage (x2)[11][28] | SF + Denver | $206K-$282K + RSU[11][28] | Building Block, File, Object storage. Two GPM hires = highest priority.
Group PM, Security & Compliance[12] | SF | $237K-$288K + RSU[12] | First dedicated security PM.[12] SOC 2 Type II, ISO 27001, HIPAA, FedRAMP roadmap.[12]
PM, Pricing / Cloud Economics (x2)[13][29] | SF + Denver | $150K-$209K + RSU[13][29] | Pricing engine, margin optimization, deal desk tooling.[13]
Senior DevRel Manager[30] | SF | $160K-$190K + RSU[30] | Developer community (PyTorch, TensorFlow, JAX).[30] Developer-first GTM.
Five Signals from Hiring Patterns
  1. Storage is the next major product area. Two GPM hires for Block/File/Object[11] = building AWS EBS/S3/EFS equivalent.
  2. Security/Compliance is gating enterprise deals. First dedicated PM.[12] Need SOC 2, HIPAA, FedRAMP.[12]
  3. Pricing is a strategic weapon. Two dedicated PMs building sophisticated models.[13]
  4. Inference is the flagship. Staff PM at $204K-$247K.[27] Not a side project.
  5. DevRel signals developer-first GTM. Winning open-source ML community is the strategy.[30]

Organizational Structure and Product Org Map

Based on leadership team data[15] and job descriptions,[9] here is the inferred organizational structure. Roles marked "Open Role" below are currently being hired.

Chase Lochmiller[15] | CEO & Co-Founder
Cully Cavness[15] | President & CSO
Michael Gordon[15] | COO & CFO
Nitin Perumbeti[15] | CTO
Chris Dolan[15] | Chief DC Officer
Jamey Seely[15] | CLO
Erwan Menard[15] | SVP, Product Management
Nadav Eiron[15] | SVP, Cloud Engineering
Nick Sammut[15] | SVP, Strategic Finance
Jamie McGrath[15] | SVP, DC Operations
John Adams[15] | SVP, Power Infrastructure
PRODUCT & DESIGN TEAM (REPORTING TO ERWAN MENARD, SVP PRODUCT)[15]
Open Role[27] | Staff PM, Managed Inference | $204K-$247K
Open Role (x2)[11][28] | Group PM, Storage | $206K-$282K
Open Role[12] | Group PM, Security & Compliance | $237K-$288K
Open Role (x2)[13][29] | PM, Pricing & Cloud Economics | $150K-$209K
Open Role[30] | Sr DevRel Manager | $160K-$190K
CLOUD ENGINEERING (REPORTING TO NADAV EIRON, SVP CLOUD ENG)[15]
Kyle Sosnowski[15] | VP Engineering, Cloud
Tamanna Sait[15] | VP Engineering, Cloud
Omer Landau[15] | VP Engineering
Jay Maloney[15] | VP Sales, Cloud

Competitive Positioning: AI Cloud Landscape

AI clouds are purpose-built GPU cloud providers that compete with hyperscalers on price and AI specialization.[31] The market is projected to hit $180B by 2030 at a 69% CAGR.[32]

Crusoe vs. AI Cloud Peers

Metric | CoreWeave | Crusoe | Lambda Labs | Nebius
H1 2025 Revenue | $2.1B[32] | ~$500M (est.)[7] | $250M+[32] | $156M[32]
Valuation | $65B (public)[31] | $10B+[6] | $2.5B[31] | $24.3B (public)[31]
Employees | 1,500+[31] | 1,000+[8] | 500+[31] | 2,000+[31]
Key Differentiator | NVIDIA early access[31] | Vertical integration[5] | 1-Click Clusters[31] | Yandex heritage[31]
Managed Inference | Yes | Yes (MemoryAlloy)[10] | No | Yes
Own Data Centers | Limited | Yes[5] | No | Yes
Own Energy | No | Yes (45 GW)[6] | No | No
Manufacturing | No | Yes[5] | No | No
Anchor Customer | Microsoft | OpenAI/Oracle[5] | AI startups | EU enterprises

Crusoe's Unique Position

Crusoe is the only AI cloud that is fully vertically integrated from energy production through managed AI services:[5]

  1. Structural cost advantage: Owns the power (a significant portion of inference cost)[5]
  2. Speed: In-house manufacturing cuts vendor lead times from 100 weeks to 22 weeks[5]
  3. Edge capability: Crusoe Spark enables rapid distributed deployments[19]
  4. Sustainability: Clean energy positioning wins ESG-conscious enterprise buyers[18]

Crusoe vs. Inference Platform: Head-to-Head

Crusoe

Origin: BTC mining (stranded gas)[4]
AI Pivot: 2023 (full exit Mar 2025)[5]
Cloud Platform: Live[3]
Managed Inference: Live (MemoryAlloy)[14]
Chip Partners: NVIDIA (Preferred[5]), AMD[23]
DC Scale: 3.4 GW, 9.8M sq ft[5]
Revenue: ~$1B (2025)[7]
Product Team: SVP + 8+ PMs hiring[15][9]

Platform

Origin: BTC mining
AI Pivot: 2024-2025 (in progress)
Cloud Platform: In development
Managed Inference: In development
Chip Partners: Multiple GPU/accelerator vendors
DC Scale: Smaller footprint
Revenue: Primarily BTC mining
Product Team: Building
The Platform's Potential Advantages
  • Multi-chip architecture with multiple GPU/accelerator vendors enables workload-optimal routing. Crusoe is NVIDIA + AMD only.[23]
  • Dedicated/sovereign environments for compliance-heavy verticals. Crusoe is just starting to hire for security/compliance.[12]
  • Cost discipline: The platform's energy ownership enables structurally lower cost of compute. Crusoe has ~$300M/year interest expense.[5]

Strategic Implications

What Crusoe Got Right (Lessons)

# | Decision | Impact
1 | Divested Bitcoin completely (Mar 2025)[5] | Clear signal to investors, customers, talent. Valuation: $2.8B[5] to $10B+[6] in 7 months.
2 | Built managed services, not just raw compute[17] | Higher margins, stickier customers. Managed Inference[14] + AutoClusters[17] + Managed K8s.[24]
3 | Invested in proprietary technology (MemoryAlloy)[10] | 9.9x TTFT improvement.[22] Real engineering moat. Rust-based, custom CUDA kernels.[10]
4 | Hired product leadership early[15] | SVP Product,[15] SVP Cloud Eng,[15] multiple GPMs/Staff PMs[9] before shipping.
5 | Leveraged existing hardware capability[5] | Easter-Owens acquisition cut vendor lead times from 100 weeks to 22 weeks.[5]

Crusoe's Vulnerabilities (Opportunities for the platform)

# | Vulnerability | Opportunity
1 | Narrow GPU vendor base (NVIDIA + AMD only)[23] | A multi-chip architecture offers workload-optimal routing
2 | Heavy debt load (~$300M annual interest)[5] | The platform can target sustainable margins from day one
3 | GPU pricing compression ($8/hr to $2/hr historically)[5] | Multi-chip flexibility hedges against single-vendor price erosion
4 | No sovereign/dedicated focus (hiring first security PM now)[12] | The platform can lead on compliance-ready, physically isolated inference
5 | Not yet profitable (rapid growth + massive capex)[5] | The platform's energy-first cost structure is more sustainable

Recommended Actions

1. Accelerate Managed Inference

Crusoe proved the market.[14] A multi-chip architecture is a genuine differentiator. Ship the product.

2. Study MemoryAlloy Architecture

Their distributed KV-cache is the right technical direction.[10] Evaluate build vs. partner for the platform's equivalent.

3. Lead with Compliance

Crusoe is hiring their first security PM now.[12] The platform can get ahead by shipping SOC 2, HIPAA-ready inference first.

4. Build the Product Team

Crusoe has SVP Product,[15] GPMs,[11] Staff PMs,[27] pricing PMs.[13] The platform needs equivalent leadership to compete.


Appendix

A. Data Center Locations

Location | Capacity | Power Source | Status
Abilene, TX[6] | 1.2 GW | Grid + renewables | Phase 1 live
Wyoming (Tallgrass)[5] | 1.8 GW (to 10 GW) | Grid + renewables | Under construction
Snyder, TX (Energy Vault)[20] | 25 MW initial | Spark modular | Deploying 2026
Norway[5] | 12 MW (to 52 MW) | Hydroelectric | Operational
Iceland (ICE02)[5] | Expansion | Geothermal + hydro | Expanding
Satellite (Starcloud)[21] | Limited GPU | Solar | Launch late 2026

B. Key Customers[5]

Segment | Customer | Relationship
Hyperscaler | OpenAI / Oracle (Stargate)[5] | $12B campus build + operations
AI Startup | Anysphere / Cursor[5] | Cloud compute customer
AI Startup | Together AI[5] | Cloud compute customer
AI Startup | Windsurf[5] | Cloud compute customer
AI Startup | Decart[5] | Exclusive model partner
Enterprise | Sony[5] | Cloud compute customer
Enterprise | Databricks[5] | Cloud compute customer
Research | MIT[5] | Academic partnership
Validation | Meta (PyTorch team)[17] | "1600 GPUs via Slurm just worked"

C. Sources & Footnotes

  1. [1] Data Center Frontier, "The Evolution of the AI Cloud," datacenterfrontier.com
  2. [2] Contrary Research, "Crusoe Business Breakdown & Founding Story," Easter-Owens acquisition details, research.contrary.com/company/crusoe
  3. [3] Crusoe Cloud Platform Page, product features, VPC, RDMA, SDKs, uptime claims, crusoe.ai/cloud
  4. [4] Contrary Research, founding story: Chase Lochmiller (MIT, Stanford, Jump Trading), Cully Cavness (Middlebury, Oxford), Denver 2018, research.contrary.com/company/crusoe
  5. [5] Contrary Research, comprehensive financials: $3.9B total funding, NYDIG divestiture, 400+ modular units, Easter-Owens acquisition (100-week to 22-week lead times), $200M GPU loan, Series D $600M at $2.8B, 9.8M sq ft, 946K GPU capacity, $300M annual interest, customer list, data center locations, energy pipeline, Stargate/OpenAI contract, research.contrary.com/company/crusoe
  6. [6] Crusoe Series E Announcement: $1.375B at $10B+ valuation, 137 investors (NVIDIA, Founders Fund, Fidelity, Mubadala, Salesforce Ventures), 45+ GW pipeline, 3.4 GW capacity, Abilene 1.2 GW, Wyoming 1.8 GW, bookings 5x growth, crusoe.ai/resources/newsroom/crusoe-announces-series-e-funding
  7. [7] Sacra Research, Crusoe revenue projections: $276M (2024) to $998M (2025), 262% YoY growth, sacra.com/c/crusoe
  8. [8] TipRanks, "Crusoe Marks 1,000 Employees and AI Inference Launch," Dec 2025, tipranks.com
  9. [9] Crusoe Careers Page (Ashby), 100+ open positions across all departments, jobs.ashbyhq.com/Crusoe
  10. [10] Crusoe Engineering Blog, "MemoryAlloy: Reinventing KV Caching for Cluster-Scale Inference," all technical architecture details: Rust implementation, CUDA/ROCm kernels, full mesh network, Shadow Pools, Send Graph, 80-130 GB/s per GPU, 250+ GB/s aggregate, 38x/34x TTFT improvements, near-linear scaling, crusoe.ai/resources/blog/crusoe-memoryalloy
  11. [11] Crusoe Job Posting: Group PM, Storage (SF/Sunnyvale), Block/File/Object storage, $233K-$282K, jobs.ashbyhq.com/Crusoe/d6a78556
  12. [12] Crusoe Job Posting: Group PM, Security & Compliance, $237K-$288K, "inaugural dedicated PM in this domain," SOC 2 Type II, ISO 27001, HIPAA, FedRAMP roadmap, jobs.ashbyhq.com/Crusoe/2671fc66
  13. [13] Crusoe Job Posting: PM, Pricing/Cloud Economics/Product Strategy (SF/NYC), $172K-$209K, jobs.ashbyhq.com/Crusoe/e65694db
  14. [14] Crusoe Newsroom, "Crusoe Launches Managed Inference, Delivering Breakthrough Speed for Production AI," 5x tokens/sec, crusoe.ai/resources/newsroom/crusoe-launches-managed-inference
  15. [15] Crusoe Leadership Page, all executive titles and org structure, crusoe.ai/about/leadership
  16. [16] TSG Invest, Crusoe Energy Private Investment Guide: Michael Gordon led MongoDB's 2017 IPO, appointment signals IPO preparation, tsginvest.com/crusoe-energy
  17. [17] Crusoe Newsroom, "Crusoe Cloud Announces New AI Platform Services," Managed Inference, AutoClusters, Slurm, NVIDIA DCGM, Meta/PyTorch validation ("1600 GPUs just worked"), Q2 2025 preview at NVIDIA GTC, crusoe.ai/resources/newsroom/crusoe-cloud-announces-new-ai-platform-services
  18. [18] McKinsey, "How Crusoe Powers and Transforms AI with Stranded Energy," mckinsey.com
  19. [19] Crusoe Newsroom, "Crusoe Introduces Crusoe Spark: Modular AI Data Centers for Scalable Edge Computing," 400+ units deployed, 3-month delivery, turnkey specs, target markets, crusoe.ai/resources/newsroom/crusoe-introduces-crusoe-spark
  20. [20] BusinessWire, "Energy Vault and Crusoe Announce Strategic Framework Agreement for Deployment of Crusoe Spark Modular AI Factory Units," scalable to 25 MW, Snyder TX, 2026 deployments, businesswire.com
  21. [21] Crusoe Newsroom, "Crusoe to Become First Cloud Operator in Space Through Strategic Partnership with Starcloud," late 2026 launch, crusoe.ai/resources/newsroom/crusoe-starcloud
  22. [22] Crusoe Managed Inference Product Page, 9.9x TTFT claim, supported models, pricing table, Intelligence Foundry, Batch API "coming soon," crusoe.ai/cloud/managed-inference
  23. [23] Crusoe Cloud Pricing Page, all GPU pricing (on-demand, spot), CPU pricing, storage pricing, managed inference token pricing, no data transfer charges, per-minute billing, crusoe.ai/cloud/pricing
  24. [24] Crusoe Blog, "Crusoe Managed Kubernetes (CMK) Now a Partner-Certified Distribution for NVIDIA Run:ai," crusoe.ai/resources/blog/crusoe-managed-kubernetes-nvidia-run-ai
  25. [25] Crusoe Blog, "Crusoe Cloud Now Offers Spot Pricing: Access Powerful GPUs Up to 90% Off Hyperscaler On-Demand Prices," crusoe.ai/resources/blog/crusoe-cloud-now-offers-spot-pricing
  26. [26] Growjo, "Crusoe Energy Systems: Revenue, Competitors, Alternatives," 73% employee growth rate, growjo.com/company/Crusoe_Energy_Systems
  27. [27] Crusoe Job Posting: Staff PM, Managed Inference (SF/Sunnyvale/NYC), $204K-$247K + RSU, 6+ years technical PM, jobs.ashbyhq.com/Crusoe/6cc6dcf0
  28. [28] Crusoe Job Posting: Group PM, Storage (Denver/Seattle), $206K-$250K + RSU, Block/File/Object storage, jobs.ashbyhq.com/Crusoe/16d96420
  29. [29] Crusoe Job Posting: PM, Pricing/Cloud Economics (Denver/Seattle), $150K-$182K + RSU, jobs.ashbyhq.com/Crusoe/60a48f8a
  30. [30] Crusoe Job Posting: Senior Developer Relations Manager (SF), $160K-$190K + RSU, Python/PyTorch/TensorFlow/JAX, jobs.ashbyhq.com/Crusoe/0555784d
  31. [31] Network World, "AI Clouds Roll In, Challenge Hyperscalers for AI Workloads," CoreWeave $65B, Lambda $2.5B, Nebius $24.3B, employee counts, networkworld.com
  32. [32] Fierce Network, "AI Clouds Ride a Runaway Revenue Growth Train to 2030," $180B by 2030, 69% CAGR, CoreWeave $2.1B H1 2025, Lambda $250M+, Nebius $156M, fierce-network.com

D. Methodology

This report was compiled from 32 primary sources including Crusoe's corporate website, 8 individual job postings (Ashby), press releases, investor announcements, third-party research (Contrary Research, Sacra, McKinsey, Growjo), and industry publications (Data Center Frontier, Fierce Network, Network World, TipRanks). Revenue projections are estimated from Sacra Research. Organizational structure is inferred from the official leadership page and job descriptions. All performance claims are self-reported by Crusoe unless otherwise noted. Report accessed and compiled February 14-16, 2026.