Competitive Intelligence Report
Crusoe IaaS Strategy Analysis
How an energy-first AI cloud is building a full AI infrastructure platform — and what it means for the platform
February 16, 2026
Analyst: MinjAI Agents
For: AI Infrastructure Strategy & Product Leaders
Executive Summary
Crusoe is a vertically integrated "AI cloud"[1] that owns energy assets, builds data centers, manufactures its own equipment,[2] and sells GPU compute and managed AI inference as cloud services.[3] Founded in 2018 as a Bitcoin mining operation using stranded natural gas,[4] the company completed its full pivot to AI infrastructure by March 2025 when it divested its entire Bitcoin division to NYDIG.[5]
Strategic Implications
Crusoe is 18-24 months ahead of the platform on cloud platform buildout and managed inference. They have shipped a full IaaS product suite,[3] built proprietary inference technology (MemoryAlloy),[10] and secured a $12B OpenAI data center contract.[5] Their hiring reveals they are now building enterprise storage,[11] security/compliance certifications,[12] and pricing optimization.[13] The platform should treat Crusoe as the primary competitive benchmark for its IaaS strategy.
Five Action Items
- Accelerate managed inference launch. Crusoe proved the market.[14] A multi-chip architecture is a genuine differentiator. Ship it.
- Study MemoryAlloy architecture. Their distributed KV-cache achieves 9.9x TTFT improvement.[10] Evaluate build vs. partner.
- Lead with compliance. Crusoe is hiring their first security PM now.[12] The platform can get ahead on SOC 2, HIPAA, FedRAMP.
- Build the product team. Crusoe has an SVP of Product,[15] multiple GPMs, Staff PMs.[9] The platform needs equivalent leadership.
- Consider a clean break from BTC positioning. Crusoe's valuation went from $2.8B to $10B+ in 7 months after divesting Bitcoin.[6][5]
Product Architecture and Technical Stack
Crusoe has built a complete IaaS and PaaS offering.[3] Below is the full product stack from managed services down to physical infrastructure.
- Managed Inference (MemoryAlloy engine)[10]
- Intelligence Foundry (Model catalog + API portal)[22]
- Provisioned Throughput[23]
- Batch API (Coming Soon)[22]
- Managed Kubernetes (CMK)[24]
- AutoClusters (Fault-tolerant orchestration)[17]
- NVIDIA Run:ai Integration[24]
- Cluster Observability (NVIDIA DCGM)[17]
- Console, CLI, APIs, Terraform, SDKs[3]
- GPU Instances (NVIDIA GB200, B200, H200, H100, A100, L40S, A40)[23]
- GPU Instances (AMD MI355X, MI300X)[23]
- CPU Instances (General + Storage-optimized)[23]
- Block Storage (Persistent Disks)[23] (Building[11])
- File Storage (Shared Disks)[23] (Building[11])
- Object Storage (Building[11])
- Global Backbone (NA + Europe)[3]
- Topology-Aware GPU Placement[3]
- Hyperscale Campuses (Abilene 1.2 GW,[6] Wyoming 1.8 GW[6])
- Crusoe Spark (Modular AI Factory, 400+ units)[19]
- In-House Manufacturing (Easter-Owens)[5]
- Power: Gas, Solar, Wind, Hydro, Geothermal[5]
GPU Pricing (On-Demand)[23]
| GPU | Memory | On-Demand | Spot | Notes |
| NVIDIA GB200 NVL72 | 186 GB | Contact Sales | Contact Sales | Latest generation |
| NVIDIA B200 HGX | 180 GB | Contact Sales | Contact Sales | Blackwell |
| NVIDIA H200 HGX | 141 GB | $4.29/hr | Contact Sales | |
| NVIDIA H100 HGX | 80 GB | $3.90/hr | $1.60/hr | 59% spot discount |
| AMD MI300X | 192 GB | $3.45/hr | $0.95/hr | 72% spot discount |
| NVIDIA A100 SXM | 80 GB | $1.95/hr | $1.30/hr | |
| AMD MI355X | 288 GB | Contact Sales | Contact Sales | Coming Fall 2025 |
Key Pricing Differentiators
- No data transfer charges (ingress or egress)[23] — major advantage vs. hyperscalers
- Per-minute billing, no upfront setup fees[23]
- Spot instances up to 90% off hyperscaler on-demand pricing[25]
- 99.98% uptime with automatic node swapping[3]
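The quoted discounts and billing model can be sanity-checked directly from the table. A minimal sketch (rates taken from the table above; the helper function is ours, not a Crusoe API):

```python
# Sanity-check the spot discounts quoted in the pricing table.
# Rates come directly from the table; the helper name is illustrative.

def spot_discount(on_demand: float, spot: float) -> int:
    """Spot discount vs. on-demand, as a whole percentage."""
    return round((on_demand - spot) / on_demand * 100)

# H100 HGX: $3.90/hr on-demand, $1.60/hr spot
print(spot_discount(3.90, 1.60))  # 59, matching the table

# AMD MI300X: $3.45/hr on-demand, $0.95/hr spot
print(spot_discount(3.45, 0.95))  # 72, matching the table

# Per-minute billing: a 90-minute H100 spot job bills 90 minutes,
# not a rounded-up 2 hours.
cost = 90 / 60 * 1.60
print(f"${cost:.2f}")  # $2.40
```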
Managed Inference Deep Dive
Crusoe Managed Inference is a fully managed, API-driven inference service.[14] Customers call an OpenAI-compatible API endpoint. No infrastructure management required. The key technical differentiator is MemoryAlloy, their proprietary distributed KV-cache fabric.[10]
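Because the endpoint is OpenAI-compatible, integration is essentially a base-URL swap in any OpenAI-style client. A minimal sketch of the request shape, assuming standard bearer-token auth; the URL and model ID below are placeholders, not confirmed values:

```python
import json
import urllib.request

# Hypothetical endpoint and model ID: placeholders, not confirmed values.
BASE_URL = "https://api.example-inference.cloud/v1"
payload = {
    "model": "llama-3.3-70b-instruct",
    "messages": [{"role": "user", "content": "Summarize KV caching in one line."}],
    "max_tokens": 64,
}

# Standard OpenAI-style chat completions request; any OpenAI SDK or plain
# HTTP client works against a compatible endpoint.
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Authorization": "Bearer <API_KEY>", "Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncomment with a real endpoint and key
print(req.full_url)
```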
How MemoryAlloy Works
Architecture Overview
MemoryAlloy decouples KV-cache data from individual GPU processes and exposes them as shared cluster resources.[10] Each node runs a Unified Memory service connected via peer-to-peer discovery, forming a full mesh network.[10] Written in Rust with Python bindings and custom CUDA/ROCm kernels.[10]
Core Technical Components
- Cluster-Wide Cache: Instead of each GPU maintaining an isolated KV cache, MemoryAlloy creates a shared memory pool across all cluster nodes. An 8-node H100 cluster provides roughly 5-11 TB of unified KV storage (8 x 640 GB-1.4 TB per node) vs. 640 GB-1.4 TB isolated per node.[10]
- Multi-Rail Data Movement: Distributes transfers across PCIe lanes, NVLink, and network adapters in parallel. Achieves 80-130 GB/s per GPU (vs. ~46 GB/s single link). Aggregate: 250+ GB/s for 8-GPU transfers.[10]
- KV-Aware Gateway: Routes requests to the node that already has relevant prefix cache data. Estimates prefill cost per request and picks the engine that delivers earliest first-token.[10]
- Shadow Pools & Send Graph: Pre-allocated GPU memory staging. DAG-based pipelined data movement. Eliminates NIC registration overhead.[10]
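The KV-aware gateway can be sketched as a simple cost model: estimate prefill work as the prompt tokens not covered by each engine's cached prefix, then route to the engine with the lowest estimated time-to-first-token. This is an illustrative reconstruction, not Crusoe's implementation; all names and constants are assumed:

```python
from dataclasses import dataclass

@dataclass
class Engine:
    name: str
    cached_prefix_tokens: int    # longest prompt prefix already in this engine's KV cache
    prefill_ms_per_token: float  # estimated prefill cost per uncached token
    queue_ms: float              # current queueing delay on this engine

def estimated_ttft_ms(engine: Engine, prompt_tokens: int) -> float:
    """Estimate time-to-first-token: queue delay plus prefill of uncached tokens."""
    uncached = max(prompt_tokens - engine.cached_prefix_tokens, 0)
    return engine.queue_ms + uncached * engine.prefill_ms_per_token

def route(engines: list, prompt_tokens: int) -> Engine:
    """Pick the engine expected to deliver the earliest first token."""
    return min(engines, key=lambda e: estimated_ttft_ms(e, prompt_tokens))

engines = [
    Engine("node-a", cached_prefix_tokens=100_000, prefill_ms_per_token=0.05, queue_ms=20),
    Engine("node-b", cached_prefix_tokens=0,       prefill_ms_per_token=0.05, queue_ms=5),
]
# A 110K-token prompt with a 100K cached prefix on node-a: only 10K tokens
# need prefill there (520 ms est.), so node-a wins despite its longer queue.
best = route(engines, prompt_tokens=110_000)
print(best.name)  # node-a
```

This mirrors the report's claim that cache hits on long prompts dominate routing decisions: the uncached-token term dwarfs queueing delay at 100K+ token contexts.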
Performance Claims (Self-Reported)[10][22]
| Metric | Improvement | Benchmark Context |
| Time-to-First-Token (TTFT) | 9.9x faster vs. vLLM[22] | Llama-3.3-70B, multi-node |
| Throughput (tokens/sec) | 5x higher[14] | Production workloads |
| Local Cache Hit TTFT | 38x faster[10] | 110K-token prompts |
| Remote Cache Hit TTFT | 34x faster[10] | Near-local performance |
| Chat Session TTFT | Sub-150ms[10] | 4-node, Llama-3.3-70B |
| Multi-Node Scaling | Near-linear[10] | Validated 1-8 nodes |
Supported Models and Pricing[22][23]
| Model | Input ($/1M tokens) | Output ($/1M tokens) | Cached | Max Context |
| Llama 3.3 70B Instruct | $0.25 | $0.75 | $0.13 | 131K |
| DeepSeek V3 0324 | $0.50 | $1.50 | $0.25 | 164K |
| DeepSeek R1 0528 | $1.35 | $5.40 | $0.68 | 164K |
| Qwen3 235B A22B | $0.22 | $0.80 | $0.11 | 262K |
| Kimi-K2 Thinking | $0.60 | $2.50 | $0.30 | 131K |
| GPT-OSS 120B | $0.15 | $0.60 | $0.08 | 131K |
| Gemma 3 12B | $0.08 | $0.30 | $0.04 | 131K |
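Given the per-million-token rates above, request cost is simple arithmetic. A sketch assuming cached input tokens are billed at the cached rate and the remainder at the full input rate (our reading of the table, not confirmed billing mechanics):

```python
def request_cost(input_tokens, cached_tokens, output_tokens,
                 input_rate, cached_rate, output_rate):
    """Cost in dollars; rates are $ per 1M tokens, cached tokens billed at the cached rate."""
    uncached = input_tokens - cached_tokens
    return (uncached * input_rate + cached_tokens * cached_rate
            + output_tokens * output_rate) / 1_000_000

# Llama 3.3 70B Instruct: $0.25 input / $0.13 cached / $0.75 output per 1M tokens.
# 100K-token prompt, 80K of it cached, 2K-token response:
cost = request_cost(100_000, 80_000, 2_000, 0.25, 0.13, 0.75)
print(f"${cost:.4f}")  # $0.0169
```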
Crusoe Spark: The Edge and Modular Play
Crusoe Spark is a turnkey, prefabricated modular AI data center.[19] Each unit is self-contained: power, cooling, fire suppression, monitoring, and GPU racks.[19] Delivered in as little as 3 months.[19]
Hyperscale Campus
| Scale | 1+ GW[6] |
| Timeline | 12-24 months[5] |
| Use Case | Training, large-scale inference |
| Examples | Abilene (1.2 GW)[6], Wyoming (1.8 GW)[6] |
Crusoe Spark (Modular)
| Scale | 1-25 MW per site[20] |
| Timeline | 3 months[19] |
| Use Case | Edge inference, capacity expansion |
| Deployed | 400+ units globally[19] |
Target Markets[19]
- Autonomous vehicles — Low-latency edge inference
- Healthcare — Real-time patient monitoring, HIPAA environments
- Manufacturing — Predictive maintenance at the factory floor
- Enterprise on-prem AI — Customers who cannot send data to public cloud
Recent Partnerships
| Partner | Date | Details |
| Energy Vault | Feb 2026 | Framework agreement for phased Spark deployment in Snyder, TX. Scalable to 25 MW.[20] |
| Redwood Materials | 2025 | Joint solar/battery-powered Spark deployment[5] |
| Starcloud | Feb 2026 | Crusoe Cloud on satellite. Launch late 2026. First cloud operator in space.[21] |
| Tallgrass | 2025 | 1.8 GW campus in Wyoming, scalable to 10 GW[5] |
Strategic Relevance
This is almost exactly what the platform's modular container infrastructure could deliver. Crusoe has a head start with 400+ deployed units,[19] but the platform's approach is architecturally similar. The key difference: Crusoe has already wrapped its modules in a cloud platform and managed inference service.[3]
Hiring Analysis: Size, Shape, and Signals
Open Positions by Department[9]
| Department | Est. Open Roles | Signal |
| Digital Infrastructure (Construction/Ops) | 30-40 | Massive physical buildout continues |
| Cloud Engineering | 10-15 | Platform scaling |
| Product & Design | 8-10 | Product expansion phase |
| Strategic Finance & Corp Dev | 7+ | IPO Prep[16] |
| Manufacturing | 5-8 | In-house hardware production |
| Procurement & Sourcing | 5+ | Supply chain scaling |
| IT, Compliance, Security | 3-5 | Enterprise readiness |
| Marketing & GTM | 3-5 | Customer acquisition ramp |
| Power Infrastructure | 3-5 | Energy portfolio expansion |
Product & Design Open Roles (Detailed)
| Role | Location | Salary | What It Tells Us |
| Staff PM, Managed Inference[27] | SF / NYC | $204K-$247K + RSU[27] | Inference-as-a-Service is the flagship product. Senior IC owning the full lifecycle. |
| Group PM, Storage (x2)[11][28] | SF + Denver | $206K-$282K + RSU[11][28] | Building Block, File, and Object storage. Two GPM hires = highest priority. |
| Group PM, Security & Compliance[12] | SF | $237K-$288K + RSU[12] | First dedicated security PM.[12] SOC 2 Type II, ISO 27001, HIPAA, FedRAMP roadmap.[12] |
| PM, Pricing / Cloud Economics (x2)[13][29] | SF + Denver | $150K-$209K + RSU[13][29] | Pricing engine, margin optimization, deal desk tooling.[13] |
| Senior DevRel Manager[30] | SF | $160K-$190K + RSU[30] | Developer community (PyTorch, TensorFlow, JAX).[30] Developer-first GTM. |
Five Signals from Hiring Patterns
- Storage is the next major product area. Two GPM hires for Block/File/Object[11] = building AWS EBS/S3/EFS equivalent.
- Security/Compliance is gating enterprise deals. First dedicated PM.[12] Need SOC 2, HIPAA, FedRAMP.[12]
- Pricing is a strategic weapon. Two dedicated PMs building sophisticated models.[13]
- Inference is the flagship. Staff PM at $204K-$247K.[27] Not a side project.
- DevRel signals developer-first GTM. Winning open-source ML community is the strategy.[30]
Organizational Structure and Product Org Map
Based on leadership team data[15] and job descriptions,[9] here is the inferred organizational structure. Open roles currently being hired are listed with their posted salary ranges.
- SVP, Power Infrastructure[15]
- Product & Design (reporting to Erwan Menard, SVP Product),[15] open roles:
  - Staff PM, Managed Inference ($204K-$247K)
  - Group PM, Storage ($206K-$282K)
  - Group PM, Security & Compliance ($237K-$288K)
  - PM, Pricing & Cloud Economics ($150K-$209K)
  - Sr DevRel Manager ($160K-$190K)
- Cloud Engineering (reporting to Nadav Eiron, SVP Cloud Eng)[15]
Competitive Positioning: AI Cloud Landscape
AI clouds are purpose-built GPU cloud providers competing with hyperscalers on price and GPU specialization.[31] The market is projected to reach $180B by 2030, a 69% CAGR.[32]
Crusoe vs. AI Cloud Peers
| Metric | CoreWeave | Crusoe | Lambda Labs | Nebius |
| H1 2025 Revenue | $2.1B[32] | ~$500M (est.)[7] | $250M+[32] | $156M[32] |
| Valuation | $65B (public)[31] | $10B+[6] | $2.5B[31] | $24.3B (public)[31] |
| Employees | 1,500+[31] | 1,000+[8] | 500+[31] | 2,000+[31] |
| Key Differentiator | NVIDIA early access[31] | Vertical integration[5] | 1-Click Clusters[31] | Yandex heritage[31] |
| Managed Inference | Yes | Yes (MemoryAlloy)[10] | No | Yes |
| Own Data Centers | Limited | Yes[5] | No | Yes |
| Own Energy | No | Yes (45 GW)[6] | No | No |
| Manufacturing | No | Yes[5] | No | No |
| Anchor Customer | Microsoft | OpenAI/Oracle[5] | AI startups | EU enterprises |
Crusoe's Unique Position
Crusoe is the only AI cloud that is fully vertically integrated from energy production through managed AI services:[5]
- Structural cost advantage: Owns the power (a significant portion of inference cost)[5]
- Speed: In-house manufacturing cuts vendor lead times from 100 weeks to 22 weeks[5]
- Edge capability: Crusoe Spark enables rapid distributed deployments[19]
- Sustainability: Clean energy positioning wins ESG-conscious enterprise buyers[18]
Crusoe vs. Inference Platform: Head-to-Head
Crusoe
| Origin | BTC mining (stranded gas)[4] |
| AI Pivot | 2023 (full exit Mar 2025)[5] |
| Cloud Platform | Live[3] |
| Managed Inference | Live (MemoryAlloy)[14] |
| Chip Partners | NVIDIA (Preferred[5]), AMD[23] |
| DC Scale | 3.4 GW, 9.8M sq ft[5] |
| Revenue | ~$1B (2025)[7] |
| Product Team | SVP + 8+ PMs hiring[15][9] |
Platform
| Origin | BTC mining |
| AI Pivot | 2024-2025 (in progress) |
| Cloud Platform | In Development |
| Managed Inference | In Development |
| Chip Partners | Multiple GPU/accelerator vendors |
| DC Scale | Smaller footprint |
| Revenue | Primarily BTC mining |
| Product Team | Building |
The Platform's Potential Advantages
- Multi-chip architecture with multiple GPU/accelerator vendors enables workload-optimal routing. Crusoe is NVIDIA + AMD only.[23]
- Dedicated/sovereign environments for compliance-heavy verticals. Crusoe is just starting to hire for security/compliance.[12]
- Cost discipline: The platform's energy ownership enables structurally lower cost of compute. Crusoe has ~$300M/year interest expense.[5]
Appendix
A. Data Center Locations
| Location | Capacity | Power Source | Status |
| Abilene, TX[6] | 1.2 GW | Grid + renewables | Phase 1 Live |
| Wyoming (Tallgrass)[5] | 1.8 GW (to 10 GW) | Grid + renewables | Under Construction |
| Snyder, TX (Energy Vault)[20] | 25 MW initial | Spark modular | Deploying 2026 |
| Norway[5] | 12 MW (to 52 MW) | Hydroelectric | Operational |
| Iceland (ICE02)[5] | Expansion | Geothermal + hydro | Expanding |
| Satellite (Starcloud)[21] | Limited GPU | Solar | Launch Late 2026 |
B. Key Customers[5]
| Segment | Customer | Relationship |
| Hyperscaler | OpenAI / Oracle (Stargate)[5] | $12B campus build + operations |
| AI Startup | Anysphere / Cursor[5] | Cloud compute customer |
| AI Startup | Together AI[5] | Cloud compute customer |
| AI Startup | Windsurf[5] | Cloud compute customer |
| AI Startup | Decart[5] | Exclusive model partner |
| Enterprise | Sony[5] | Cloud compute customer |
| Enterprise | Databricks[5] | Cloud compute customer |
| Research | MIT[5] | Academic partnership |
| Validation | Meta (PyTorch team)[17] | "1600 GPUs via Slurm just worked" |
D. Methodology
This report was compiled from 32 primary sources including Crusoe's corporate website, 8 individual job postings (Ashby), press releases, investor announcements, third-party research (Contrary Research, Sacra, McKinsey, Growjo), and industry publications (Data Center Frontier, Fierce Network, Network World, TipRanks). Revenue projections are estimated from Sacra Research. Organizational structure is inferred from the official leadership page and job descriptions. All performance claims are self-reported by Crusoe unless otherwise noted. Report accessed and compiled February 14-16, 2026.