Crusoe is a vertically integrated managed inference platform company[1] that owns energy assets, builds data centers, manufactures its own equipment,[2] and sells GPU compute, managed AI inference, and a full AI platform (Intelligence Foundry) as cloud services.[3] Founded in 2018 as a Bitcoin mining operation using stranded natural gas,[4] the company completed its full pivot to AI infrastructure by March 2025 when it divested its entire Bitcoin division to NYDIG.[5] Crusoe now supports Bring Your Own Model (BYOM) for custom fine-tuned model deployment,[33] and its cloud bookings grew 5x in the first three quarters of 2025.[6]
Crusoe is an estimated 18-24 months ahead on its cloud platform and managed inference offerings. The company has shipped a full IaaS product suite,[3] built proprietary inference technology (MemoryAlloy),[10] launched BYOM custom model support[33] and the Intelligence Foundry model catalog,[22] and secured a $12B OpenAI data center contract.[5] Under Erwan Menard (SVP Product, ex-Google Cloud AI/Vertex AI),[34] it is rapidly building a managed inference platform that competes directly with Together AI, Fireworks AI, and Baseten, not just with other GPU clouds.
| Name | Title | Background |
|---|---|---|
| Chase Lochmiller | CEO, Co-Founder, Chairman[15] | MIT (math/physics), Stanford (CS/AI), Jump Trading[4] |
| Cully Cavness | President, CSO, Co-Founder[15] | Middlebury, Oxford MBA, energy investment banking[4] |
| Michael Gordon | COO & CFO[15] | Led MongoDB's 2017 IPO[16] |
| Nitin Perumbeti | CTO[15] | Technology leadership |
| Erwan Menard | SVP, Product Management[34] | Ex-Google Cloud AI: Director of PM for Vertex AI (Model Garden, Agent Builder, Search). CEO of Elastifile (acquired by Google, now powers Filestore). Joined Aug 2025.[34] |
| Nadav Eiron | SVP, Cloud Engineering[15] | Cloud platform engineering[17] |
| Chris Dolan | Chief Data Center Officer[15] | DC operations |
| Nick Sammut | SVP, Strategic Finance & Corp Dev[15] | Capital formation, M&A |
| Eesha Pathak | Sr. Director, Product Management[43] | Ex-Google Cloud AI: Head of Product, Enterprise AI & International Expansion. 15+ years product leadership. |
| Aditya Shanker | Group PM, Inference[37] | Inference product lead, co-authored MemoryAlloy launch |
| Omar Lari | Sr. Director PM, IaaS[38] | Infrastructure-as-a-Service product lead |
| Round | Date | Amount | Valuation | Lead Investors |
|---|---|---|---|---|
| Series A[5] | 2019 | $70M | -- | Valor Equity |
| Series B[5] | 2021 | $350M | -- | G2 Venture Partners |
| Series C[5] | 2022 | $505M | $2B+ | -- |
| GPU Loan[5] | Late 2023 | $200M | -- | -- |
| Series D[5] | Mar 2025 | $600M | $2.8B | Founders Fund |
| Series E[6] | Oct 2025 | $1.375B | $10B+ | Mubadala, Valor Equity |
| Total[5] | -- | ~$3.9B | -- | -- |
Crusoe has built a complete IaaS and PaaS offering.[3] Below is the full product stack from managed services down to physical infrastructure.
| GPU | Memory | On-Demand | Spot | Notes |
|---|---|---|---|---|
| NVIDIA GB200 NVL72 | 186 GB | Contact Sales | Contact Sales | Latest generation |
| NVIDIA B200 HGX | 180 GB | Contact Sales | Contact Sales | Blackwell |
| NVIDIA H200 HGX | 141 GB | $4.29/hr | Contact Sales | |
| NVIDIA H100 HGX | 80 GB | $3.90/hr | $1.60/hr | 59% spot discount |
| AMD MI300X | 192 GB | $3.45/hr | $0.95/hr | 72% spot discount |
| NVIDIA A100 SXM | 80 GB | $1.95/hr | $1.30/hr | |
| AMD MI355X | 288 GB | Contact Sales | Contact Sales | Coming Fall 2025 |
Crusoe Managed Inference is a fully managed, API-driven inference service.[14] Customers call an OpenAI-compatible API endpoint. No infrastructure management required. The key technical differentiator is MemoryAlloy, their proprietary distributed KV-cache fabric.[10]
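As a concrete illustration of the OpenAI-compatible interface, here is a minimal sketch that streams a chat completion and times the first token. The base URL, environment variable, and model identifier are illustrative placeholders, not documented Crusoe values; substitute the values issued by the Intelligence Foundry portal.

```python
import os
import time

from openai import OpenAI  # standard OpenAI SDK; works against any OpenAI-compatible endpoint

# Placeholder endpoint and credentials -- nothing below is a documented Crusoe constant.
client = OpenAI(
    base_url="https://inference.example.crusoe.ai/v1",
    api_key=os.environ["CRUSOE_API_KEY"],
)

start = time.perf_counter()
first_token_at = None

stream = client.chat.completions.create(
    model="llama-3.3-70b-instruct",  # placeholder model identifier
    messages=[{"role": "user", "content": "Explain KV-cache reuse in two sentences."}],
    stream=True,
)

for chunk in stream:
    if not chunk.choices or chunk.choices[0].delta.content is None:
        continue
    if first_token_at is None:
        first_token_at = time.perf_counter()
        print(f"TTFT: {(first_token_at - start) * 1000:.0f} ms")
    print(chunk.choices[0].delta.content, end="", flush=True)
```

Because the interface is OpenAI-compatible, existing SDKs and agent frameworks can be pointed at the endpoint by changing only the base URL and API key.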
MemoryAlloy decouples KV-cache data from individual GPU processes and exposes it as a shared cluster resource.[10] Each node runs a Unified Memory service connected via peer-to-peer discovery, forming a full mesh network.[10] It is written in Rust with Python bindings and custom CUDA/ROCm kernels.[10]
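Crusoe has not published MemoryAlloy's internals beyond this description, but the lookup order implied by the benchmark tiers below (local hit, remote hit, miss) can be sketched with an illustrative toy. The Python below is invented for explanation; it is not Crusoe's Rust implementation, and all names are hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class NodeCache:
    """Toy stand-in for one node's unified-memory KV-cache service."""
    node_id: str
    entries: dict = field(default_factory=dict)  # prompt-prefix hash -> cached KV blocks


class ClusterKVCache:
    """Illustrative full-mesh lookup: local node first, then peers, then recompute."""

    def __init__(self, local: NodeCache, peers: list[NodeCache]):
        self.local = local
        self.peers = peers

    def get(self, prefix_hash: str):
        # 1. Local hit: fastest path (the tier Crusoe reports as ~38x faster TTFT).
        if prefix_hash in self.local.entries:
            return "local", self.local.entries[prefix_hash]
        # 2. Remote hit: fetch from a peer over the mesh instead of re-running prefill.
        for peer in self.peers:
            if prefix_hash in peer.entries:
                blocks = peer.entries[prefix_hash]
                self.local.entries[prefix_hash] = blocks  # keep a local copy for next time
                return f"remote:{peer.node_id}", blocks
        # 3. Miss: the caller runs prefill, then publishes the result for other nodes.
        return "miss", None

    def put(self, prefix_hash: str, kv_blocks) -> None:
        self.local.entries[prefix_hash] = kv_blocks
```

The production system presumably handles block-level prefix matching, eviction, and GPU-to-GPU transfers in Rust/CUDA; the sketch only captures the lookup order that the published local-hit and remote-hit TTFT numbers imply.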
| Metric | Improvement | Benchmark Context |
|---|---|---|
| Time-to-First-Token (TTFT) | 9.9x faster vs. vLLM[22] | Llama-3.3-70B, multi-node |
| Throughput (tokens/sec) | 5x higher[14] | Production workloads |
| Local Cache Hit TTFT | 38x faster[10] | 110K-token prompts |
| Remote Cache Hit TTFT | 34x faster[10] | Near-local performance |
| Chat Session TTFT | Sub-150ms[10] | 4-node, Llama-3.3-70B |
| Multi-Node Scaling | Near-linear[10] | Validated 1-8 nodes |
| GB200 NVL72 Fine-Tuning | 3x faster vs H100[47] | Llama 3.1 benchmark, Feb 2026 |
Crusoe achieved ISO 27001 (information security management) and ISO 42001 (AI governance) certifications in February 2026.[50] ISO 42001 is the world's first AI-specific governance standard (ISO/IEC 42001:2023). Crusoe is the only managed inference platform to hold this certification. Combined with existing SOC 2, this positions Crusoe ahead of Together AI (SOC 2 only) and at parity with Fireworks (SOC 2 + HIPAA + GDPR) on enterprise compliance.
Crusoe supports Bring Your Own Model (BYOM) — customers can deploy their own fine-tuned models on Crusoe's MemoryAlloy-powered infrastructure.[33] The Crusoe team works directly with customers to optimize performance for custom models. Combined with the Intelligence Foundry portal for API key generation, model selection, and endpoint management,[22] this positions Crusoe as a full managed inference platform — not just a GPU cloud with inference endpoints.
| Model | Input ($/1M tokens) | Output ($/1M tokens) | Cached ($/1M tokens) | Max Context |
|---|---|---|---|---|
| Llama 3.3 70B Instruct | $0.25 | $0.75 | $0.13 | 131K |
| DeepSeek V3 0324 | $0.50 | $1.50 | $0.25 | 164K |
| DeepSeek R1 0528 | $1.35 | $5.40 | $0.68 | 164K |
| Qwen3 235B A22B | $0.22 | $0.80 | $0.11 | 262K |
| Kimi-K2 Thinking | $0.60 | $2.50 | $0.30 | 131K |
| GPT-OSS 120B | $0.15 | $0.60 | $0.08 | 131K |
| Gemma 3 12B | $0.08 | $0.30 | $0.04 | 131K |
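To make the per-token rates concrete, here is a small cost sketch using the Llama 3.3 70B Instruct prices listed above, under the assumption that the cached rate applies per million cached input tokens; treat Crusoe's pricing page as authoritative.

```python
# Illustrative cost math using the Llama 3.3 70B Instruct rates from the table above.
INPUT_PER_M = 0.25    # $ per 1M uncached input tokens
CACHED_PER_M = 0.13   # $ per 1M cached input tokens (assumed interpretation of "Cached")
OUTPUT_PER_M = 0.75   # $ per 1M output tokens


def request_cost(input_tokens: int, cached_tokens: int, output_tokens: int) -> float:
    """Estimated dollar cost of one request; cached_tokens is the prompt portion served from cache."""
    uncached = input_tokens - cached_tokens
    return (
        uncached * INPUT_PER_M
        + cached_tokens * CACHED_PER_M
        + output_tokens * OUTPUT_PER_M
    ) / 1_000_000


# Example: a 100K-token prompt with an 80K-token cache hit and 2K output tokens:
# (20_000 * 0.25 + 80_000 * 0.13 + 2_000 * 0.75) / 1e6 = $0.0169 per request.
print(f"${request_cost(100_000, 80_000, 2_000):.4f}")
```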
Crusoe Spark is a turnkey, prefabricated modular AI data center.[19] Each unit is self-contained, with power, cooling, fire suppression, monitoring, and GPU racks,[19] and can be delivered in as little as three months.[19]
| Partner | Date | Details |
|---|---|---|
| Energy Vault | Feb 2026 | Framework agreement for phased Spark deployment in Snyder, TX. Scalable to 25 MW.[20] |
| Redwood Materials | 2025 | Joint solar/battery-powered Spark deployment[5] |
| Starcloud | Feb 2026 | Crusoe Cloud on satellite. Launch late 2026. First cloud operator in space.[21] |
| Tallgrass | 2025 | 1.8 GW campus in Wyoming, scalable to 10 GW[5] |
This is almost exactly what The platform's modular container infrastructure could deliver. Crusoe has a head start with 400+ deployed units,[19] but The platform's modular approach is architecturally similar. The key difference: Crusoe has already wrapped its units in a cloud platform and managed inference service.[3]
| Department | Est. Open Roles | Signal |
|---|---|---|
| Digital Infrastructure (Construction/Ops) | 30-40 | Massive physical buildout continues |
| Cloud Engineering | 10-15 | Platform scaling |
| Product & Design | 8-10 | Product expansion phase |
| Strategic Finance & Corp Dev | 7+ | IPO Prep[16] |
| Manufacturing | 5-8 | In-house hardware production |
| Procurement & Sourcing | 5+ | Supply chain scaling |
| IT, Compliance, Security | 3-5 | Enterprise readiness |
| Marketing & GTM | 3-5 | Customer acquisition ramp |
| Power Infrastructure | 3-5 | Energy portfolio expansion |
| Role | Location | Salary | What It Tells Us |
|---|---|---|---|
| Staff PM, Managed Inference[27] | SF / NYC | $204K-$247K + RSU[27] | Inference-as-a-Service is the flagship product. Senior IC owning full lifecycle. |
| Group PM, Storage (x2)[11][28] | SF + Denver | $206K-$282K + RSU[11][28] | Building Block, File, Object storage. Two GPM hires = highest priority. |
| Group PM, Security & Compliance[12] | SF | $237K-$288K + RSU[12] | First dedicated security PM.[12] SOC 2 Type II, ISO 27001, HIPAA, FedRAMP roadmap.[12] |
| PM, Pricing / Cloud Economics (x2)[13][29] | SF + Denver | $150K-$209K + RSU[13][29] | Pricing engine, margin optimization, deal desk tooling.[13] |
| Senior DevRel Manager[30] | SF | $160K-$190K + RSU[30] | Developer community (PyTorch, TensorFlow, JAX).[30] Developer-first GTM. |
Based on leadership team data[15] and job descriptions,[9] here is the inferred organizational structure. Green-dashed boxes indicate open roles currently being hired.
AI Clouds are purpose-built AI cloud providers competing with hyperscalers on price and GPU specialization.[31] The market is projected to hit $180B by 2030 at 69% CAGR.[32]
| Metric | CoreWeave | Crusoe | Lambda Labs | Nebius |
|---|---|---|---|---|
| H1 2025 Revenue | $2.1B[32] | ~$500M (est.)[7] | $250M+[32] | $156M[32] |
| Valuation | $65B (public)[31] | $10B+[6] | $2.5B[31] | $24.3B (public)[31] |
| Employees | 1,500+[31] | 1,000+[8] | 500+[31] | 2,000+[31] |
| Key Differentiator | NVIDIA early access[31] | Vertical integration[5] | 1-Click Clusters[31] | Yandex heritage[31] |
| Managed Inference | Yes | Yes (MemoryAlloy)[10] | No | Yes |
| Own Data Centers | Limited | Yes[5] | No | Yes |
| Own Energy | No | Yes (45 GW)[6] | No | No |
| Manufacturing | No | Yes[5] | No | No |
| Anchor Customer | Microsoft | OpenAI/Oracle[5] | AI startups | EU enterprises |
Crusoe is the only AI cloud that is fully vertically integrated from energy production through managed AI services:[5]
| Dimension | Crusoe | The platform |
|---|---|---|
| Origin | BTC mining (stranded gas)[4] | BTC mining |
| AI Pivot | 2023 (full exit Mar 2025)[5] | 2024-2025 (in progress) |
| Cloud Platform | Live[3] | In Development |
| Managed Inference | Live (MemoryAlloy)[14] | In Development |
| Chip Partners | NVIDIA (Preferred[5]), AMD[23] | Multiple GPU/accelerator vendors |
| DC Scale | 3.4 GW, 9.8M sq ft[5] | Smaller footprint |
| Revenue | ~$1B (2025)[7] | Primarily BTC mining |
| Product Team | SVP + 8+ PMs hiring[15][9] | Building |
Crusoe's managed inference service competes directly with pure-play inference platforms, not just GPU clouds. Under Erwan Menard (SVP Product, ex-Google Cloud AI),[34] Crusoe is building against Together AI, Fireworks AI, Baseten, and Inferact. This is the competitive lens that matters for Crusoe's product org.
| Dimension | Crusoe | Fireworks AI | Together AI | Baseten | Inferact |
|---|---|---|---|---|---|
| Core Engine | MemoryAlloy (Rust, custom CUDA)[10] | FireAttention V4 (custom CUDA, FP4)[39] | FlashAttention-3/4 (Tri Dao)[40] | Custom C++ + TensorRT-LLM[41] | vLLM (PagedAttention)[42] |
| BYOM Support | Yes[33] | Yes | Yes | Yes | Yes |
| Model Catalog | 8+ models (Intelligence Foundry)[22] | 100+ models[39] | 200+ models[40] | Custom deployments[41] | vLLM-based[42] |
| Llama 3.3 70B Input | $0.25/M[22] | $0.20/M[39] | $0.20/M[40] | Pay-per-use[41] | Enterprise[42] |
| Key Performance Claim | 9.9x TTFT vs vLLM[10] | Fastest compound AI[39] | FlashAttention-optimized | 99.99% uptime SLA[41] | Open-source baseline |
| Owns Infrastructure | Yes (DCs + Energy)[5] | No | No | No | No |
| Valuation | $10B+[6] | $4B[39] | $3.3B[40] | $5B[41] | $800M[42] |
| NVIDIA Backing | Investor[6] | No | No | $150M investment[41] | No |
Crusoe is the only managed inference platform that owns its infrastructure stack end-to-end. While Fireworks, Together, and Baseten rent GPUs from cloud providers, Crusoe owns the energy, data centers, and hardware. This creates a structural cost advantage that pure-play inference platforms cannot match at scale.
| # | Decision | Impact |
|---|---|---|
| 1 | Divested Bitcoin completely (Mar 2025)[5] | Clear signal to investors, customers, talent. Valuation: $2.8B[5] to $10B+[6] in 7 months. |
| 2 | Built managed services, not just raw compute[17] | Higher margins, stickier customers. Managed Inference[14] + AutoClusters[17] + Managed K8s.[24] |
| 3 | Invested in proprietary technology (MemoryAlloy)[10] | 9.9x TTFT improvement.[22] Real engineering moat. Rust-based, custom CUDA kernels.[10] |
| 4 | Hired product leadership early[15] | SVP Product,[15] SVP Cloud Eng,[15] multiple GPMs/Staff PMs[9] before shipping. |
| 5 | Leveraged existing hardware capability[5] | Easter-Owens acquisition: cut vendor lead times from 100 weeks to 22 weeks.[5] |
| # | Vulnerability | Opportunity |
|---|---|---|
| 1 | Narrow GPU vendor base (NVIDIA + AMD only)[23] | A multi-chip architecture offers workload-optimal routing |
| 2 | Heavy debt load (~$300M annual interest)[5] | The platform can target sustainable margins from day one |
| 3 | GPU pricing compression ($8/hr to $2/hr historically)[5] | Multi-chip flexibility hedges against single-vendor price erosion |
| 4 | HIPAA/FedRAMP gaps remain (ISO 27001+42001 achieved, but healthcare/gov verticals need more)[50] | The platform can pursue HIPAA and FedRAMP certification faster |
| 5 | Not yet profitable (rapid growth + massive capex)[5] | The platform's energy-first cost structure is more sustainable |
Crusoe proved the market.[14] A multi-chip architecture is a genuine differentiator. Ship the product.
Their distributed KV-cache is the right technical direction.[10] Evaluate build vs. partner for The platform's equivalent.
Crusoe achieved ISO 27001 + ISO 42001 in Feb 2026[50] — closing a major enterprise gap. HIPAA and FedRAMP remain open. The platform should pursue HIPAA-ready inference to capture healthcare verticals.
| Location | Capacity | Power Source | Status |
|---|---|---|---|
| Abilene, TX[6] | 1.2 GW | Grid + renewables | Phase 1 Live |
| Wyoming (Tallgrass)[5] | 1.8 GW (to 10 GW) | Grid + renewables | Under Construction |
| Snyder, TX (Energy Vault)[20] | 25 MW initial | Spark modular | Deploying 2026 |
| Norway[5] | 12 MW (to 52 MW) | Hydroelectric | Operational |
| Iceland (ICE02)[5] | Expansion | Geothermal + hydro | Expanding |
| Satellite (Starcloud)[21] | Limited GPU | Solar | Launch Late 2026 |
| Segment | Customer | Relationship |
|---|---|---|
| Hyperscaler | OpenAI / Oracle (Stargate)[5] | $12B campus build + operations |
| AI Coding | Anysphere / Cursor[6] | Cloud compute + inference customer |
| AI Platform | Together AI[6] | Cloud compute customer |
| AI Platform | Fireworks AI[6] | Cloud compute customer |
| AI Coding | Windsurf[5] | Cloud compute customer |
| AI Startup | Decart[5] | Exclusive model partner (MirageLSD) |
| AI Startup | Odyssey[6] | Cloud compute customer |
| Managed Inference | Wonderful.ai[14] | MemoryAlloy inference customer (agents at scale) |
| Managed Inference | Yutori[14] | Inference customer (performance optimization) |
| Managed Inference | Oaklet[14] | Inference customer (record processing) |
| Enterprise | Sony[5] | Cloud compute customer |
| Enterprise | Databricks[5] | Cloud compute customer |
| Research | MIT[5] | Academic partnership |
| Validation | Meta (PyTorch team)[17] | "1600 GPUs via Slurm just worked" |
This report was compiled from 51 primary sources, including Crusoe's corporate website, product pages, engineering blog (10 posts from Oct 2025-Feb 2026), 8 individual job postings (Ashby), press releases, investor announcements, third-party research (Contrary Research, Sacra, McKinsey, Growjo, ZoomInfo), and industry publications (Data Center Frontier, Fierce Network, Network World, TipRanks, Techstrong.ai). Revenue projections are estimated from Sacra Research. Organizational structure is inferred from the official leadership page, ZoomInfo, and job descriptions. Managed inference competitive positioning data is sourced from public pricing pages and company websites. All performance claims are self-reported by Crusoe unless otherwise noted. The report was originally compiled February 14-16, 2026, and updated February 20, 2026 with managed inference platform analysis, BYOM coverage, leadership additions (including Eesha Pathak), a competitive positioning map, and Feb 2026 product launches (ISO 27001+42001, Command Center, MCP Server, GB200 benchmarks, AMD SkyPilot).