Competitive Intelligence Report

SambaNova Systems: Custom Silicon Analysis

A $5B-peak AI chip startup collapsed to a $1.6B Intel offer. This report examines what SambaNova's trajectory reveals about custom silicon risk, and why it validates the platform's GPU-agnostic strategy.

February 16, 2026
Analyst: MinjAI Agents
For: AI Infrastructure Strategy & Product Leaders
25 Footnoted Sources

Executive Summary

SambaNova Systems is a custom silicon AI startup founded in 2017 by Stanford professors Kunle Olukotun and Christopher Ré, along with former Oracle SVP Rodrigo Liang.[1] The company built the Reconfigurable Dataflow Unit (RDU), a purpose-built AI processor that takes a fundamentally different architectural approach from NVIDIA GPUs.[2] After raising $1.14B and achieving a $5B peak valuation in 2021, SambaNova struggled to convert technical differentiation into sustainable commercial traction.[3]

By late 2025, Intel entered advanced acquisition talks at $1.6B including debt, a 68% decline from peak valuation.[4] Those talks stalled in January 2026, and SambaNova pivoted to raising $350M+ from Vista Equity Partners and Intel in a new Series E round.[5] The company has laid off 15% of its workforce and pivoted from training to inference-only.[6]

Intel Offer (Dec 2025): $1.6B[4]
Peak Valuation (2021): $5.0B[3]
Total Funding Raised: $1.14B[3]
Employees (Post-Layoffs): ~400[6]
Valuation Decline: -68%[4]
Threat Level to the platform: LOW

Strategic Implications

SambaNova is a cautionary tale, not a competitive threat. Their trajectory from $5B to $1.6B validates the platform's core thesis: betting on a single custom silicon architecture creates vendor lock-in risk that enterprise buyers increasingly reject. The platform's GPU-agnostic, multi-chip approach (NVIDIA, alternative silicon) is the correct strategic posture. SambaNova's talent pool and distressed IP represent potential acqui-hire and partnership opportunities rather than competitive risks.

Five Action Items

  1. Monitor the Vista/Intel funding round. If completed, SambaNova survives as a niche inference vendor. If it fails, fire-sale IP and talent become available. Either outcome benefits the platform.
  2. Evaluate SambaNova as a chip partner, not a competitor. The SN40L RDU's 4x energy efficiency advantage[7] could complement the platform's multi-chip inference stack for specific workloads.
  3. Recruit selectively from SambaNova. Post-layoff talent pool includes world-class dataflow architects and compiler engineers. Target 3-5 key hires in chip integration and inference optimization.
  4. Use SambaNova's story in customer conversations. Their struggles illustrate why enterprises need a platform-agnostic inference provider like the platform, not a hardware vendor with a cloud wrapper.
  5. Track the Lip-Bu Tan conflict of interest. Intel's CEO serving as SambaNova's chairman creates governance complexity[8] that may slow both companies' AI strategies.

Company Overview and History

SambaNova was born from Stanford University's Pervasive Parallelism Laboratory, where co-founders Kunle Olukotun (known as the "father of the multi-core processor") and Christopher Ré (a MacArthur "Genius" Award recipient) developed the theoretical foundations for dataflow computing applied to AI workloads.[1] Rodrigo Liang, who had led nearly 1,000 chip designers at Oracle as SVP of Hardware Development, joined as CEO to commercialize the technology.[9]

The company name means "new dance" in Portuguese, reflecting Liang's vision of a fundamentally different approach to AI computation.[9]

Leadership Team

Name | Title | Background
Lip-Bu Tan | Executive Chairman[8] | CEO of Intel; former CEO of Cadence Design Systems; appointed May 2024. Major conflict-of-interest concern.
Rodrigo Liang | Co-Founder & CEO[9] | 20+ years of semiconductor engineering. SVP Hardware at Oracle/Sun Microsystems. Led ~1,000 chip designers.
Kunle Olukotun | Co-Founder & Chief Technologist[1] | Stanford Professor of EE/CS. "Father of the multi-core processor." Director of Stanford Pervasive Parallelism Lab.
Christopher Ré | Co-Founder[1] | Stanford Associate Professor of CS. MacArthur Fellow. Affiliated with Stanford AI Lab.
Amarjit Gill | Board Director[8] | Venture investor.
Catriona Mary Fallon | Board Director[8] | Appointed August 2024.
Governance Red Flag: Lip-Bu Tan Dual Role

Intel CEO Lip-Bu Tan simultaneously serves as SambaNova's Executive Chairman. When Intel pursued a $1.6B acquisition of SambaNova in December 2025, Reuters reported that the deal would have directly boosted Tan's personal fortune.[8] This conflict of interest contributed to regulatory and board scrutiny, and the acquisition talks ultimately stalled. Tan also served on SoftBank's board until 2022; SoftBank is both an Intel investor and SambaNova's largest backer via Vision Fund 2.

Timeline: From Stanford Lab to Existential Crisis

2017
Founded by Olukotun, Ré, and Liang from Stanford's Pervasive Parallelism Lab.[1] Emerged from stealth with a vision for dataflow-based AI accelerators.
Mar 2018
Series A: $56M from Walden International and GV (Google Ventures).[3]
Apr 2019
Series B: $150M. Intel Capital makes first investment.[10]
Feb 2020
Series C: $250M at ~$2.5B valuation. BlackRock enters as investor.[3]
Apr 2021
Peak: Series D at $5B valuation. $676M led by SoftBank Vision Fund 2. Also Temasek, GIC, BlackRock.[3]
Sep 2023
Launched SN40L (4th-gen RDU). Pivot begins from training-focused to inference-focused product strategy.[2]
May 2024
Lip-Bu Tan appointed Executive Chairman.[8]
Sep 2024
Launched SambaNova Cloud (SambaCloud). First cloud-based inference API offering.[11]
Apr 2025
Laid off 77 employees (15% of workforce). Pivoted fully from training to inference.[6]
Dec 2025
Intel enters advanced acquisition talks at ~$1.6B (including debt). Non-binding term sheet signed Dec 9.[4]
Jan 2026
Intel acquisition talks stall. SambaNova pivots to seeking up to $500M in new funding.[23]
Feb 2026
Vista Equity Partners leads $350M+ Series E with Intel investing $100-150M. Round oversubscribed.[5]

Funding History and Valuation Trajectory

Funding Rounds

Round | Date | Amount | Valuation | Lead / Key Investors
Series A[3] | Mar 2018 | $56M | -- | Walden International, GV (Google Ventures)
Series B[10] | Apr 2019 | $150M | -- | Intel Capital, GV
Series C[3] | Feb 2020 | $250M | ~$2.5B | BlackRock, existing investors
Series D[3] | Apr 2021 | $676M | $5.0B | SoftBank Vision Fund 2, Temasek, GIC, BlackRock
Series E[5] | Feb 2026 | $350M+ | TBD | Vista Equity Partners, Intel ($100-150M)
Total raised (incl. Series E): ~$1.5B+

Key Investor Analysis

Investor | Stake | Strategic Interest
SoftBank Vision Fund 2 | Largest (led $676M Series D)[3] | AI hardware portfolio play. Massive unrealized loss at $1.6B vs. $5B entry.
Intel Capital | Early investor (Series B)[10] | Strategic: AI inference accelerator for Intel ecosystem. Lip-Bu Tan connection.
Vista Equity Partners | New lead (Series E)[5] | Enterprise software PE firm. Via partnership with Cambium Capital.
BlackRock | Series C, D participant[3] | Financial investor. Likely underwater on position.
Temasek / GIC | Series D participants[3] | Singapore sovereign wealth funds. AI infrastructure thesis.
GV (Google Ventures) | Series A, B[3] | Strategic hedge. Google also builds its own TPUs internally.
Valuation Destruction: $5B to $1.6B

SambaNova's 68% valuation decline from peak ($5B in April 2021) to Intel's $1.6B offer (December 2025) is one of the largest value destructions among well-funded AI chip startups. For context:

  • $1.14B raised vs. $1.6B offer means investors barely break even before preference stacks and debt.
  • SoftBank Vision Fund 2 led the $676M Series D at $5B. At $1.6B, their position is deeply underwater.
  • The pivot to a $350M+ Series E from Vista suggests the company can survive, but at what valuation remains unclear.
  • Contrast with Cerebras ($7.6B), Groq ($2.8B), and NVIDIA ($3.4T market cap) in the AI accelerator space.

Revenue and Commercial Traction

SambaNova has never publicly disclosed revenue figures. The company's commercial strategy has shifted significantly over its history; the Go-to-Market Evolution section below traces the move from hardware sales to cloud and managed services.

Strategic Implication

SambaNova's lack of disclosed revenue despite $1.14B in funding is a significant signal. It suggests that custom silicon companies face enormous difficulty in building recurring revenue businesses outside of government contracts. The platform's inference-as-a-service model, which is hardware-agnostic and API-first, avoids this trap.


Product Architecture: The SambaNova Stack

SambaNova offers three main products, all powered by the SN40L RDU chip. The product lineup has evolved significantly as the company pivoted from hardware sales to managed services.[11]

Layer 4: Cloud API Services[11]
  • SambaCloud (API inference service)
  • OpenAI-compatible API
  • Free tier (20 req/min, 50 req/day)
  • Developer tier ($5 credits)
  • Enterprise tier (custom)
Layer 3: Managed Services[12]
  • SambaManaged (turnkey data center inference)
  • 90-day deployment promise
  • SambaStudio (on-premise platform)
  • Composition of Experts (CoE) routing
Layer 2: Software Platform
  • SambaNova SDK
  • Dataflow compiler
  • Model optimization pipeline
  • Samba-1 (CoE foundation model)
  • Open model support (Llama, DeepSeek, Qwen)
Layer 1: Custom Silicon
  • SN40L RDU (4th-gen chip)[2]
  • TSMC 5nm process
  • SambaRack SN40L-16 (16 RDUs per rack)
  • Air-cooled, standard 19" form factor
  • 3-tier memory architecture

Product Descriptions

SambaCloud (Launched Sep 2024)

Cloud-based inference API service. Developers access SambaNova's RDU-powered inference via a standard OpenAI-compatible API.[11] Supports DeepSeek R1 671B, Llama 3.x family, Qwen, and other open-source models. Claims to be the fastest inference platform for large models.
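
Because the API is OpenAI-compatible, existing OpenAI client code should work against SambaCloud by swapping the base URL. A minimal sketch using the openai Python SDK; the endpoint URL and model identifier shown here are illustrative assumptions, not confirmed values:

    # Minimal sketch of calling an OpenAI-compatible endpoint such as
    # SambaCloud. Base URL and model ID are assumptions; check the
    # vendor's documentation for current values.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.sambanova.ai/v1",  # assumed endpoint
        api_key="YOUR_SAMBANOVA_API_KEY",
    )

    response = client.chat.completions.create(
        model="Meta-Llama-3.1-405B-Instruct",  # assumed model identifier
        messages=[{"role": "user",
                   "content": "Explain dataflow architectures in one sentence."}],
        max_tokens=128,
    )
    print(response.choices[0].message.content)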

SambaManaged (Launched Jul 2025)

A "turnkey" managed inference cloud for data centers. SambaNova ships, installs, and manages SambaRack hardware in the customer's existing data center.[12] Claims 90-day deployment vs. the typical 18-24 month cycle. Targets enterprises and sovereign governments that need on-premise AI but lack hardware expertise.

SambaStack (On-Premise)

Full-stack enterprise AI platform for customers who want to purchase and manage their own SambaNova hardware. Includes SambaStudio software, model management, and optimization tools. Previously the core product; now de-emphasized in favor of managed offerings.

Composition of Experts (CoE): Key Technical Differentiator

SambaNova's most distinctive software innovation is their Composition of Experts model architecture.[13] Unlike traditional Mixture of Experts (MoE), CoE aggregates multiple small "expert" models (e.g., 150 x 7B models = 1 trillion parameter system) with a router model that directs queries to the optimal expert. The SN40L's three-tier memory enables 15-31x faster model switching vs. GPU-based approaches, because experts can be hot-loaded from DDR to HBM in microseconds rather than milliseconds.
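
The routing concept can be illustrated with a short sketch. This is not SambaNova's implementation; the expert names and the classification heuristic are hypothetical stand-ins for the trained router model:

    # Illustrative Composition of Experts routing: a router classifies each
    # query and dispatches it to one expert from a library of small models.
    EXPERTS = {
        "code": "expert-coder-7b",        # hypothetical expert names
        "legal": "expert-legal-7b",
        "general": "expert-generalist-7b",
    }

    def classify(query: str) -> str:
        """Stand-in for the router model: pick an expert domain."""
        q = query.lower()
        if "code" in q or "function" in q:
            return "code"
        if "contract" in q or "clause" in q:
            return "legal"
        return "general"

    def route(query: str) -> str:
        """Pick the expert for this query. On the SN40L, the chosen
        expert's weights would be hot-loaded from DDR into HBM."""
        return EXPERTS[classify(query)]

    print(route("Write a function for binary search"))  # expert-coder-7b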


SN40L RDU: Technical Deep Dive

The SN40L is SambaNova's fourth-generation Reconfigurable Dataflow Unit, manufactured on TSMC's 5nm process.[2] It represents a fundamentally different architectural approach from GPUs: instead of executing programs as sequences of kernel launches, the RDU maps entire neural network computation graphs directly into hardware as streaming dataflows.[14]
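
The contrast with kernel-based execution can be made concrete with a toy example. The sketch below mimics the programming-model difference only; numpy still materializes temporaries internally, and neither branch represents either vendor's actual runtime:

    import numpy as np

    x = np.random.randn(1024, 512).astype(np.float32)
    W = np.random.randn(512, 256).astype(np.float32)
    b = np.random.randn(256).astype(np.float32)

    # GPU-style: three discrete kernels, two intermediates written to memory.
    t1 = x @ W                     # kernel 1: matmul
    t2 = t1 + b                    # kernel 2: bias add
    y_kernels = np.maximum(t2, 0)  # kernel 3: ReLU

    # Dataflow-style: matmul, bias, and ReLU composed as one fused graph,
    # so intermediates stream between operations instead of round-tripping
    # through global memory.
    y_fused = np.maximum(x @ W + b, 0)

    assert np.allclose(y_kernels, y_fused)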

Core Specifications

Parameter | SN40L RDU | Comparator: NVIDIA H200
Architecture | Streaming dataflow[14] | GPU (CUDA cores)
Process Node | TSMC 5nm[2] | TSMC 4nm
Transistor Count | 102 billion[2] | 80 billion
Compute Units | 1,040 PCUs + 1,040 PMUs[14] | 16,896 CUDA + 528 Tensor Cores
Peak Compute | 638 BF16 TFLOPS[14] | 989 BF16 TFLOPS
On-Chip SRAM | 520 MiB (PMU SRAM)[14] | ~50 MB L2 cache
HBM | 64 GiB HBM[14] | 141 GB HBM3e
DDR DRAM | Up to 1.5 TiB[14] | N/A (system RAM separate)
TDP | ~600W per chip[15] | 700W
Cooling | Air-cooled[15] | Liquid-cooled (typically)

Three-Tier Memory Architecture

The SN40L's most significant architectural innovation is its three-tier memory hierarchy, which directly addresses the "memory wall" problem that limits GPU inference performance for large models.[14]

Tier 1 (On-Chip SRAM): 520 MiB distributed across 1,040 PMUs. Hundreds of TB/s aggregate bandwidth. Lowest latency, highest bandwidth; used for active computation and hot data.
Tier 2 (Co-Packaged HBM): 64 GiB of High Bandwidth Memory co-packaged on the same substrate. Standard location for model weights and KV-cache during inference.
Tier 3 (DDR DRAM): Up to 1.5 TiB of pluggable DDR DIMMs, with over 1 TB/s transfer rate to HBM.[14] Enables holding entire model libraries in memory for near-instant switching. This is the CoE enabler.
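
A quick capacity check, assuming BF16 weights (2 bytes per parameter) and the published memory figures, shows why Tier 3 matters for CoE: a 150-expert library (~2.1 TB) overflows a single chip's DDR but fits comfortably within one rack:

    BYTES_PER_PARAM = 2                    # BF16
    expert_bytes = 7e9 * BYTES_PER_PARAM   # ~14 GB per 7B expert
    ddr_per_chip = 1.5 * 2**40             # 1.5 TiB Tier-3 DDR
    rack_ddr = 16 * ddr_per_chip           # 24 TiB per SambaRack

    print(ddr_per_chip / expert_bytes)     # ~117 experts per chip
    print(rack_ddr / expert_bytes)         # ~1,884 experts per rack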

Dataflow Architecture vs. GPU Execution

SambaNova RDU (Dataflow)

  • Maps entire computation graphs to hardware
  • Data streams between operations on-chip
  • Fuses hundreds of operations into single "kernel"
  • Pipeline, data, and tensor parallelism within chip
  • No kernel launch overhead
  • Compiler handles optimization automatically

NVIDIA GPU (CUDA)

  • Executes discrete kernels sequentially
  • Data moves to/from global memory between ops
  • Manual kernel fusion for optimization
  • Parallelism managed via software (NCCL, etc.)
  • Kernel launch overhead per operation
  • Requires expert CUDA optimization

SambaRack SN40L-16 System

Parameter | Specification
RDUs per Rack | 16 SN40L chips[15]
Aggregate Compute | 10.2 PFLOPS (BF16)[15]
Total HBM | 1,024 GiB (16 x 64 GiB)
Total DDR | Up to 24 TiB (16 x 1.5 TiB)
Interconnect | Peer-to-peer RDU network
Form Factor | Standard 19" rack, air-cooled[15]
Power Consumption | ~10 kW average[15]
Air-Cooling Advantage

Unlike NVIDIA's latest Blackwell systems (GB200 NVL72), which require liquid cooling, SambaNova's SambaRack operates with standard air cooling in a 19" rack.[15] This significantly lowers deployment complexity and cost, and is directly relevant to the platform's air-cooled infrastructure strategy. A single SambaRack consuming ~10 kW can run DeepSeek R1 671B at 198 tokens/sec, whereas comparable GPU systems would require multiple racks and significantly more power.
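
Those two published figures imply a useful energy-per-token estimate. A back-of-envelope calculation (single-stream numbers; real serving economics depend on batching and utilization):

    rack_watts = 10_000       # ~10 kW SambaRack
    tokens_per_sec = 198      # DeepSeek R1 671B claim

    joules_per_token = rack_watts / tokens_per_sec
    print(f"{joules_per_token:.1f} J/token")           # ~50.5 J/token
    kwh_per_million = joules_per_token * 1e6 / 3.6e6
    print(f"{kwh_per_million:.1f} kWh per 1M tokens")  # ~14.0 kWh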


Performance Benchmarks and Technical Claims

SambaNova has made aggressive performance claims, particularly around inference speed and energy efficiency. Many of these are self-reported and should be validated independently. However, the Stanford HAI benchmarks provide third-party validation for some efficiency claims.[7]

Inference Speed Benchmarks

Model | Metric | SambaNova Claim | Context
DeepSeek R1 671B | Tokens/sec[16] | 198-255 tok/s | Full model (not distilled) on 16 SN40L RDUs. "World record" per SambaNova.
Llama 3.1 405B | Tokens/sec[17] | 132 tok/s | First to claim real-time inference on the full 405B model.
Llama 3.3 70B | Tokens/sec | 500+ tok/s | Per SambaCloud benchmarks.
CoE Routing | Model switch time[13] | Microseconds | 15-31x faster than GPU-based model loading.

Energy Efficiency Claims

Metric | SambaNova Claim | Source
Intelligence per Joule | 4x better than Blackwell[7] | Stanford HAI benchmark (third-party)
Low Batch Performance | 9x faster, 5.6x more efficient vs. H200[7] | SambaNova self-reported
High Batch Performance | 4x faster, 2.5x more efficient vs. H200[7] | SambaNova self-reported
Chips Required (DeepSeek R1) | 16 RDUs vs. 300+ GPUs (95% fewer chips)[16] | SambaNova self-reported
Power per Rack | ~10 kW (air-cooled) vs. 40-120 kW (liquid-cooled GPU racks)[15] | Vendor specifications
Benchmark Caveats
  • Self-reported numbers dominate. Most benchmarks are published by SambaNova, not independent third parties. Stanford HAI's "Intelligence per Joule" metric is the notable exception.
  • Batch size matters. SambaNova's advantages are most pronounced at low batch sizes (single-user inference). At high batch sizes (high-throughput serving), GPUs narrow the gap significantly.
  • Token/sec vs. total throughput. Per-prompt speed is impressive, but total tokens served per dollar per hour, the metric that matters for inference-as-a-service economics, is less clear (see the back-of-envelope sketch after this list).
  • Model coverage is limited. SambaNova benchmarks focus on a handful of large models. Broad model coverage (hundreds of models) is not demonstrated.
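
To make the serving-economics caveat concrete, the sketch below computes tokens served per dollar of hourly cost. Every input is a hypothetical placeholder to be replaced with real quotes and measured throughput; none of these figures come from SambaNova:

    def tokens_per_dollar_hour(system_cost_usd: float,
                               amortization_hours: float,
                               power_kw: float,
                               usd_per_kwh: float,
                               aggregate_tokens_per_sec: float) -> float:
        """Tokens served per dollar of hourly cost (capex amortization + power)."""
        hourly_capex = system_cost_usd / amortization_hours
        hourly_power = power_kw * usd_per_kwh
        return aggregate_tokens_per_sec * 3600 / (hourly_capex + hourly_power)

    # Hypothetical: $1M system amortized over 3 years, 10 kW at $0.08/kWh,
    # serving 2,000 tok/s aggregate across concurrent streams.
    print(tokens_per_dollar_hour(1_000_000, 3 * 8760, 10, 0.08, 2000))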

Performance vs. Other Custom Silicon

Company | Chip | Architecture | Key Claim | Status
SambaNova | SN40L RDU | Dataflow | 198 tok/s on DeepSeek R1 671B | Shipping
Cerebras | WSE-3 | Wafer-scale | 1,800 tok/s on Llama 3.1 70B | Shipping
Groq | LPU | TSP dataflow | ~500 tok/s on Llama 70B | Shipping
Etched | Sohu | ASIC (transformer-only) | 500K tok/s on Llama 70B (claimed) | Pre-production
NVIDIA | B200 | GPU | Broad ecosystem, dominant | Shipping

Customer Analysis and Go-to-Market

SambaNova's customer base is concentrated in government/national labs and a small number of enterprise accounts. The company has struggled to achieve broad commercial adoption, a pattern common among custom silicon vendors.

Key Customer Deployments

Customer | Sector | Deployment | Use Case
Argonne National Lab[18] | DOE / Government | SN40L inference cluster (16 RDUs) + legacy SN30 training system | AuroraGPT foundation model for scientific research (biology, chemistry, materials, climate)
Los Alamos National Lab[19] | DOE / Government | Expanded DataScale + SambaNova Suite | Generative AI and LLM workloads for national security research
Lawrence Livermore National Lab | DOE / Government | DataScale system | AI-driven scientific computing
Enterprise customers | Various | Undisclosed | SambaCloud API users (unknown volume)
Customer Concentration Risk

SambaNova's disclosed customers are overwhelmingly U.S. Department of Energy national laboratories. While these are prestigious deployments that validate technical capability, they represent a narrow, grant-funded customer base with limited commercial scalability. The company has not publicly disclosed any Fortune 500 enterprise customers or recurring revenue contracts. This government-lab concentration is a structural weakness.

Go-to-Market Evolution

Phase 1 (2018-2023): Hardware Sales Model

SambaNova sold DataScale appliances directly to customers for $1M+ per system. This required long sales cycles, custom integration, and significant professional services. The model generated lumpy, non-recurring revenue.

Phase 2 (2024-2025): Cloud + Managed Services Pivot

Recognizing the limitations of hardware sales, SambaNova launched SambaCloud and SambaManaged to shift toward recurring revenue. SambaCloud offers free-tier access to attract developers (20 requests/min, 50/day), while SambaManaged targets enterprise data center operators with a 90-day deployment promise.[12]

SambaCloud Developer Funnel

Tier | Access | Rate Limits | Credits
Free | API access, all models | 20 req/min, 50 req/day | None
Developer | API access, all models | 1,000 req/day (with $10 top-up) | $5 starting + $50 bonus
Enterprise | Custom SLAs, dedicated capacity | Custom | Custom pricing

Cloud API Pricing (SambaCloud)

Model | Input (per 1M tokens) | Output (per 1M tokens)
GPT-OSS 120B | $0.22 | $0.59
Llama 3.1 405B[20] | $5.00 | $10.00
DeepSeek R1 671B | Enterprise / Contact Sales | Enterprise / Contact Sales
Llama 3.3 70B | ~$0.60 | ~$0.60
Pricing Analysis

SambaCloud's pricing is competitive but not disruptive. The free tier is useful for developer attraction, but the 50 requests/day limit is restrictive. Enterprise pricing is opaque ("Contact Sales"), which signals that SambaNova is still working out its unit economics. Compare with the platform's potential to offer transparent per-token pricing with guaranteed SLAs.
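
A worked example using the list prices above: a request with 2,000 input tokens and 500 output tokens against Llama 3.1 405B ($5.00 in, $10.00 out per 1M tokens):

    def request_cost(input_tokens, output_tokens, in_per_m, out_per_m):
        """Cost of one request given per-million-token list prices."""
        return input_tokens / 1e6 * in_per_m + output_tokens / 1e6 * out_per_m

    print(f"${request_cost(2_000, 500, 5.00, 10.00):.4f}")  # $0.0150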


Competitive Positioning: Custom Silicon Landscape

SambaNova operates in the custom AI silicon market alongside Cerebras, Groq, Etched, and others. All face the same existential challenge: convincing customers to adopt proprietary hardware in a market where NVIDIA's CUDA ecosystem is the de facto standard.

Custom Silicon Competitive Matrix

Dimension | SambaNova | Cerebras | Groq | Etched
Valuation | ~$1.6B (implied)[4] | $7.6B (pre-IPO) | $2.8B | $3.3B
Chip Gen | 4th (SN40L) | 3rd (WSE-3) | 2nd (LPU) | 1st (Sohu)
Focus | Inference + CoE | Training + inference | Inference speed | Transformer inference
Cooling | Air-cooled | Water-cooled | Air-cooled | TBD
Revenue | Undisclosed | $136M (FY2024) | Undisclosed | Pre-revenue
Key Risk | Valuation collapse | Customer concentration | Limited model support | Unproven hardware
Moat | 3-tier memory, CoE | Wafer-scale, training | Latency speed | Transformer ASIC

Why Custom Silicon Struggles

SambaNova's trajectory illustrates the fundamental challenges facing all custom AI silicon companies:

  1. CUDA ecosystem lock-in. NVIDIA's software ecosystem (CUDA, cuDNN, TensorRT, Triton) has nearly two decades of developer tooling behind it. Rewriting models for RDU/LPU/WSE requires significant engineering effort.
  2. Customer risk aversion. Enterprises fear being locked into a proprietary hardware vendor that may not survive. SambaNova's valuation decline validates this concern.
  3. Rapid GPU improvement. Each NVIDIA generation (H100 to H200 to B200 to GB200) closes the performance gap with custom silicon, reducing the incentive to switch.
  4. Total cost of ownership. Even if chip performance is superior, the total cost of migrating software stacks, retraining teams, and managing a non-standard platform often exceeds hardware savings.
  5. Funding requirements. Custom silicon requires $500M-$1B+ in R&D before generating meaningful revenue. SambaNova has burned through $1.14B+ and is still raising.
The platform's Strategic Advantage

The platform's GPU-agnostic, multi-chip inference platform is positioned to benefit from these dynamics rather than suffer from them. By abstracting the hardware layer and offering customers a single API that works across NVIDIA GPUs and alternative silicon accelerators, the platform eliminates the vendor lock-in fear that constrains every custom silicon vendor. The platform can route workloads to the most cost-effective hardware for each query, delivering the performance benefits of custom silicon without the risk.
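
The routing idea reduces to a small scheduling problem: pick the cheapest backend that fits the model and meets the latency budget. A minimal sketch; backend names, latency figures, and costs are hypothetical placeholders, not measurements:

    from dataclasses import dataclass

    @dataclass
    class Backend:
        name: str
        max_model_b: int         # largest model supported (billions of params)
        latency_ms: float        # typical time-to-first-token
        usd_per_m_tokens: float  # blended serving cost

    BACKENDS = [
        Backend("nvidia-h200-pool", 700, 300.0, 6.0),
        Backend("alt-silicon-pool", 700, 80.0, 4.5),
        Backend("nvidia-l40s-pool", 70, 200.0, 1.0),
    ]

    def route(model_b: int, latency_budget_ms: float) -> Backend:
        """Cheapest backend that fits the model and meets the latency budget."""
        fits = [b for b in BACKENDS
                if b.max_model_b >= model_b and b.latency_ms <= latency_budget_ms]
        if not fits:
            raise ValueError("no backend satisfies the request")
        return min(fits, key=lambda b: b.usd_per_m_tokens)

    print(route(model_b=405, latency_budget_ms=100).name)  # alt-silicon-pool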

Threat Assessment

Threat Vector | Level | Rationale
Direct competition (inference APIs) | LOW | SambaCloud has minimal commercial traction. Developer community is small.
Enterprise deals (SambaManaged) | MEDIUM | SambaManaged's 90-day deployment promise could appeal to sovereign/government customers.
Technology disruption | LOW | RDU is impressive but niche. No evidence of broad model coverage or ecosystem adoption.
Talent acquisition | OPPORTUNITY | Post-layoff engineers with dataflow and compiler expertise are available for recruiting.
Partnership opportunity | OPPORTUNITY | SambaNova hardware could be integrated into the platform's multi-chip inference platform.

Strategic Implications

Scenario Planning: What Happens to SambaNova?

Scenario A: Vista/Intel Round Closes Successfully (Most Likely, ~60%)

SambaNova raises $350M+ and continues as an independent company focused on inference. They become a niche player serving government labs and select enterprises. Strategic implication: Evaluate SambaNova as a chip supplier for the platform's multi-chip inference stack. Their air-cooled, energy-efficient hardware could reduce the platform's cost-per-token for specific workloads. Negotiate favorable supply terms while SambaNova needs commercial customers.

Scenario B: Intel Acquires SambaNova (~25%)

Acquisition talks resume and Intel buys SambaNova for $1.5-2B. RDU technology gets absorbed into Intel's data center chip portfolio. Strategic implication: Monitor closely. If Intel integrates RDU into its Gaudi/Xeon ecosystem, the platform may get access to dataflow inference through Intel's broader product line. Intel as a chip supplier is more reliable than standalone SambaNova.

Scenario C: SambaNova Fails / Fire Sale (~15%)

Funding round fails or company runs out of runway. IP and talent become available at distressed prices. Strategic implication: Prepare an acqui-hire target list of 10-15 SambaNova engineers specializing in dataflow compilation, inference optimization, and chip-software co-design. Budget $2-5M for targeted recruitment. The RDU IP itself is likely too expensive to acquire directly.

Actionable Recommendations

Near-Term (Q1-Q2 2026)

  1. Initiate hardware evaluation. Request a SambaRack SN40L-16 evaluation unit. Benchmark against H100/H200 and alternative silicon on target workloads. Focus on tokens/sec/dollar and tokens/sec/watt metrics.
  2. Begin selective recruiting. Target 3-5 SambaNova engineers who were part of the April 2025 layoffs. Focus on the compiler and inference optimization teams. These skills are directly applicable to the platform's multi-chip abstraction layer.
  3. Customer intelligence. Engage Argonne and Los Alamos National Labs to understand their SambaNova experience. These labs are potential customers for the platform's sovereign inference offering.

Medium-Term (Q3-Q4 2026)

  1. Negotiate chip supply agreement. If evaluation is positive, negotiate a supply agreement for SambaRack units. SambaNova's need for commercial customers gives the platform significant negotiating leverage.
  2. Build multi-chip routing intelligence. Develop the workload routing layer that can dynamically assign inference requests to NVIDIA or alternative silicon hardware based on model size, latency requirements, and cost targets.
  3. Use SambaNova case study in sales. Their struggles validate the platform's pitch: "Don't bet on a single chip vendor. Let us optimize across all of them."

Platform vs. SambaNova: Strategic Comparison

SambaNova's Approach

  • Build custom silicon (RDU)
  • Vertically integrated: chip + software + cloud
  • Single hardware vendor (proprietary)
  • $1B+ R&D investment required
  • Revenue dependent on chip sales
  • Customer lock-in to SambaNova ecosystem
  • Air-cooled advantage but narrow model support

The Platform's Approach

  • Hardware-agnostic inference platform
  • Software layer on top of multiple chips
  • Multi-vendor, multi-chip
  • Lower R&D cost (software, not silicon)
  • Revenue from inference-as-a-service
  • Customer freedom, zero hardware lock-in
  • Air-cooled infrastructure, ultra-low-latency target
The Core Insight

SambaNova spent $1.14B trying to build both the chip and the platform. The platform can achieve better outcomes by building only the platform and leveraging multiple chip vendors (including potentially SambaNova) for the hardware layer. This is architecturally analogous to how AWS succeeded by abstracting hardware, while hardware-only companies struggled. SambaNova's distress creates an opportunity for the platform to access world-class inference silicon at favorable terms.


Risk Factors and Final Assessment

SambaNova's Key Risk Factors

Risk | Severity | Description
Valuation Death Spiral | HIGH | 68% decline from peak. Down rounds erode employee retention and customer confidence.
Revenue Opacity | HIGH | No disclosed revenue after $1.14B raised. Likely burning $100M+/year on chip R&D and operations.
Customer Concentration | HIGH | Visible customers are almost exclusively DOE national labs. Limited commercial enterprise traction.
Governance Conflict | MEDIUM | Intel CEO as Executive Chairman creates acquisition conflicts and regulatory scrutiny.[8]
Talent Retention | MEDIUM | 15% layoffs plus the valuation decline make it difficult to retain top engineers. Stock options likely underwater.
NVIDIA Roadmap | MEDIUM | Each NVIDIA generation (Blackwell, Rubin) narrows the performance gap with custom silicon.
Model Ecosystem | MEDIUM | Limited to open-source models (Llama, DeepSeek, Qwen). No proprietary model partnerships disclosed.
Manufacturing Risk | LOW | TSMC 5nm is mature and reliable. No known manufacturing issues.

SambaNova's Key Strengths (To Monitor)

Strength | Relevance to the Platform
Energy efficiency (4x vs. Blackwell per Stanford HAI) | Directly relevant to the platform's cost optimization. Lower power = lower inference cost.
Air-cooled deployment | Aligns with the platform's air-cooled infrastructure strategy.
Three-tier memory for large models | Enables running 671B models on 16 chips. The platform could leverage this for large-model workloads.
Composition of Experts (CoE) | Novel routing architecture that could inform the platform's multi-model inference routing.
Government/lab relationships | The platform could partner or compete for sovereign inference contracts.
World-class technical team | Recruiting opportunity, especially among post-layoff engineers.

Final Threat Assessment

Overall Threat Level: LOW

SambaNova is not a competitive threat to the platform. It is a potential chip supplier and talent pool. The company's trajectory from $5B peak to the $1.6B Intel offer validates every assumption underlying the platform's GPU-agnostic inference strategy:

  1. Custom silicon alone is not a business. You need the software platform, the go-to-market, and the customer relationships. SambaNova has the chip but lacks the rest.
  2. Enterprises reject single-vendor lock-in. SambaNova's struggle to win commercial customers proves that enterprises want optionality, which is exactly what the platform provides.
  3. The real value is in the inference platform, not the hardware. A multi-chip abstraction layer can deliver SambaNova-level performance benefits without SambaNova-level risk.

Bottom line: Watch SambaNova as a potential partner, recruit from their talent pool, and use their cautionary tale in every customer conversation about why the platform's approach is superior.

Sources and References

  [1] University of Michigan CSE, "SambaNova, founded by alumnus Kunle Olukotun, emerges from stealth mode," cse.engin.umich.edu
  [2] SambaNova Systems, "SN40L RDU: Next-Gen AI Chip for Inference at Scale," sambanova.ai/products/sn40l-rdu-ai-chip
  [3] Tracxn, "SambaNova Systems: Funding Rounds & List of Investors," tracxn.com
  [4] EE Times, "Intel Eyeing AI Catchup in Inference with SambaNova Acquisition," Dec 2025, eetimes.com
  [5] Reuters via Yahoo Finance, "Vista Equity Partners and Intel to lead investment in AI chip startup SambaNova," Feb 2026, finance.yahoo.com
  [6] Data Center Dynamics, "SambaNova lays off 77 employees as company pivots from training to inference," Apr 2025, datacenterdynamics.com
  [7] SambaNova Systems, "Intelligence per Joule: The New Metric for True AI Value and Efficiency," sambanova.ai/blog/best-intelligence-per-joule
  [8] TrendForce, "Intel Reportedly Eyes AI Startup SambaNova, Where Lip-Bu Tan Serves as Executive Chairman," Oct 2025, trendforce.com
  [9] Clay, "Who is the CEO of SambaNova Systems? Rodrigo Liang's Bio," clay.com
  [10] SambaNova Systems, "$150M Raised for Series B Funding," sambanova.ai/press
  [11] SambaNova Systems, "SambaCloud: Full-Stack AI Platform for Large Open-Source Models," sambanova.ai/products/sambacloud
  [12] SambaNova Systems, "Introducing SambaManaged: A Turnkey Path to AI for Data Centers," Jul 2025, sambanova.ai/blog
  [13] VentureBeat, "SambaNova debuts 1 trillion parameter Composition of Experts model for enterprise gen AI," venturebeat.com
  [14] arXiv, "SambaNova SN40L: Scaling the AI Memory Wall with Dataflow and Composition of Experts," arxiv.org/html/2405.07518v1
  [15] ServeTheHome, "SambaNova SN40L RDU for Trillion Parameter AI Models," servethehome.com
  [16] TechRadar, "SambaNova hits 198 tokens per second on full DeepSeek-R1 671B with only 16 SN40L RDU chips," techradar.com
  [17] HostingJournalist, "SambaNova AI Platform Runs Llama 3.1 405B at 132 Tokens/Second," hostingjournalist.com
  [18] BusinessWire, "Argonne National Laboratory Deploys a New SambaNova Inference-Optimized Cluster," Nov 2024, businesswire.com
  [19] Data Center Dynamics, "Los Alamos National Laboratory expands SambaNova deployment," datacenterdynamics.com
  [20] eesel AI, "Understanding SambaNova Cloud pricing in 2025: A complete guide," eesel.ai/blog/sambanova-cloud-pricing
  [21] SambaNova Systems, "SambaNova 2.0: Build with Relentless Intelligence," sambanova.ai/blog
  [22] Medium (Frank Wang), "Comparing AI Hardware Architectures: SambaNova, Groq, Cerebras vs. Nvidia GPUs & Broadcom ASICs," medium.com
  [23] Bloomberg, "SambaNova Seeks Up to $500M Funding After Intel Takeover Talks Stall," Jan 2026, bloomberg.com
  [24] SiliconANGLE, "Report: SambaNova is raising $350M+ from Intel-backed consortium," Feb 2026, siliconangle.com
  [25] IEEE Xplore, "SambaNova SN40L: A 5nm 2.5D Dataflow Accelerator with Three Memory Tiers for Trillion Parameter AI," ieeexplore.ieee.org