The depreciation clock CoreWeave’s bulls aren’t watching
A US$66 billion to US$88 billion backlog. A fleet depreciated over six to eight years. The hardware economics may not stretch that far.
The bull case on CoreWeave is not stupid. It is structurally coherent. The company secured early access to NVIDIA GPU supply, converted that access into multi-year reserved-capacity contracts with the largest AI buyers in the world, and is now sitting on a contracted backlog that various published accounts place between US$66 billion and US$88 billion. Customers are signing three- to five-year reservations because they are worried the capacity won’t be available later.
If that is the story, CoreWeave is what its most enthusiastic backers say it is: a foundational compute layer for the AI economy, collecting predictable cash flows on contracted infrastructure.
But the practitioner evidence points to a different underwriting question. A utility’s economics are supposed to survive normalisation. A scarcity trade has to prove that what began as access becomes advantage. And buried inside the CoreWeave bull case is an accounting assumption that nobody seems to be stress-testing: former practitioners describe CoreWeave and comparable cloud infrastructure operators as underwriting GPU fleets on six- to eight-year depreciation assumptions, while former insiders say the hardware physically degrades in three to five years under heavy load. Several practitioners describe reserved CoreWeave clusters in their portfolios running at 92 to 95 per cent utilisation.
That gap is not a rounding error. It is the difference between the margin profile the market is pricing and the one that is actually running.
What has to be true for the bull case to work
Spelling that out matters, because a lazy bear case misses something important.
CoreWeave’s technology advantage is not just GPU access. A former engineer — present at the company’s founding, involved in the IPO process, and who exited in May 2025 — describes a bare-metal Kubernetes architecture that delivered materially better AI workload performance than general-purpose hyperscaler infrastructure, including claims of 8 to 10 times faster performance in selected workloads versus AWS. The former engineer argues that hyperscaler virtualisation layers can impose meaningful overhead, in some cases estimated at 20 to 30 per cent of GPU compute. CoreWeave runs bare metal, giving customers direct hardware access. That advantage doesn’t disappear when GPU supply normalises — it may actually become more visible, because in a supply-normalised market customers compare on performance rather than availability.
The inference transition changes the demand character of the business. Training demand is comparatively episodic and project-based. Inference demand is continuous: every user query, every agent action, every API call is a recurring GPU consumption event. Across practitioner interviews, partners report inference rising to 60 to 70 per cent of CoreWeave deployments year-over-year. One Southeast US consulting executive — overseeing several million dollars annually in CoreWeave spend for manufacturing, media and healthcare clients — describes the inference share of his four-week deployment cycles rising from 22 per cent to 62 per cent year-over-year, alongside 124 per cent year-over-year revenue growth in Q1 2026. Once an autonomous agent is embedded in a client’s diagnostic or manufacturing workflow, the underlying compute becomes operationally invisible. It doesn’t get turned off to save budget.
The Weights & Biases acquisition and nascent software licensing ambitions sketch a path toward margin that is structurally decoupled from GPU supply dynamics. If CoreWeave can charge for orchestration, fleet management and inference optimisation as a software product — at the roughly 20 to 25 per cent margin one practitioner estimates — the economics change materially.
So: real technology advantage, real inference stickiness, real contracted revenue base. The scarcity was the genesis. It is not necessarily the ceiling.
What the practitioner evidence actually shows
Across 11 practitioner interviews conducted between 22 April and 1 May 2026 — covering a DigitalOcean account executive competing with CoreWeave in EMEA, the former CoreWeave engineer, a Nebius partnership director benchmarking competitors, technology consulting partners deploying CoreWeave for enterprise clients, and an AWS senior manager analysing the neocloud model — a consistent description emerges that sits alongside the growth story rather than replacing it.
The DigitalOcean AE puts it cleanly: “There is no differentiator. It’s more a commodity. ‘Hey, you have more stuff. I’m coming to you.’” Customers maintain relationships with three to five providers simultaneously. They reevaluate monthly. For many customers, the first reason to use CoreWeave is availability: it has GPUs when hyperscalers don’t. The product advantage may be real, but availability is doing a large amount of the commercial work.
The consulting channel confirms the dynamic. CoreWeave wins against AWS and Azure primarily because it can provision in six weeks against a five-month hyperscaler lead time. One partner describes that lead-time gap as “effectively closing the deal.” AWS, Google Cloud and Microsoft Azure are each increasing AI infrastructure capex aggressively, with combined announced or guided spend across 2025 and 2026 running into the hundreds of billions of dollars. When their backlogs clear, CoreWeave’s provisioning advantage compresses.
Customer concentration is consistent across interviews. The AWS senior manager estimates roughly two-thirds of CoreWeave’s revenue tied to one or two large providers. Multiple consultants reference the same fragility: if Microsoft — which reportedly considered walking away from selected CoreWeave data centre commitments in 2024 — reduces its commitment at renewal, the downstream revenue impact is immediate and large. The backlog is the moat. It is also a moat with very few gates.
The largest AI buyers — Meta, Microsoft and OpenAI — have used their demand scale to rent flexibility. CoreWeave is selling that optionality while retaining the residual hardware exposure.
The depreciation clock
This is the part of the CoreWeave thesis that has received the least public scrutiny and, based on the interviews reviewed here, carries the most specific operational risk.
The former engineer’s explanation starts with physics: heat cycling. A GPU running at 80 to 100 per cent of capacity continuously generates heat, accumulates wear on fans, wiring and connectors, and degrades at a rate that a machine running at 20 to 30 per cent does not. “Wear and tear will cause it to slow down,” he says. His estimate: three to five years of useful life, possibly closer to three for hardware running at maximum load.
Former practitioners describe the industry — CoreWeave included — as underwriting GPU fleets on six- to eight-year depreciation assumptions. Apply the arithmetic. At US$30,000 per H100 unit — an illustrative figure, not sourced from CoreWeave’s filings — seven-year depreciation gives roughly US$4,300 per GPU annually. Three-year depreciation gives US$10,000. Across a fleet valued at tens of billions of dollars, that is not a marginal difference. The former engineer is direct about the consequence: under three-year depreciation, margins get “tight” and could “break even in the nick of a profit” — or tip negative.
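The arithmetic is simple enough to sketch. The figures below are illustrative placeholders consistent with the numbers above — the unit price is the same assumed US$30,000 per H100, and the fleet size is hypothetical, not drawn from CoreWeave’s filings:

```python
# Illustrative only: unit cost and fleet size are placeholder assumptions,
# not figures from CoreWeave's filings.
UNIT_COST = 30_000      # assumed cost per H100, USD
FLEET_SIZE = 500_000    # hypothetical fleet size, for scale

def annual_depreciation(unit_cost: float, useful_life_years: float) -> float:
    """Straight-line depreciation charge per GPU per year."""
    return unit_cost / useful_life_years

seven_year = annual_depreciation(UNIT_COST, 7)   # roughly US$4,300/year
three_year = annual_depreciation(UNIT_COST, 3)   # US$10,000/year

# The fleet-level gap between the accounting life and the claimed
# physical life, per year.
gap_per_year = (three_year - seven_year) * FLEET_SIZE

print(f"7-year charge per GPU:  ${seven_year:,.0f}")
print(f"3-year charge per GPU:  ${three_year:,.0f}")
print(f"Fleet-level annual gap: ${gap_per_year / 1e9:.1f}bn")
```

On these placeholder inputs, the gap between the two depreciation schedules runs to billions of dollars of annual charge across the fleet — which is the sense in which the difference is not a rounding error.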
But the risk is more than wear and tear. The issue is not simply that GPUs “break” after three years. They probably don’t, at least not all at once. The issue is whether a three- or four-year-old fleet, run at extremely high utilisation, can still command pricing that justifies a six- to eight-year capital recovery assumption. Physical degradation, customer refresh expectations and second-hand collateral value are separate risks, but they all point in the same direction: the economic useful life may be shorter than the accounting life. Revenue-generating life, residual value and accounting life may diverge — and in a contract structure where the hardware serves as collateral for debt facilities, that divergence has a direct balance sheet consequence.
The market has started to notice the accounting issue — Michael Burry has made a version of this argument publicly — but the practitioner interviews add operational texture from someone who was inside the model.
Why these risks don’t arrive sequentially
The dangerous part is not that CoreWeave depends on three assumptions. Infrastructure businesses always depend on assumptions. The dangerous part is that these three assumptions are likely to fail together.
GPU supply normalises. Rental pricing compresses. The largest buyers regain negotiating leverage at renewal. Residual fleet value starts to matter. Depreciation assumptions get tested precisely when the business can least absorb the adjustment.
These aren’t three independent risks to be probability-weighted separately. They are a chain. The scenario that weakens one tends to trigger the next. The former engineer names OpenAI as the pivotal case — if OpenAI begins questioning the economics of its CoreWeave commitments, the depreciation question moves from academic to renegotiation trigger. And OpenAI, unlike Meta or Microsoft, is not a fortress balance sheet. It is a company with its own cash burn and its own strategic incentives to internalise infrastructure over time.
Who owns the duration risk
The asymmetry is structural and worth stating plainly.
The customers with the most strategic leverage have used that leverage to rent flexibility. They get the compute capacity they need, keep the capital obligation off their primary books, retain the renewal decision at contract expiry, and pass the long-dated residual exposure down the stack. CoreWeave is being paid to warehouse duration risk, technology obsolescence risk and rollover risk that its largest customers have consciously chosen not to hold.
This is not a criticism of the model. It is a description of the risk allocation. The question is whether CoreWeave is being compensated adequately for warehousing that exposure — and whether the depreciation accounting correctly captures what it is actually warehousing.
In the stress scenario, the chain is short. GPU prices normalise, a major counterparty reduces renewal commitment, the revenue shortfall hits CoreWeave’s debt service, and the GPU fleet serving as collateral is worth less than the accounting implies because economic useful life is shorter than accounting life. The optionality travelled up the chain. The downside risk travels down.
My read
I wouldn’t short CoreWeave on this alone. It is a real business with real contracts and real technology. The inference stickiness story is genuinely differentiated from the training-era narrative, and the bare-metal architecture advantage is not a scarcity artefact.
But I also wouldn’t underwrite the utility narrative at current valuations without seeing specific answers to a specific question: what do the unit economics look like under three simultaneous resets — a 40 per cent normalisation in GPU rental pricing, three-year economic depreciation applied to the fleet, and a Microsoft renewal at 80 per cent of current commitment?
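A back-of-envelope version of that three-reset test can be written down directly. Every input here is a hypothetical placeholder chosen for illustration, not a CoreWeave financial; the point is the shape of the sensitivity, not the specific numbers:

```python
# Toy stress model. All inputs are hypothetical placeholders,
# not CoreWeave financials.
def annual_margin(revenue: float, price_factor: float, renewal_factor: float,
                  fleet_value: float, dep_years: float,
                  other_costs: float) -> float:
    """Annual operating margin after repricing, a renewal haircut,
    and straight-line depreciation over dep_years."""
    adj_revenue = revenue * price_factor * renewal_factor
    depreciation = fleet_value / dep_years
    return adj_revenue - depreciation - other_costs

# Base case: full pricing, full commitment, 7-year accounting life.
base = annual_margin(revenue=10e9, price_factor=1.0, renewal_factor=1.0,
                     fleet_value=30e9, dep_years=7, other_costs=4e9)

# Stress case: the three simultaneous resets — 40% pricing normalisation,
# renewal at 80% of commitment, 3-year economic depreciation.
stress = annual_margin(revenue=10e9, price_factor=0.6, renewal_factor=0.8,
                       fleet_value=30e9, dep_years=3, other_costs=4e9)

print(f"Base margin:   ${base / 1e9:+.1f}bn")
print(f"Stress margin: ${stress / 1e9:+.1f}bn")
```

On these made-up inputs a modestly positive base case flips deeply negative under the combined resets — which is why the three assumptions have to be tested together, not probability-weighted one at a time.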
That number exists somewhere. It should be the first slide at the next investment committee, not the scenario analysis in the appendix.
The clock is running. That’s the trade.
Sources: This piece draws on 11 practitioner expert interviews conducted between 22 April and 1 May 2026, covering channel partners, technology consultants, a former CoreWeave operator and competitors with direct involvement in CoreWeave deployments. Interview claims are presented as practitioner observations, not confirmed facts. Where multiple independent sources converge, this is noted. Where sources diverge — estimates of year-over-year GPU rental price declines range from 6 to 8 per cent to 40 to 50 per cent, depending on contract vintage and customer tier — the divergence is named rather than averaged. CoreWeave’s backlog figures are drawn from published reporting and practitioner accounts; the company has not confirmed a single authoritative number. The depreciation arithmetic is illustrative, not sourced from CoreWeave’s filings.

