Ten Conversations That Explain the Current State of Play in Power Markets
This collection of notes explores the “time-to-power” bottleneck as it shows up in real-world settings. First, it offers a broad reality check on firm power and fuel logistics. Then it examines how hyperscalers choose and secure sites: powered land, substations, and water. Next, it looks at how they finance and de-risk construction through procurement, phased delivery, incentives, and community engagement. It then turns to key supply-side constraints (EPC execution capacity, behind-the-meter (BTM) pathways, and the continued centrality of IPPs). Finally, it evaluates optional levers and long-term strategies: VPP flexibility, nuclear partnerships, and the fragility of renewables platforms.
10 key data points:
Powered land pricing: powered sites with a credible path to power (late-decade delivery) trading around $400k–$500k per acre (broker perspective).
Substation premium on land: having an on-site substation can push powered-land pricing to roughly $700k–$800k per acre.
Greenfield substation scarcity: <10% of greenfield plots already have substations (i.e., most “powered land” is really “substation-to-be-built land”).
Pipeline scale vs deliverability gap: a broad pipeline described as 100+ GW of prospective data center capacity, but only a much smaller slice credibly deliverable near-term (the “paper MW vs real MW” gap).
Near-term deliverability estimate: a practitioner/broker view that ~10–15 GW might be able to come online by ~2027 (framed as aggressive / “obscene” scale relative to infrastructure reality).
Water as a binding constraint: some AI/high-density configurations described as needing ~2–4 million gallons/day of water (cooling-driven), making water availability co-equal with power in site selection.
Thermal EPC margin regime shift: thermal EPC margins discussed as having moved from historical mid-to-high teens into the ~20% range (with upside as contractors price schedule risk/scarcity).
EPC execution bandwidth ceiling: one EPC leader’s practical limit of ~4 concurrent complex CCGT projects before leadership/quality dilution becomes a schedule risk.
Contracting structure shift: observable movement away from strict fixed-price turnkey EPC (with hard LDs) toward cost-plus / T&M / more owner risk because supply chain + schedule risk is difficult to underwrite.
VPP scaling bottleneck: repeated emphasis that VPP scale is gated less by dispatch algorithms and more by meter-data access + measurement-and-verification (M&V)/settlement plumbing (with some program rules limiting value capture, e.g., baseline constraints like “can’t monetize below zero”).
List of discussions:
Macro reality / firm power urgency (oilfield-services → consultant)
Hyperscaler development + powered land (AWS/Mace → Google)
Powered land market + substations/water pricing (AT&T → Telx/DRT → Equinix → broker)
Hyperscaler finance / de-risking (Amazon/AWS → Meta)
Supply-side execution: EPC capacity ceiling (Gemma → Kindle)
BTM gas as default pathway (broker view)
Why hyperscalers partner with IPPs (NRG)
VPPs as flexibility, constrained by meter data (Sunnova → SunStrong)
Nuclear as a hedge (Meta policy → Kekst)
Solar platform distress / capital structure (GE Ventures / BlueMountain → XIP)
Conversation 1 — NEE / gas & nuclear “urgency”: the operator’s critique (Person C: formerly Schlumberger and Weatherford; also EcoVapor)
This call had the tone of someone who has seen multiple cycles and is no longer impressed by slogans. I spoke with Person C (an executive oil-and-gas consultant; previously in VP roles at Schlumberger; later VP global account roles at Weatherford; also a performance advisor at EcoVapor) about what data-center-driven demand growth means for the generation mix—and what the energy sector is getting wrong.
Person C’s thesis was simple and confrontational: utilities and policymakers are underestimating the speed and firmness of incremental load from AI/data centers, while simultaneously overestimating what intermittent renewables can do without massive firming and infrastructure investment. This wasn’t a generic “renewables bad” rant. It was a timing argument and a reliability argument. In their view, the system can absolutely decarbonize over time—but if the load arrives faster than the grid can build, the market reaches for what can be delivered.
We spent time on natural gas infrastructure and LNG because Person C treats molecules as the constraint that sneaks up on everyone. Their view is that the U.S. risks losing global market share if it doesn’t accelerate LNG buildout materially by decade-end. The precise number matters less than the framing: midstream capacity and the gas supply chain (pipelines, compression, storage, contracting) are gating items for both domestic power reliability and global energy positioning. If you want to run a “reliability economy,” you can’t do it on vibes. You need pipes, steel, crews, permits, and years of coordination.
On the generation mix, Person C argued that gas and nuclear are the only realistic near- to mid-term firm solutions at scale if the goal is predictable, dispatchable output. They see the sector drifting into a collision between (i) hyperscalers with urgent timelines and giant checkbooks and (ii) utility planning cycles that are inherently slower and governed by regulatory prudence standards. Their critique isn’t that utilities are irrational; it’s that the institutional design of regulated utility planning is not optimized for “AI race” urgency.
That led into the competitive dynamic: Person C expects hyperscalers to invest more directly in power solutions—through partnerships, dedicated build-to-suit plants, or other structures—because the opportunity cost of waiting is existential in an AI race. In that world, the traditional utility model of “we’ll study it, then we’ll plan it” starts to look like a liability. The market doesn’t wait politely for IRP cycles.
One pragmatic recommendation was geographic: build data centers close to gas sources to reduce infrastructure costs and compress timelines. This isn’t a romantic vision of place-based development; it’s a supply chain calculation. The farther a site is from firm fuel and a credible interconnection path, the more “miracles” you need to happen on schedule. Person C’s view is that the winners will be those who minimize contested dependencies.
We also talked about the investment lens. Person C said they’d rather invest in the AI hardware ecosystem than utilities or energy producers—not because utilities are doomed, but because utilities may be structurally slow to monetize upside under regulated constraints. If load growth is explosive, the companies with pricing power and faster cycle times can capture more of the value.
Further notes (where I agree and where I’d push back): Person C’s “firm power or bust” framing is directionally consistent with what you hear from large-load developers under schedule pressure. The market’s decision function often becomes: “what can we build and interconnect fastest, with controllable output?” That is why gas is having a resurgence in large-load conversations.
But the more interesting implication isn’t simply “gas demand rises.” It’s that the value chain shifts upstream into midstream and equipment. If big load growth accelerates, you get competition for molecules between LNG, power, and industry—elevating the importance of basis risk, pipeline bottlenecks, storage, and contracting structures. Person C’s “put the data centers next to the gas” heuristic is really a supply-chain thesis: minimize contested infrastructure.
We also spoke about policy risk. Person C expects political whiplash on clean-energy incentives and argued that capital should not underwrite projects that only work if subsidies remain pristine. The broader lesson is reasonable: price policy risk explicitly.
On nuclear, Person C was more focused on end state than execution pathway. They see nuclear as one of the few ways to get large blocks of firm low-carbon power, but they didn’t dwell on FOAK risk, licensing timelines, supply chains, or construction execution. I’d translate that into a portfolio logic: nuclear partnerships are a hedge against a future where (a) gas becomes constrained or expensive and (b) grid expansion remains slow.
The bottom line: if hyperscalers treat power as a strategic input, they will behave like strategic buyers—pay to de-risk timelines, diversify supply, and accept unconventional geographies. That’s the bridge between Person C’s worldview and what you see in actual site selection today.
Conversation 2 — GOOGL / hyperscaler development in EMEA: the powered-land premium (Person D: formerly AWS and Mace; moving to Google)
I spoke with Person D (formerly Senior Technical Infrastructure Program Manager – EMEA at Amazon Web Services; previously a Project Director at Mace; transitioning into a Google role) about how hyperscalers actually develop data centers in EMEA and why power availability has become the defining bottleneck.
Person D framed the development process as a multi-team relay: site selection, grid connectivity workstreams, design finalization, contracting, and execution. It’s not that any single step is impossible; it’s that each step has its own queue, stakeholders, and failure modes. One missed handoff and you don’t slip a week—you slip a season.
The core message was simple: in many European markets, power availability is the constraint, not land. “Powered land” is a strategic asset because it bundles the hardest-to-secure elements—interconnection and capacity—into a tradable asset. That creates a premium market for sites where high-voltage infrastructure is already in place or plausibly deliverable.
Person D emphasized that time-to-power is often determined by long-lead equipment and utility process, not by how fast you can pour concrete. If you already have major equipment (especially high-voltage transformers) effectively on-site—or at least contracted into the schedule—you can compress the critical path. If you don’t, your schedule is mostly a forecast of other people’s lead times.
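To make the “forecast of other people’s lead times” point concrete, here is a toy critical-path sketch; every duration below is a placeholder I’ve assumed for illustration, not a figure from the call:

```python
# Toy critical-path model: time-to-power is gated by the slowest parallel
# workstream, not by how fast you pour concrete. Durations are hypothetical.

workstreams_months = {
    "civil_and_shell": 14,           # site works and building
    "utility_interconnection": 30,   # studies, upgrades, energization
    "hv_transformer_delivery": 36,   # long-lead equipment queue
    "design_and_permits": 12,
}

gating = max(workstreams_months, key=workstreams_months.get)
print(f"Time-to-power ~{workstreams_months[gating]} months, gated by {gating}")

# Securing the transformer up front (or buying land where one exists)
# removes the longest bar, which is why "powered land" contains time.
secured = {**workstreams_months, "hv_transformer_delivery": 6}
print(f"With equipment secured: ~{max(secured.values())} months")
```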
We discussed brownfield conversions, which are increasingly attractive because the “infrastructure bones” can be better: grid proximity, permitting histories, sometimes existing substations. But brownfield comes with its own headaches: legacy constraints, environmental issues, and mismatch between existing infrastructure and modern AI density requirements. The implied tradeoff is predictable: you can buy time, but you inherit constraints you didn’t choose.
A particularly useful lens from Person D was design stability as a schedule weapon. In high-density AI builds, designs keep evolving—cooling architectures, electrical redundancy assumptions, rack power density. Person D argued that winners are teams who can lock a design enough to procure long-lead items without constantly re-opening the engineering package. Your schedule is a function of your ability to decide and then live with the decision.
We also discussed how European permitting and community engagement can be as constraining as interconnection. Pre-entitled or grid-proximate sites become valuable because they reduce unbounded risk: fewer unknowns, fewer stakeholders you need to convince, fewer “surprise” constraints.
Person D highlighted an emerging tension: hyperscalers want optionality (multiple sites, multiple paths to power) while utilities and grid operators want commitment (so they can plan investments). That mismatch can slow development unless new commercial constructs emerge—such as capacity reservations, staged commitments, or clearer pathways to “hold” interconnection rights without fully committing to build.
Further notes (why this is bigger than EMEA): The EMEA story is a preview of what happens when grid scarcity becomes structural. When power is scarce, the market starts to price “certainty” explicitly: sites with credible interconnection and equipment pathways command premiums because they contain time.
The other underappreciated point is that hyperscalers have to run a portfolio of uncertainty. A single site doesn’t need to be perfect; it needs to be one of several credible options. In that context, “powered land” becomes less a real estate concept and more a risk management asset.
The takeaway: if you want to understand hyperscaler growth in EMEA, stop thinking like a property investor and start thinking like a grid planner. The scarce inputs are electrons and schedule certainty.
Conversation 3 — CRWV / powered land market: substations, water, and price discovery (Person E: formerly AT&T, Telx/Digital Realty, Equinix; now AP Advisory / Avison Young)
I spoke with Person E (a long-time data center solutions broker: started at AT&T; later involved via the Telx acquisition into Digital Realty; then worked at Equinix; now CEO/Founder of AP Advisory Group and a national solutions broker at Avison Young) about what the powered-land market looks like when you take the marketing gloss off.
Person E opened with a career-flex: selling Google its first data center “to the founders” back in the day. The point wasn’t nostalgia—it was to establish that they’ve watched the industry evolve from “find land” to “find power.” In their view, we’ve crossed a threshold where the limiting factor isn’t acreage—it’s the timeline to deliver credible megawatts.
The conversation quickly turned to pricing, and Person E treated pricing as a translation of time-to-power. They claimed powered land with a plausible path to power in 2028–2029 can trade around $400k–$500k per acre, and that having a substation on-site can move that into the $700k–$800k per acre range. The premium is not mystical; it’s monetization of certainty. Buyers pay for fewer miracles.
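A quick back-of-envelope on what that premium buys, using the per-acre ranges quoted; the campus size and the framing of schedule value are my own illustrative assumptions:

```python
# Substation premium as a price on certainty. Per-acre figures are the
# midpoints of the quoted ranges; acreage is a hypothetical campus.

acres = 100
powered_land = 450_000        # $/acre, mid of the $400k-$500k range
with_substation = 750_000     # $/acre, mid of the $700k-$800k range

premium = (with_substation - powered_land) * acres
print(f"On-site substation premium: ${premium / 1e6:.0f}M")  # ~$30M

# If an existing substation removes a year-plus of build/permit risk, ~$30M
# is cheap relative to the revenue at stake from delayed AI capacity.
```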
Then came the key constraint: substations on greenfield sites are rare. Person E’s estimate was that under 10% of greenfield plots already have substations, and that even when brownfield sites have one, it’s often undersized and requires upgrades. Translation: most of this market is a “build the substation” market, not a “find the land” market.
We talked about greenfield vs brownfield. Person E’s rough split was ~80% greenfield / ~20% brownfield, including crypto conversions. But they were clear that conversions are not a cheat code. A mining site’s electrical infrastructure isn’t automatically suitable for hyperscale AI builds, and the cooling + redundancy assumptions differ.
Person E painted the pipeline as huge—“well north of 100 GW” in some framing—with only a fraction realistically deliverable by 2027. The behavioral takeaway was more important than the headline number: hyperscalers are “deathly afraid” of losing options. In a scarcity market, optionality has value. So buyers want to see everything—every parcel, every path-to-power scenario—because the cost of missing the option is higher than the cost of reviewing it.
The most interesting segment was on the utility negotiation model. Person E suggested utilities may be willing to share substation costs if upgrades benefit broader load—sometimes implying a 50/50 logic. The data center can become a catalyst for grid modernization, but only if the utility can justify the investment to regulators and ratepayers.
Then we got to the second constraint joining power as a first-order variable: water. Person E described some hyperscaler/GPU configurations reaching “two to four million gallons per day,” making water access a real site-selection driver alongside power.
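As a sanity check on that range, here is a rough conversion using a commonly cited order-of-magnitude intensity for evaporative cooling (around a liter of water per kWh); the intensity and campus size are my assumptions, not figures from the call:

```python
# Back-of-envelope: does 2-4 million gallons/day square with a large AI
# campus? Assumes ~1 L/kWh evaporative consumption (rough rule of thumb).

liters_per_kwh = 1.0
campus_mw = 400               # hypothetical high-density campus
liters_per_gallon = 3.785

kwh_per_day = campus_mw * 1_000 * 24
mgd = kwh_per_day * liters_per_kwh / liters_per_gallon / 1e6
print(f"~{mgd:.1f} million gallons/day")  # ~2.5 MGD, inside the quoted range
```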
Further notes (why brokers suddenly matter again): Brokers like Person E sit at the intersection of three markets that don’t naturally speak the same language: (i) land and real estate, (ii) utility interconnection and substation buildout, and (iii) hyperscaler demand planning. Their “pricing” essentially serves as a translation layer between those worlds.
We went deeper into what “substation premium” really means. It’s not just “a substation exists.” It’s whether it’s sized, configured, permitted, and expandable enough to meet modern redundancy and power-quality requirements. Many legacy substations are a starting point, not a solution.
We also discussed how the market can look “overbuilt” on paper. If hyperscalers hoard options, the pipeline appears enormous—even if only a slice ever becomes real. This matters for investors: don’t confuse option inventory with delivered capacity.
Finally, the water point is the one that people treat like an afterthought until it isn’t. If cooling strategies remain water-intensive for some high-density builds, then the scarce asset becomes land with both electrons and cooling credibility.
Conversation 4 — CRWV / hyperscaler finance: the real bottlenecks (Person F: formerly Amazon sustainability finance and AWS; also Robinhood)
I spoke with Person F (Head of Data Center Infrastructure & Sustainability Finance at Meta; formerly Director of Worldwide Sustainability Finance at Amazon; previously held finance leadership roles at AWS and served as a Senior Director/CFO for Robinhood Money) about how hyperscalers make infrastructure decisions when constraints are ugly.
Person F described a supply-planning loop that starts with business unit demand forecasts and flows into capacity planning, procurement, and site investment. In a supply-constrained world, finance isn’t just budgeting—it’s triage and orchestration. The role starts to look like a supply-chain command center.
The dominant theme was long lead-time constraints: turbines, UPS systems, transformers, and skilled labor. Person F described proactive tactics like pre-booking equipment and, in remote builds, building on-site labor camps. That’s the part that rarely makes it into the glossy narrative: speed-to-power isn’t a slogan; it’s a procurement and workforce strategy.
We discussed retrofits vs new builds. Crypto-site conversions can be tempting because “power exists,” but the limiting factors are typically cooling design, network connectivity, and scalability. Person F implied that purpose-fit builds—or heavily engineered retrofits—tend to win when you’re targeting high-density AI loads.
Network requirements came up in a nuanced way. Person F separated training workloads (big centralized clusters) from inference workloads (smaller, latency-sensitive deployments). Fiber proximity and reliability matter for inference. Training can tolerate more centralization but demands huge contiguous power blocks.
Community and incentives are now first-order variables. Person F emphasized that community perception, interconnection status, and tax incentives materially shape site selection. Hyperscalers partner with utilities and local governments not because they love politics, but because the alternative is waiting.
Further notes (finance as constraint optimizer): We talked more about pre-buying and capacity reservations. Ordering long-lead equipment early can feel like inventory risk, but the opportunity cost of waiting is often higher. That pushes hyperscalers into industrial megaproject behaviors: reserving factory slots, diversifying suppliers, and using repeatable templates to reduce redesign.
A second underappreciated point is phased buildouts. AI demand doesn’t always require the full gigawatt on day one; it requires a portion fast, then scaling. That turns the problem into sequencing: align early-phase power (more expensive/modular) with later-phase power (cheaper/slower). Finance ends up underwriting option value and staging economics.
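A minimal sketch of that staging logic, with every price and timeline assumed for illustration:

```python
# Phase 1 runs on fast-but-expensive bridge power (e.g., BTM/modular) while
# cheaper grid supply catches up. All inputs are hypothetical.

bridge_cost, grid_cost = 120, 60   # $/MWh
phase1_mw = 150
years_on_bridge = 3
hours_per_year = 8760

premium = (bridge_cost - grid_cost) * phase1_mw * hours_per_year * years_on_bridge
print(f"Premium paid for early power: ${premium / 1e6:.0f}M")  # ~$237M

# Finance's real question: is that premium smaller than the value of having
# 150 MW of AI capacity live three years sooner? Often, yes.
```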
We also touched on what makes retrofits hard in practice. Redundancy, power quality, cooling architecture, and fiber can break the deal even if there’s “power.” In short: electrons on paper are not the same as electrons that fit the workload.
The takeaway: hyperscalers increasingly treat power as a strategic resource, which changes capital allocation, partnership structures, and the willingness to pay for certainty.
Conversation 5 — AGX / Argan: the EPC capacity ceiling (Person A: formerly Gemma Power Systems; now Kindle Energy)
Some conversations feel like a spreadsheet wearing a hard hat. This was one of those. I spoke with Person A (formerly President at Gemma Power Systems; now Managing Director of Engineering, Procurement & Construction at Kindle Energy) about the power EPC market, specifically the part the market keeps trying to ignore: you can’t “scale” experienced teams like you scale cloud compute.
Person A’s central claim was blunt: demand is real, margins have moved up, but execution capacity is the binding constraint. The market wants to treat EPC like a widget—add headcount, add megawatts. In practice, complex thermal projects are a craft industry with a fragile core team that can only stretch so far before it tears.
We started with contract structures. In Person A’s view, fixed-price, turnkey EPCs with meaningful LDs are becoming rarer. With commodity volatility, supply uncertainty, and schedule risk stacked, contractors push toward cost-plus or time-and-materials structures. The buyer wants certainty. The seller wants survivable risk. Penalties don’t make transformers arrive faster.
That shift matters because it changes how investors interpret backlog. A large backlog isn’t automatically a profit machine if contract terms quietly transfer risk back to owners or embed huge contingencies. Person A described margins moving from historical mid-to-high teens toward ~20% in parts of the market. But that’s a risk premium, not a miracle.
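To illustrate why the uplift reads as a risk premium rather than a windfall, here is a toy calculation layered on the margin ranges from the call; the contract value and contingency level are assumptions:

```python
# Headline vs realized EPC margin when embedded contingency gets consumed.
# Contract value and contingency burn are hypothetical.

contract_value = 800e6        # assumed CCGT EPC contract
headline_margin = 0.20        # priced margin, up from mid-to-high teens
contingency_burn = 0.06       # schedule slip, late equipment, re-baselining

planned = contract_value * headline_margin
realized = contract_value * (headline_margin - contingency_burn)
print(f"Planned ${planned / 1e6:.0f}M vs realized ${realized / 1e6:.0f}M")

# One late transformer can eat most of the uplift, which is why backlog
# quality depends on contract terms, not backlog size.
```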
Then we hit the question that matters most: how many big projects can you truly run at once? Person A’s practical ceiling for complex combined-cycle work was around four concurrent projects before you start diluting specialized leadership—construction management, commissioning, controls integration. You can staff with bodies; you can’t staff with judgment.
We talked about owner-led EPC and why it’s rising. If you can’t buy certainty from the EPC market, you try to build internal capability. But the constraint remains: experienced teams and vendor slots are scarce. Owning the headache doesn’t remove it.
Solar EPC came up as a contrast. Person A described solar as commoditized and structurally lower margin. Thermal remains a place where execution discipline commands a premium.
Further notes (why this matters for the AI power buildout): The market is pricing schedule certainty more than capex efficiency. That drives earlier procurement (turbines, transformers, switchgear) and changes who wins bids: credible execution teams beat the lowest price.
We also dug into the labor hierarchy. A strong construction manager can rescue a schedule; a weak one can sink it even with ample craft labor. Commissioning leadership and controls integration are similarly non-fungible. That’s why “capacity” is not a single number—it’s how many core leaders you can field.
A final point: templates and repeat-build strategies reduce variance, but they don’t eliminate the bottleneck. The projects still live or die by the experience of their teams. The scarcity is competence.
Conversation 6 — WILLIAMS / behind-the-meter power: when gas becomes the default (Person E again: AT&T → Telx/Digital Realty → Equinix → AP Advisory / Avison Young)
This conversation framed behind-the-meter (BTM) power as moving from “workaround” to “default architecture.” The speaker was Person E, the same broker/advisor (AT&T → Telx/Digital Realty → Equinix → AP Advisory / Avison Young).
Person E’s thesis was direct: tier-one markets are out of grid capacity, and interconnection timelines are stretching. When hyperscalers care about speed-to-power more than elegance, they will use BTM solutions—primarily natural gas and increasingly fuel cells—because those solutions allow the buyer to control more of the schedule.
We talked about how BTM shows up operationally. It can mean dedicated onsite generation serving the campus through a private wire arrangement; it can mean a co-located plant with contractual offtake; it can mean staged modular deployment where the first block arrives fast and later blocks follow. The common theme is schedule control: the buyer is no longer fully at the mercy of the utility’s upgrade timeline.
Geography shifts in this world. Person E offered a map of attractive states—Texas, Oklahoma, Arkansas, Alabama, Kentucky, Missouri, New Mexico, Utah, and parts of the Mountain West—where site selection is influenced by access to power, buildable sites, and natural gas pipelines. It’s not “cheap land”; it’s “fuel adjacency plus buildability.”
We discussed the vendor ecosystem: pipelines and midstream players (Williams as archetype), thermal developers and IPPs (NRG, Calpine, NextEra mentioned in the broader ecosystem), and fuel-cell providers like Bloom Energy. Person E sees fuel cells as increasingly credible for modular deployments where speed and footprint matter.
A key nuance: BTM does not eliminate grid interaction. Campuses still need interconnection for backup, export possibilities, or reliability redundancy. But BTM can change the scale and timing of what the grid must deliver.
We also talked about what can kill BTM projects. Permitting is one. Emissions and community opposition can become real constraints. Equipment lead times and EPC bottlenecks still apply. And then there’s the subtle killer: fuel price and basis risk. If you’re underwriting a gas-centric solution, you’re underwriting the molecule and the pipe, not just the generator.
Further notes (BTM as a new normal): The reason BTM is becoming normal isn’t ideological; it’s arithmetic. AI demand is time-sensitive. If the grid cannot deliver on timeline, buyers will accept less “perfect” solutions.
BTM gas is attractive because it’s controllable, scalable, and can be paired with different offtake structures. A developer can build a plant and sell power under a long-term agreement, or structure a tolling agreement in which the buyer secures capacity while an operator runs the asset. Depending on jurisdiction, there may also be opportunities to provide grid services.
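A minimal sketch of tolling economics under common structure assumptions (a fixed capacity charge plus fuel conversion at a contracted heat rate); every parameter below is hypothetical:

```python
# Tolling sketch: the buyer pays a fixed capacity charge and effectively
# converts its own gas at a contracted heat rate plus variable O&M, so the
# buyer holds fuel risk while the operator runs the plant.

def monthly_toll_cost(capacity_mw, cap_charge_per_kw_month, mwh_dispatched,
                      heat_rate_mmbtu_per_mwh, gas_price_per_mmbtu, vom_per_mwh):
    fixed = capacity_mw * 1_000 * cap_charge_per_kw_month
    fuel = mwh_dispatched * heat_rate_mmbtu_per_mwh * gas_price_per_mmbtu
    return fixed + fuel + mwh_dispatched * vom_per_mwh

cost = monthly_toll_cost(capacity_mw=300, cap_charge_per_kw_month=12,
                         mwh_dispatched=180_000,  # ~80%+ capacity factor
                         heat_rate_mmbtu_per_mwh=7.0,
                         gas_price_per_mmbtu=3.5, vom_per_mwh=2.5)
print(f"~${cost / 1e6:.1f}M/month")
```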
But the risk stack shifts. You inherit fuel risk, permitting risk, community risk, and equipment/EPC constraints. The advantage is that you have more control over the schedule.
Another under-discussed filter is physical risk. Insurance, water availability, and extreme weather can make some geographies less attractive even with incentives. The “new map” of data center development will reflect not only power and fuel, but also insurability and resilience.
The takeaway: BTM is no longer a niche contingency plan; it’s becoming a core growth pathway when speed-to-power is the dominant objective.
Conversation 7 — TLN / IPP development: why hyperscalers often prefer partners (Person I: NRG Energy)
I spoke with Person I (Senior Director, Project Development at NRG Energy; a 40+ year veteran in energy and IPP development) about supply pathways and why hyperscalers often partner with IPPs rather than becoming one.
Person I emphasized scarcity—equipment, people, and interconnection capacity—and argued that the winning approach is ruthless prioritization: focus on projects that can be built, not projects that look great on a map. Their worldview is shaped by the lived reality of development: paper megawatts are cheap; real megawatts are expensive.
Reliability was the anchor. Person I sees gas-fired generation as the near-term backbone because it is dispatchable and can be developed within the practical constraints of today’s grid. While they claimed to be “technology agnostic” in principle, they also acknowledged that the constraint set pushes you toward what can deliver firm power on a timeline.
We discussed the hyperscaler question: why don’t they just build their own plants? Person I’s answer was complexity and governance. Running an IPP portfolio isn’t just “build a plant.” It’s fuel management, market hedging, outage risk, compliance, market exposure, and the daily work of keeping assets investable. Hyperscalers can learn this, but learning it at scale while also running an AI/cloud business is often a distraction.
That’s where partnering comes in. Person I described build-to-suit plants with contracted offtake, tolling arrangements where the hyperscaler secures capacity while an experienced operator runs the asset, and hybrid models that allow staged expansion. Bankability is the keyword: lenders want predictable cash flows, and developers need clear offtake to commit to equipment and EPC slots.
Interconnection strategy is a real advantage for experienced IPPs. Queue reform, network upgrade negotiations, deliverability studies—these are not glamorous, but they are decisive. Person I implied that hyperscalers new to generation underestimate how much friction and expertise lives in this layer.
We also talked about risk allocation. Hyperscalers want certainty; IPPs want financeability. Contracts are the mechanism that aligns those needs. In today’s world, contracts also serve to justify early procurement and lock schedules.
Further notes (why IPPs remain central): Person I’s framework is consistent with the broader theme across all calls: capital is not the primary constraint. Experienced execution is. IPPs exist because they are built to manage the messy stack of risks—fuel, markets, outages, compliance, and long-lived asset care.
Another underappreciated point is that hyperscalers don’t need to become IPPs to get what they want. They need access to firm power on credible timelines. Partnerships can deliver that while outsourcing operational complexity.
If you zoom out, the market outcome likely looks more like “bespoke” capacity deals: dedicated plants, tolling, structured offtake, and partnership models that are more like industrial supply chain contracting than traditional retail utility service.
The takeaway: the market will build new capacity, but the path runs through experienced IPP developers and through contracts that make projects bankable.
Conversation 8 — NEE / VPPs: the meter-data trap (Person B: formerly Sunnova; now SunStrong; also formerly Darcy Partners and Lux Research)
I spoke with Person B (formerly Senior Business Development Manager, Grid Services at Sunnova; now Director of Strategic Market Development at SunStrong; previously at Darcy Partners and Lux Research) about virtual power plants (VPPs) and the operational grind underneath the hype.
Person B is bullish on VPPs in the abstract: device penetration is growing, flexibility is valuable, and the economic logic that distributed assets can defer wires upgrades is real. But their tone was grounded: VPPs don’t scale because someone says “FERC 2222.” They scale when the data and settlement plumbing works.
The call returned repeatedly to one point: VPP scaling is not primarily a dispatch problem; it’s a data and settlement problem. Many programs require baselines and settlement at the meter level, which implies reliable interval data access, clean device telemetry, and reconciliation of gaps, latency, and utility idiosyncrasies. Person B described M&V as surprisingly manual—lots of edge cases, lots of back-office work, lots of disputes. The unglamorous truth: settlement discipline is the product.
We talked about two archetypes: third-party aggregator-led models and utility-led models. Person B argued that the governing structure depends on state rules: who can access customer data, who can bid into markets, and what program designs allow.
Data access is the bottleneck. Person B was skeptical that residential meter data becomes easy without either utility-by-utility agreements or regulatory mandates for standardization. Without that, you get messy workarounds that don’t scale cleanly when large money and compliance are on the line.
Program design can cap value even when devices are capable. Person B pointed to constraints like not being able to monetize below a baseline of zero (“you can’t go negative”), which can compress battery economics. The difference between a great VPP and a mediocre one can be a single tariff clause.
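To show how that clause bites, here is a toy settlement calculation; the baseline and meter values are invented for illustration:

```python
# "Can't go negative": if settlement only credits reductions down to a
# metered load of zero, exports past the baseline earn nothing extra.

baseline_kw = [5, 5, 5, 5]       # program baseline per interval
metered_kw = [3, 0, -4, -6]      # battery discharging, then exporting

credited = [max(b - max(m, 0), 0) for b, m in zip(baseline_kw, metered_kw)]
physical = [b - m for b, m in zip(baseline_kw, metered_kw)]

print(f"Credited: {sum(credited)} kW-intervals vs physical: {sum(physical)}")
# 17 vs 27 here: roughly a third of the battery's delivery is unmonetized,
# which is how one tariff clause compresses battery economics.
```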
Competition is nuanced. In ERCOT, REPs can be formidable because they already have meter data and commercial incentives to build in-house VPP offerings. OEMs can also be quasi-competitors when they try to bundle device sales with grid services. But OEMs often don’t want to run compliance and M&V disputes, which creates room for specialist aggregators.
What actually matters for scaling? Auto-enrollment and OEM compatibility. If programs force customers to jump through hoops, you cap participation. If the ecosystem is narrow, you cap fleet size.
Further notes (why this matters for the macro power story): VPPs will be asked to do more—peak management, demand response, local distribution support—precisely because large-load growth is stressing the system. But the growth curve will be shaped by whether regulators and utilities solve the data access problem.
A key underappreciated factor is working capital. Many VPP stacks involve paying incentives up front and getting paid after settlement cycles. If settlement is slow or disputed, the aggregator’s balance sheet becomes a gating factor.
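A rough sketch of that float, with all inputs hypothetical:

```python
# Aggregator working capital: incentives go out up front; settlement
# revenue arrives months later. Inputs are illustrative.

enrollments_per_month = 2_000
upfront_incentive = 500          # $ per enrolled device
settlement_lag_months = 6        # dispatch -> M&V -> dispute -> payment

peak_float = enrollments_per_month * upfront_incentive * settlement_lag_months
print(f"Float before first settlements land: ${peak_float / 1e6:.1f}M")
# $6.0M of balance sheet just to keep growing; every month of M&V delay
# adds another $1.0M to the float.
```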
If interval meter data becomes standardized and near-real-time, VPP participation becomes far more bankable. If it stays fragmented, VPPs remain a patchwork of pilots and bespoke program hacks. In that world, VPPs don’t disappear—they just scale slower than the narrative.
Conversation 9 — OKLO / nuclear as a hedge: policy realism vs hype (Person G: formerly Meta Global Affairs & Policy; now Kekst CNC)
I spoke with Person G (formerly Director of Global Affairs & Policy, AI & Emerging Tech Experiences at Meta; now a Partner at Kekst CNC) about why hyperscalers are flirting with advanced nuclear and what the partnerships actually mean.
Person G’s framing was pragmatic: site selection is ruled by power availability, time-to-power, and scalability. Grid-anchored sites are predictable but increasingly constrained by interconnection queues and rising demand, which pushes hyperscalers to explore near-site and on-site solutions—fuel cells, gas turbines, and, increasingly, nuclear.
They described nuclear partnerships as part of a portfolio of bets, each with a different maturity timeline. Fuel cells and gas can play near-term roles; nuclear is a longer-term hedge. In that sense, the partnership is less a delivery plan for 2027 and more a strategic option for the 2030s.
Person G emphasized that regulatory and execution risks are the biggest challenges, especially for first-of-kind reactors. Improved government relationships and shifting sentiment help, but they don’t eliminate licensing timelines, supply chain constraints, construction complexity, or the need for credible financing structures.
We talked about why these deals show up now. One reason is signaling: partnerships can demonstrate seriousness about clean reliability and long-term planning. Another is option value: if grid scarcity persists and policy continues to value low-carbon firm power, early partnership positions can matter.
The communications and stakeholder environment is crucial. Nuclear projects are uniquely sensitive to public perception, permitting complexity, and geopolitical narratives. Person G’s policy/comms background made them emphasize that execution is not only engineering; it’s relationship management with governments, regulators, and local communities.
Further notes (how to conduct nuclear diligence): Person G’s “nuclear as a hedge” framing is the most realistic way to interpret the current wave of announcements. A partnership can do three things at once: (i) secure early access to a scarce future resource, (ii) signal seriousness to stakeholders, and (iii) create a real option if grid scarcity persists.
But the diligence question isn’t just “is the technology cool?” It’s the credible pathway through licensing, financing, supply chain, construction, and contracting. Who carries FOAK risk? What is the siting pathway? What is the offtake structure? How does the project survive schedule slip?
We also discussed the time-horizon mismatch. Hyperscalers want capacity fast; nuclear often lives on a decade-scale pathway. That’s why the “ladder” approach makes sense: near-term modular (fuel cells/gas), mid-term grid expansions/PPAs, long-term nuclear options.
The takeaway: nuclear partnerships are meaningful as strategic positioning, but they shouldn’t be confused with near-term capacity delivery.
Conversation 10 — CleanChoice / solar developer distress: capital structure is destiny (Person H: GE Ventures; BlueMountain/Infranext; board at FLS Energy; now XIP / OTG)
I spoke with Person H (formerly a board director at FLS Energy; previously at GE Ventures growth equity; later at BlueMountain’s infrastructure effort and Infranext; now in principal investments at Expedition Infrastructure Partners (XIP) and also CFO at OTG Acquisition Corp.) about the solar developer world—and why the next chapter will likely be defined by restructurings.
Person H described a war story: taking an insolvent solar hot water developer and turning it into a U.S. PV developer by restructuring the balance sheet, converting debt to equity, and exploiting tax credit structures—especially the ability to bifurcate state and federal credits and package smaller projects around guaranteed tariffs. The point was not nostalgia; it was to illustrate how much of renewables success historically has been about capital structure and structuring ingenuity.
Their present-day view was pessimistic. Person H argued that today’s utility-scale solar market is volatile in both EPC costs and PPA pricing, driven by tariffs, supply constraints, and labor. That volatility makes it hard to price projects accurately—and when you can’t price accurately, fragile capital structures snap.
Their strongest claim was simple: developers backed by a single strong equity sponsor are better positioned, while developers with fragmented capital stacks are exposed to distress and potential liquidation. This isn’t moral judgment; it’s balance-sheet math.
We discussed how distress cascades. If EPC costs rise after contracts are signed, or if interconnection delays push COD out, the economics can break in multiple places at once: tax equity assumptions shift, debt sizing tightens, and offtakers renegotiate if market prices move. Meanwhile, liquidity gets consumed. If the platform has multiple capital layers—preferred equity, common, development loans, project debt—the waterfall becomes contentious and slow.
A strong sponsor provides options: bridge financing, portfolio support, the ability to wait for better pricing, and the ability to restructure without liquidation. Without that support, developers can become forced sellers, and buyers price that pressure.
Person H also emphasized that variance, uncertainty itself, is deadly. Investors hate uncertainty. If a platform can’t reliably forecast build costs or COD windows, its cost of capital rises, compressing value and potentially triggering the very distress the platform is trying to avoid. It’s a feedback loop.
Further notes (why this matters beyond solar): The solar platform story is a microcosm of the broader time-to-power bottleneck. Even “simple” projects can be constrained by transformers, switchgear, labor, and permitting. When those constraints persist, the market rewards balance-sheet strength and punishes fragile stacks.
If you zoom out, you should expect more consolidation, more distressed sales, and a premium for developers who have both procurement discipline and resilient capital structures. In a volatile environment, “developer quality” is often shorthand for “ability to survive schedule slip without forced selling.”
The takeaway: in the next phase of renewables development, capital structure is not a footnote—it’s the product.

