Daily observations on the AI transformation, written by John Allsopp and Mark Pesce.
These signals represent our analytical observations about the technology and market landscape. They are commentary, not financial product advice. The mention of any company or financial product is not a recommendation to buy, sell, or hold. See our Terms of Service for full details.
30 APR 2026
GitHub Copilot moves to usage-based billing
On 28 April GitHub announced that Copilot is migrating from flat per-seat pricing to usage-based billing. Mark: "SIGNAL — the pricing models are changing everywhere."
Copilot was the canonical example of an "AI feature on a dominant SaaS surface" priced per seat. Its move to consumption pricing is the cleanest single confirmation that flat-seat economics no longer cover frontier-quality inference costs at a sustainable margin. The unit of value has shifted from the seat — the human-customer-of-record contract — to the call, which is the agent-customer-of-record contract.
The second-order implication is the more important one. Once the most defensible AI-feature-on-SaaS in the market moves to usage pricing, the rest of the SaaS-with-AI category has no defensible position against the same migration. Cost-management primitives — caching, retry-budgeting, attempt-deduplication, fallback — become first-order harness features when pricing is per-call rather than per-seat. The harness layer's competitive surface just expanded.
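Concretely, those primitives compose into a thin wrapper around the model call. A minimal sketch of what a cost-aware harness layer looks like — all class, parameter, and model names here are our own illustrations, not any vendor's API:

```python
import hashlib

class CallHarness:
    """Sketch of per-call cost controls: a result cache that deduplicates
    identical prompts, a retry budget per model, and a fallback chain.
    `models` maps a label to any callable that takes a prompt and returns
    text, raising on failure. Hypothetical names throughout."""

    def __init__(self, models, retry_budget=2):
        self.models = models            # ordered: primary first, fallbacks after
        self.retry_budget = retry_budget
        self.cache = {}                 # prompt-hash -> completed result

    def call(self, prompt):
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key in self.cache:           # a cache hit costs zero paid calls
            return self.cache[key]
        for name, fn in self.models.items():
            for _ in range(self.retry_budget):
                try:
                    result = fn(prompt)
                    self.cache[key] = result
                    return result
                except Exception:
                    continue            # retry, then fall through to next model
        raise RuntimeError("all models exhausted within the retry budget")
```

Under per-seat pricing none of this touched the bill; under per-call pricing the cache hit rate and the retry budget are line items.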
B200 spot prices now move on the model-release calendar
Tom Tunguz published an analysis on 28 April arguing that B200 GPU spot-market hourly rates visibly reprice around major model releases. Mark's reaction was succinct: "SIGNAL!!!!" The compute infrastructure layer has acquired a publicly observable spot price curve coupled to frontier-lab output, not to quarterly hyperscaler procurement.
The structural implication is that compute is now a commodity with a real-time price curve, not a long-dated capex item booked on hyperscaler balance sheets. Once the spot curve is liquid, the ceiling on inference price is set by the cheapest capable open-weight release — and resets each time a DeepSeek-tier publication lands.
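The reset mechanism reduces to a one-liner: the ceiling is the minimum serving cost across releases that clear a capability bar. A toy sketch — the capability scores and dollar figures are invented for illustration, not real market data:

```python
def price_ceiling(releases, min_capability):
    """Ceiling on hosted inference pricing: the cheapest open-weight
    release that clears the capability bar. Returns None if none do."""
    capable = [cost for _, score, cost in releases if score >= min_capability]
    return min(capable) if capable else None

# Invented numbers: (release, capability score, $ per M tokens to serve)
releases = [
    ("release-a", 0.71, 0.90),
    ("release-b", 0.78, 0.55),
    ("release-c", 0.84, 1.20),
]
# A new release that is both capable and cheap resets the ceiling downward.
```

The point of the sketch is that the ceiling is non-increasing as releases accumulate: each capable-and-cheaper publication ratchets it down, which is exactly the repricing event the spot curve now registers.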
This is the financial-tier marker that distinguishes a winning infrastructure layer from a losing one. The companies positioned to be the marginal liquidity provider on this curve win the next phase. Microsoft's strategic value to OpenAI was capacity certainty; in a world where compute is repriced on the model-release calendar, certainty is replaced by liquidity.
OpenAI and Microsoft formalise the open relationship
OpenAI and Microsoft published parallel posts on 27 April formalising a restructured partnership. The Register's framing was the cleaner one — "open relationship". The substantive change: Microsoft remains OpenAI's primary cloud partner and OpenAI products will continue to ship first on Azure, but OpenAI may now turn to other providers if Microsoft lacks capacity or declines to support new features. The escape valve is in writing.
The single most valuable durable contractual asset Microsoft held in the AI stack — exclusive cloud rights to OpenAI's frontier inference — has been negotiated down to first-call rights. Read against Google's US$40bn cash-and-TPU investment in Anthropic last week, the message is that the era of single-cloud frontier-lab pairings is closing. Frontier inference is too capacity-bound and too geopolitically exposed to be locked to one hyperscaler.
For Microsoft holders, the strategic premium baked into the OpenAI relationship has been repriced down. For OpenAI counterparties, the model is now multi-source by contract. For the rest of the hyperscaler tier, the prize on offer this quarter is being the marginal liquidity provider on a compute curve that is increasingly priced like a commodity.