OpenAI's Vision for Everyone Else's Economy

An $852 billion company has published a list of things other people should do.
The paper, Industrial Policy for the Intelligence Age, is generous in scope and uncharacteristically modest about its own role. Governments should raise taxes on capital. Employers should subsidize four-day work weeks at full pay, top up retirement contributions, and cover more healthcare. Citizens should accept a Public Wealth Fund holding stakes in AI companies, including, presumably, OpenAI itself. The state should expand electricity infrastructure, hand out tax credits, and consider equity stakes in AI buildouts. The proposal that AI itself be "treated like a utility, affordable and widely available" is delivered with the conviction of a company that, on a separate page, has asked the state for subsidies, tax credits, and equity stakes. Utilities, as a rule, are also regulated. This is presumably an oversight.

Set this against the OpenAI we know from the rest of the news cycle. The company converted from a non-profit to a for-profit last year on the grounds that the technology was too important to be constrained by its previous structure. Its president, Greg Brockman, has donated millions to President Trump and helped fund a super PAC, Leading the Future, which has spent more than $2 million attacking Alex Bores, a New York congressional candidate. Bores's contribution to the policy debate is a proposal to fund direct payments to Americans by taxing AI — exactly the sort of redistributive measure OpenAI's white paper claims to support.

The president of OpenAI is paying to defeat him.

Hold the two together, and a clean pattern emerges. The state pays. Employers pay. Workers reorganize. AI labs accelerate.

Last week I argued that AI is being mistaken for a monetary policy problem — that productivity claims are being treated as macro forecasts when they should be treated as sales pitches. The Industrial Policy paper is the more revealing tell. It is the same productivity claim, repackaged as a request that everyone else absorb the cost of believing it. It is also, by the standards of OpenAI's own CEO five years ago, a remarkably soft request. In a 2021 essay, Sam Altman proposed taxing land and AI-company shares directly, on the grounds that AI would shift power from labor to capital and someone would have to put the resulting wealth back in motion. The 2026 paper rows back to vaguer instruments and longer timelines.

Steel-man first. There is a real argument that a productivity revolution this large will produce collateral damage no political system will tolerate, and that prior productivity discontinuities, from the spinning jenny through the assembly line, eventually triggered institutional responses we now regard as obvious: labor law, social security, deposit insurance, antitrust. It is not absurd to ask whether AI may eventually require an analogous settlement. The question is not whether the policy debate is worth having. The question is who gets to draft the brief.

At OpenAI, the brief is drafted by people whose job is to draft it favorably. When the political operative Chris Lehane joined the company in April 2024 to lead its global affairs team, his role, according to internal sources cited by the New York Times, included deprioritizing research the company found inconvenient: on AI's environmental footprint, on the gender and urban-rural gaps in ChatGPT usage, on how ChatGPT shapes users' career decisions, on long-run economic forecasting. His reported framing was that the company would not publish on a problem until it had a solution for it. The white paper is, in this light, the same instinct in its public-facing form.

The Free-Market Tell

OpenAI has identified labor, capital, government, and the citizenry as the parties responsible for managing the disruption it is creating, leaving itself in the interesting position of being the only stakeholder in the document without an assignment.

Read alongside the rest of the paper, that absence is the first thing to notice. The document is not a description of what is happening. It is a description of what the authors would like to happen.

If AI-driven productivity gains were as large and inevitable as OpenAI insists, you would not need legislation to redistribute the time savings to workers. You would observe firms voluntarily moving to four-day weeks, because the productivity uplift would make it profitable. We are not observing this. What we are observing instead is firms running headcount reductions and pocketing the savings, which is what capitalism has always done in response to a labor-replacing technology, and what every introductory economics textbook predicts it will do.

Reducing the working week from five days to four while paying for five is not a feature of free markets. It is a redistribution. There is nothing wrong with redistribution as a policy choice. But OpenAI is asking governments and employers to make that choice now, on the strength of productivity claims that have not, so far, shown up in either the macroeconomic data or OpenAI's own financial statements. You could think of it as a futures contract on AGI, written by the buyer.

OpenAI is also openly trying to deliver on the contract. The company runs an internal benchmark called GDPVal, which measures its models against the work of investment bankers, management consultants, real estate brokers, news analysts, and forty other professions. The current win rate, per OpenAI, is over 80 percent. The white paper is concerned about the people whose jobs the benchmark is concerned with replacing.

The Four-Day-Week Coalition

The four-day-week proposal sits inside a politically unusual coalition. It is supported by minimum-wage and living-wage advocates, who like the idea of legislating a structural improvement in working conditions. It is supported by basic-income utopians, who treat any reduction in time-for-money exchange as moral progress. It is supported by AI-doomers, who believe the technology is dangerous enough to require pre-emptive economic restructuring. And it is now, conveniently, supported by the AI labs that need a story for why corporate buyers should keep paying for productivity gains.

It is not the only proposal of its kind. The robot tax, originally floated by Bill Gates in 2017, would require the robot to pay the tax the human it replaced would have paid. Whether the robot will also be permitted to itemize has not been clarified.

When a policy idea attracts a coalition that broad, the question to ask is not whose argument is correct. It is whether each member of the coalition is actually arguing for the same thing. Living-wage advocates want a higher floor. UBI advocates want decoupling. Doomers want a brake. AI labs want a sales channel that survives the labor-cost-cut response. These are not the same goal, and no single piece of legislation will satisfy all of them. The proposal is a Rorschach test, and OpenAI is comfortable with that, because in a Rorschach test the company's own interests can hide in plain sight.

The same instinct is at work in the Public Wealth Fund proposal. It would give every American an automatic stake in the AI companies, including, in a fortunate coincidence, OpenAI. On one reading, this is a redistributive concession. On another, it is a captive buyer for OpenAI's next capital raise, written into legislation. If the technology delivers as advertised, the public participates. If it does not, OpenAI has a valuation that requires continuous justification and a public-sector counterparty whose political interest is now aligned with the company's continued growth.

Oppenheimer Had Doubts

In 1945, after Trinity, J. Robert Oppenheimer told President Truman that he felt he had blood on his hands. Truman, who had the literal blood on his, found this irritating. Oppenheimer spent the rest of his career campaigning, often unhelpfully for his own reputation, against the technology he had helped create. Whatever else you think of him, he behaved as if the moral problem were real.

The current generation of AI leaders, at OpenAI and elsewhere, describe their technology in similar terms. It may displace tens of millions of workers. It may concentrate wealth at a scale the political system cannot absorb. OpenAI's warning that AI may slip beyond human control is, as warnings go, unusual in that it has been issued by the people most actively ensuring that it does. They then go back to building it faster, raising more capital, lobbying for lighter regulation, and writing policy papers explaining how the rest of society should absorb the disruption they are choosing to create.

The two postures are difficult to reconcile. If you genuinely believe your technology is severe enough to require a New-Deal-scale realignment of the tax base, the social safety net, and the structure of employment, the consistent move is to slow down until that realignment exists. The visible move is the opposite. Accelerate, build out, and write the policy paper that asks everyone else to catch up. You can pick either position and defend it. You cannot pick both and remain credible.

The document closes with a reference to the New Deal. The New Deal was, of course, written by elected officials, not by the corporations whose excesses had triggered it.

What This Means for a CEO Buying AI

For a CEO reading this, the question is not whether OpenAI's industrial policy gets enacted. Most of it will not. The question is the pattern of behavior the document reveals. The companies building the AI you are about to depend on are simultaneously making the case that the economy as currently configured cannot absorb their product, lobbying to prevent the regulation of that product, and asking governments and employers to underwrite the transition cost.

You can still buy AI from these companies. You should. The productivity case for narrow, well-deployed AI is real, even if the AGI case is largely promotional. But buy with eyes open. The policy theatre tells you something about how the seller frames its own role: as the engine of disruption, not the underwriter of it. Price your AI strategy on the assumption that the costs of disruption, to your workforce, to your customers, to your operating model, will land on you, not on the lab that built the model.

Free markets have absorbed every productivity discontinuity from the spinning jenny onward. They are messy, occasionally cruel, and slower than tech leaders find comfortable. They have never, until now, required the people building the new technology to draft the policy for the people displaced by it. That is the most interesting line in OpenAI's paper, and it is the one OpenAI did not write.


---

Sources: Rebecca Bellan, "OpenAI's vision for the AI economy: public wealth funds, robot taxes, and a four-day workweek," TechCrunch, 6 April 2026; Jasmine Sun, "The San Francisco Consensus on AI and the Permanent Underclass," New York Times, 30 April 2026; OpenAI, "Industrial Policy for the Intelligence Age" (2026); Sam Altman, "Moore's Law for Everything," March 2021; Bill Gates, interview with Quartz on the robot tax, February 2017.