OpenAI has put forward economic proposals, and here’s DC’s take on them


Happy ceasefire day, and welcome to the organizer, The Verge's subscriber newsletter about Big Tech's difficult journey through the world of politics. If you're not a subscriber yet, you can sign up here; my only request is that you do so before Donald Trump reconsiders his previous threats toward Iran and starts World War III.

I’m back after spending last week suffering from a deadly combination of a mild cold and the start of pollen season. (Public green spaces occupy twenty-one percent of the district’s area, and the capital is consistently ranked as having the best city park system in America. Unfortunately, I’m allergic to every tree and every grass.) If you have tips on anything I may have missed, or anything I should know about in the coming weeks, send them to tina.nguyen+tips@theverge.com.

Do you really believe anything OpenAI says?

On Monday, OpenAI published a 13-page policy paper addressing the impact of artificial intelligence on the American workforce. The company also proposed what it believes is the solution: impose higher capital gains taxes on companies that replace their workers with artificial intelligence, and use that money to build a larger public safety net. Its proposals included a public wealth fund, a four-day workweek funded by “efficiency dividends,” and government programs to help workers transition to “human-centered” work, all paid for by the abundance that artificial intelligence would supposedly provide.

Unfortunately, the paper was released the same day that The New Yorker published Ronan Farrow and Andrew Marantz’s thoroughly reported, 17,000-plus-word article detailing Sam Altman’s history of lying to everyone around him, including his supporters in Silicon Valley, his employees, his board of directors, and the lawmakers trying to regulate AI. The New Yorker piece reinforced a long-standing narrative about Altman, and by extension OpenAI: they may espouse idealistic values, but they quickly dispose of them for financial and political gain.

Several people I spoke to said the paper itself was quite positive for AI governance in general, as it introduced new ideas into the political discourse around the emerging technology. But unless the company’s lobbying and political influence actually deliver on those promises, OpenAI’s critics said, it may remain just a piece of paper.

“I think there are people on the team who care about things, who have thought a lot about this document and are proud of it, and have done a good job, even if it doesn’t address all the questions I hoped it would,” Malo Bourgon, the CEO of the Machine Intelligence Research Institute (MIRI), told me. “There’s still a question: are these people going to find themselves in the situation that many previous people at OpenAI have found themselves in, where they thought the company had certain values or aligned with things they cared about, and then they ended up finding out that wasn’t the case, and they became disillusioned and left?”

To evaluate OpenAI’s policy proposal, it is useful to look at the company’s history with government, which The New Yorker piece details in depth. Altman was one of the first major CEOs to publicly advocate for federal oversight of AI, going so far as to propose creating a federal agency to oversee advanced models in 2023, but he has privately worked to weaken laws containing his own safety proposals. A California legislative aide accused OpenAI of engaging in “increasingly underhanded and deceptive behavior” to gut a 2023 AI safety bill that it had publicly supported. In 2025, the company sent subpoenas to supporters of a California statewide AI bill in an effort, one such supporter told The New Yorker, to “basically scare them into silence.” And although Altman once worked extensively with the Biden administration to build AI safety standards, the moment Donald Trump became president, Altman successfully convinced him to scrap the initiatives he once championed.

Nathan Calvin, general counsel at Encode, an AI policy nonprofit where he focuses on state legislative initiatives, received one of those subpoenas. “What I saw of their involvement in politics and government affairs was very bad,” he told me. While he believed the team that wrote the OpenAI proposal, drawn primarily from the technical safety research side, was acting with good intentions, he was still reserving judgment. “Will these people still be engaged as we move from public policy principles toward the many other ways in which lobbying and government influence actually happen? Part of me is optimistic, but a big part of me is also very skeptical about whether that will happen.” (OpenAI did not return a request for comment.)

A humble, absolutely not cowardly request:

Next week I plan to run a special edition of the organizer cataloging the buzziest events happening during Nerd Prom, also known as the White House Correspondents’ Dinner circuit. If you’re a tech founder, a tech company, or someone doing something tech-related and holding an event during WHCD week, please let me know what you’re up to! From what I’ve heard so far, the tech world is about to upend the normal social dynamics of the week: there’s already word of the Grindr party in Georgetown and the Substack party, which the famous maxxer Clavicle is apparently attending. I’m very, very excited to put together the craziest “spotted” column Washington has ever seen.

(Again, all of this depends on whether we are at war with Iran by the end of April, in which case I imagine no one would be prepared to be reckless.)

And speaking of DC reporters, this applies to all of us:

Screenshot via jakewilkns/X.



