The economic transformation driven by advanced artificial intelligence remains uncertain, yet its potential scale demands proactive policy planning. As AI systems grow more capable and are integrated across industries, they are increasingly being used to complete entire tasks autonomously, reducing the need for constant human collaboration. This shift, observed since the launch of the Anthropic Economic Index, suggests a future where AI significantly influences productivity and labor dynamics—though the precise outcomes remain unclear.
Policymakers face a complex challenge in preparing for these changes. Given the wide range of possible economic trajectories, from mild labor market adjustments to rapid displacement and widening inequality, a flexible and scenario-based approach to policy development is essential. To support this effort, Anthropic has collaborated with economists and policy experts globally, including members of its Economic Advisory Council and participants in the Economic Futures Symposium, to identify a diverse set of policy options.
These proposals fall into three broad categories based on the pace and intensity of AI's economic effects. In scenarios where labor market disruptions are limited, foundational reforms could prove beneficial regardless of AI's ultimate impact. These include expanding workforce training through government-funded grants, such as a model that would pay employers $10,000 per year for each formal trainee position they create in the U.S., possibly funded by reallocating existing higher education subsidies or by taxing AI consumption.
Another key area involves adjusting tax incentives to favor human capital development. Currently, businesses can immediately expense investments in AI infrastructure, while deductions for employee training face restrictions. Reforming these rules—such as removing the $5,250 cap on tax-free educational assistance and allowing full expensing of job-related training—could encourage companies to retrain rather than replace workers.
Closing corporate tax loopholes is another proposal aimed at preserving public revenue. Experts like David Gamage suggest modernizing tax allocation to counter profit shifting, particularly as AI increases the value of intangible assets. Implementing market-based apportionment and requiring consolidated global reporting could prevent artificial relocation of profits to low-tax jurisdictions.
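As a rough illustration of the mechanics, the sketch below allocates a multinational's consolidated profit to jurisdictions in proportion to customer location; all figures and tax rates are hypothetical and are not drawn from Gamage's proposal.

```python
# Hypothetical sketch of market-based apportionment with worldwide combined
# reporting: consolidated profit is allocated by customer location, not by
# where profits are booked. All figures and rates below are invented.

consolidated_profit = 1_000_000_000  # worldwide combined profit (USD)

sales_share = {"US": 0.55, "EU": 0.30, "Low-tax hub": 0.15}   # by customer location
tax_rate = {"US": 0.21, "EU": 0.25, "Low-tax hub": 0.05}      # statutory rates

for market, share in sales_share.items():
    allocated = consolidated_profit * share
    tax = allocated * tax_rate[market]
    print(f"{market}: allocated profit ${allocated:,.0f}, tax due ${tax:,.0f}")

# Because the allocation follows customers, booking profits in the low-tax hub
# does not shrink the tax base of the jurisdictions where sales actually occur.
```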
Accelerating permitting processes for AI infrastructure—such as data centers, power generation, and transmission lines—is also critical. Delays in U.S. regulatory approvals, often spanning years, hinder the development of essential computing infrastructure. Reforms to the National Environmental Policy Act (NEPA), federal expedited transmission authorities, and utility collaboration could reduce bottlenecks and support domestic AI development, avoiding national security risks from offshore infrastructure.
In moderate disruption scenarios—marked by measurable job losses and wage declines—more robust interventions may be needed. One idea adapts the Trade Adjustment Assistance model into an Automation Adjustment Assistance (AAA) program, initially funded at around $700 million annually. This could provide displaced workers with retraining and income support, with funding mechanisms tied to high-market-cap AI firms if expansion becomes necessary.
Taxes on computational resources or AI-generated tokens have also been proposed for study. Economists Lee Lockwood and Anton Korinek suggest that if AI systems themselves become major consumers of economic resources, taxing compute usage or hardware accumulation could capture some of the resulting windfall as labor and human consumption play smaller roles. While such taxes would affect AI companies' profitability, they could generate vital revenue for fiscal programs.
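To make the arithmetic concrete, here is a minimal sketch of how a per-token levy might be computed; the volumes, prices, and rate are purely hypothetical and are not taken from Lockwood and Korinek's work.

```python
# Hypothetical "token tax" sketch: a small levy per million AI-generated tokens
# sold to end users. All numbers are invented for illustration only.

tokens_sold = 2_000_000_000_000        # tokens generated for customers in a year
price_per_million = 10.00              # price charged to customers (USD)
levy_per_million = 0.50                # hypothetical tax (USD)

millions = tokens_sold / 1_000_000
revenue = millions * price_per_million
tax_due = millions * levy_per_million

print(f"Provider revenue: ${revenue:,.0f}")
print(f"Token tax raised: ${tax_due:,.0f} ({tax_due / revenue:.0%} of revenue)")
```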
In fast-moving scenarios involving widespread job displacement and concentrated wealth, more transformative policies are proposed. Establishing national sovereign wealth funds with stakes in AI-generated returns could help distribute benefits more equitably. For instance, the UK-based Centre for British Progress has suggested an AI Bond to ensure broader public ownership of AI infrastructure and its economic gains.
Adopting or modernizing value-added taxes (VATs) is another option, especially as labor's share of value creation declines. Six of the seven G7 nations and 37 of 38 OECD countries already use VATs; the U.S. is the exception. A consumption-based tax could help fund core government activities during technological upheaval, and VAT collection gives governments fine-grained data on the economic production network.
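For readers unfamiliar with how a VAT operates, the sketch below traces a simple three-stage supply chain; the 20% rate and the prices are hypothetical.

```python
# Minimal VAT illustration: each firm charges VAT on its sales and reclaims the
# VAT paid on its inputs, so the tax ultimately falls on final consumption.
# The 20% rate and the prices below are hypothetical.

VAT_RATE = 0.20

# (firm, input cost excl. VAT, sale price excl. VAT)
stages = [
    ("chip maker", 0, 400),
    ("data center operator", 400, 700),
    ("AI service provider", 700, 1000),
]

total_remitted = 0
for firm, inputs, sales in stages:
    remitted = (sales - inputs) * VAT_RATE   # output VAT minus reclaimed input VAT
    total_remitted += remitted
    print(f"{firm}: remits {remitted:.0f} on value added of {sales - inputs}")

# Each return reports purchases and sales, which is the fine-grained view of the
# production network mentioned above.
print(f"Total VAT collected: {total_remitted:.0f} (20% of the final price of 1000)")
```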
Finally, new revenue models may be required if AI drives a large portion of output. David Gamage proposes a low-rate business wealth tax alongside income taxes to reduce tax avoidance. Analogous to asset management fees, this system would treat wealth taxes as a “management fee” for legal protections and income taxes as a “performance fee” on profits.
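A small worked example may help make the fee analogy concrete; the asset values, profit, and tax rates below are hypothetical and are not part of Gamage's proposal.

```python
# Hypothetical sketch of the "management fee / performance fee" analogy: a
# low-rate tax on business wealth plus an ordinary tax on profits. All figures
# and rates are invented for illustration.

business_net_assets = 50_000_000_000   # accumulated capital (USD)
annual_profit = 5_000_000_000          # reported income (USD)

wealth_tax_rate = 0.005   # "management fee" on accumulated capital
income_tax_rate = 0.21    # "performance fee" on profits

wealth_tax = business_net_assets * wealth_tax_rate
income_tax = annual_profit * income_tax_rate

print(f"Wealth tax ('management fee'):  ${wealth_tax:,.0f}")
print(f"Income tax ('performance fee'): ${income_tax:,.0f}")
print(f"Total due: ${wealth_tax + income_tax:,.0f}")

# Even if reported profit were manipulated down to zero, the wealth component
# would still raise revenue, which is why combining both is harder to avoid.
```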
These ideas do not reflect official positions of Anthropic but are intended to stimulate research and public discourse. The company has committed $10 million to expand its Economic Futures Program, supporting empirical studies and symposia to further explore AI’s economic implications. As uncertainty remains, early and inclusive dialogue among researchers, governments, and industry is crucial to ensuring that AI’s benefits are widely shared.
— news from Anthropic
— News Original —
Preparing for AI’s economic impact: exploring policy responses
How will the arrival of powerful AI systems change the structure of the economy? We are uncertain, and so are external experts. But as AI systems continue to improve, and are adopted at an ever-larger scale, it's crucial there is more discussion about the tools policymakers could use to respond to AI's economic impacts—whatever their nature. To help with this, we're sharing several economic policy ideas that merit further study.

Since launching the Anthropic Economic Index, we've observed an important shift in AI use. Users are becoming increasingly likely to delegate full tasks to Claude, "collaborating" with Claude less. As AI models continue to work independently for longer periods of time, and as more employers adopt AI to improve their productivity, we expect this trend to accelerate. The implications for the workforce are uncertain.

How should policymakers respond? This is not an easy question, nor is it one that any single actor can answer. There is great uncertainty about the scale of the transition ahead, and a wide range of views about how to manage it. But it is imperative to begin formulating ideas now for the economic scenarios we might find ourselves in.

Over the past year, we've worked with economists and policy experts from around the world (including members of our Economic Advisory Council and participants in our first Economic Futures Symposium) to move this discussion forward. To generate a broad range of ideas, we've engaged with both non-partisan thinkers and those from across the political spectrum.

Below, we briefly explore nine of these categories of ideas, covering workforce development, permitting reform, fiscal policy, and social services.

While we don't know what the optimal policies will prove to be, we're committed to sharing ideas in the open, and to being transparent about the economic effects of advanced AI.

Matching policies to scenarios

The rate, scale, and form of AI's economic effects will determine the policy responses that are necessary across the world. Accordingly, we've organized these initial ideas into three broad categories:

Policy ideas for nearly all scenarios, including those where negative effects on the labor market remain modest. These are policies that their advocates argue merit consideration almost regardless of how significant the disruption of AI proves to be. Given this, many of these proposals have been suggested in other contexts before. They include upskilling workers and students for emerging jobs, and reforming permitting processes to enable the construction of energy and computing infrastructure to improve productivity.

Policy ideas for scenarios with moderate acceleration, where AI leads to measurable wage declines and job losses for large portions of the workforce. Here, more substantial fiscal support for displaced workers might be needed. To offset negative externalities imposed on displaced workers by rapid automation, taxes on automation might be considered in this scenario.

Policy ideas for faster-moving scenarios, potentially involving dramatic job losses and worsening inequality. These proposals are much more ambitious, and are designed to respond to a starkly different economic picture. So far, ideas include using sovereign wealth funds to give citizens stakes in AI revenues, and finding new ways to generate government revenue.

The proposals below don't necessarily represent Anthropic's own policy positions. But we're excited by the breadth of proposals we've received, and we hope they encourage further research and debate.

Policies for nearly all scenarios

1. Invest in upskilling through workforce training grants

At our DC Symposium, Abigail Ball, Executive Director of American Compass, presented the Workforce Training Grant—a proposal she developed with colleague Oren Cass to direct public resources toward on-the-job training.

Under this model, governments would provide substantial annual subsidies (Ball and Cass suggest $10,000 per year in the US) directly to employers who create formal trainee positions with structured training programs. This training could take multiple forms: programs operated by individual employers, by employer consortia or industry associations, through partnerships between employers and organized labor, or by technical schools and community colleges working alongside employers.

American Compass proposes redirecting existing higher education subsidies to fund this program. But a range of other funding mechanisms might also deserve consideration—including the possibility of using taxes on AI consumption to support workforce development initiatives.

2. Reform tax incentives for worker retention and retraining

Tax policy can, on the margin, incentivize employers to retrain and retain employees rather than reducing headcount.

Revana Sharfuddin of the Mercatus Center argues that the US tax code creates a bias favoring physical capital investment over human capital investment. Businesses can immediately expense AI systems through bonus depreciation, yet face numerous restrictions when deducting worker training costs. She proposes reforms to the Internal Revenue Code, including eliminating the $5,250 cap on tax-free educational assistance, and extending full and immediate expensing to all job-related training.

These changes would aim to reduce the cost of retraining relative to the cost of layoffs, helping those workers whose positions might otherwise be on the margins of workforce reduction decisions.

3. Close corporate tax loopholes

Tax policy expert David Gamage has outlined reforms designed to prevent AI transformation from straining government budgets. Several of his proposals involve closing the "partnership gap" that allows large businesses to avoid entity-level taxes, and modernizing tax allocation to combat profit shifting and better capture value from digital and intangibles-based business models.

The second reform would allocate business taxes based on customer locations through market-based apportionment, while requiring worldwide combined reporting to treat multinationals and their subsidiaries as single entities. This approach is designed to limit artificial profit-shifting to tax havens—a practice that could become more prevalent as AI potentially further increases the economic importance of profits derived from intangibles.

Gamage argues that "governments that act first will solve their fiscal challenges and better position residents to thrive in an AI economy. Those that wait will face resource constraints when flexibility is most needed."

4. Accelerate permits and approvals for AI infrastructure

Anthropic has consistently advocated for reforming permitting and power procurement processes in the United States and allied nations. Accelerating these processes is needed to develop the infrastructure to train and deploy frontier AI—that is, large-scale data centers, transmission infrastructure, and power generation facilities. Reforms will also unlock investment, economic growth, and job creation in the places where AI is built. Failing to accelerate AI infrastructure development will slow productivity and job growth, and it could introduce national security risks from vital AI infrastructure moving offshore.

Three overlapping sets of U.S. regulatory processes delay building large-scale AI infrastructure for years. The first category is permits, which include a series of land use and environmental approvals at the federal, state, and local levels. Second, state regulatory reviews for transmission projects can cause buildouts of new lines to last 10 years or more. Finally, approvals to interconnect facilities to the electric grid typically take 4-6 years for generation resources.

Concrete steps to address these challenges include reforms to the National Environmental Policy Act (NEPA), which requires federal agencies to review many projects' environmental effects. Advance analyses of certain kinds of facilities, such as data centers, could help speed reviews of future projects. Other reforms could include leveraging federal authorities to expedite critical transmission buildouts and upgrades, and collaborating with utilities to identify opportunities for fast interconnections.

As Tyler Cowen, faculty director of the Mercatus Center and member of our Economic Advisory Council, notes: "I am all for permitting reform—the energy sector included."

Policy ideas for moderate scenarios

5. Establish trade adjustment assistance for AI displacement

Several economists are exploring how the Trade Adjustment Assistance (TAA) model, in which affected workers are given opportunities to obtain new skills or receive other support, might be adapted for labor market disruptions in an era of powerful AI. Ioana Marinescu of the University of Pennsylvania, a member of our Economic Advisory Council, views TAA-like "AI insurance" as a mechanism "to support those who lose jobs due to AI."

Along these lines, Suchet Mittal and Sam Manning have outlined a potential Automation Adjustment Assistance (AAA) program. They describe how funding AAA at levels similar to TAA—approximately $700 million annually—could be an initial option, with mechanisms built in to increase or decrease the size of the program in line with the pace and scale of AI-driven displacement.

Mittal and Manning note that if such a program needed to expand in the future, it could potentially be funded through taxes on AI-driven revenues from firms above a certain high level of market capitalization, creating a direct mechanism for the AI sector to support workers displaced by the technology.

6. Implement taxes on compute or token generation

University of Virginia economists Lee Lockwood and Anton Korinek (a member of our Economic Advisory Council) propose studying a range of taxes on "token generation, robots, robot services, and digital services."

These taxes offer different potential benefits, and carry different distortionary risks, depending on the stage of AI's development within the economy. A tax on AI-generated tokens sold to end users (a "token tax") might be desirable when humans remain dominant consumers in the economy, even if powerful AI reduces the relative economic role of labor.

Korinek and Lockwood argue that, if the economy reaches a stage where powerful AI systems themselves become major consumers of the economy's resources, taxing AI resource accumulation (e.g., via taxes on compute and other hardware) might be more effective than token taxes on human end users. Although these taxes on computational resources distort investment along an AI-transformed economy's trajectory, they could become the only remaining mechanism to capture some of the windfall generated by AI if the role of both labor markets and human consumption in the economy declines.

We believe taxes in this broader category deserve serious study, even though they would directly impact Anthropic's revenue and profitability. These taxes could provide crucial revenue for vital fiscal programs—including several others discussed in this post.

Policy ideas for fast-moving scenarios

7. Create national sovereign wealth funds with stakes in AI

A growing set of proposals aims to give citizens and governments greater stakes in AI's economic returns. Sovereign wealth funds could enable states to acquire positions in AI-related assets. In scenarios where the AI sector captures an outsized share of economic wealth, government investment could both shape the sector's behavior and "distribute AI-derived wealth more equitably."

Writing for the Centre for British Progress, Emma Casey, Emma Rockall, and Helena Roy have proposed a related concept for the United Kingdom: an AI Bond. The AI Bond would aim to ensure adequate investment in "the AI stack" to capture AI's benefits and then distribute its returns more evenly across Britain—even as AI research roles concentrate in a few cities, like London.

8. Adopt or modernize value-added taxes

Six of the seven G7 countries have national value-added taxes (VATs), as do 37 of 38 OECD countries. The United States is the exception.

As AI transforms the economy, labor's share of the production of value might decline significantly. A shift toward taxing consumption (as through a VAT) could become necessary to fund core government activities. VAT collection also provides governments with fine-grained information about the economic production network—which could be particularly valuable during this potential period of rapid technological and economic change.

"Value-added taxes are non-distortionary and to an extent, self-enforcing," notes John Horton of MIT's Sloan School of Management, a member of our Economic Advisory Council.

9. Implement new revenue structures to account for AI's growing share of the economy

If AI is responsible for a large share of economic output (causing labor's share to decline), governments might require new revenue streams to complement income tax. Another of David Gamage's proposals is to explore a "low-rate business wealth tax" as a complement to income taxes. His reasoning: "Income taxes face accounting manipulation; wealth taxes face asset valuation challenges. Using both makes the system harder to avoid" for highly profitable enterprises.

Gamage analogizes this system to the fee structures that certain asset managers charge clients: "the wealth tax functions as a management fee for providing legal infrastructure protecting accumulated capital, while the income tax serves as a performance fee for profits generated in state markets." This idea represents one way that governments might adapt to changes in the value of human labor, although we think there are many more ideas to be explored in this area.

Continuing the conversation

Earlier this fall, Anthropic announced a $10 million commitment to scale up the Economic Futures Program. This investment will support rigorous empirical research on AI's economic impacts and policy ideas, as well as expand our Symposia series—beginning with an event in London this November, which follows our September event in DC.

None of the ideas outlined here represent definitive recommendations. They are starting points for deeper research, policy development, and public debate. The economic effects of AI remain uncertain in both timing and magnitude, and different scenarios will require different responses.

What's clear, though, is that proactive engagement between researchers, policymakers, and the AI industry is essential. By exploring these options now—before we know the shape of AI's economic effects—we can better prepare for a range of possible futures, and ensure that workers and communities are well-placed to benefit from the full potential of AI.

Most of the policy ideas discussed in this post have emerged from proposals from, or conversations with, members of Anthropic's Economic Advisory Council, participants in our Economic Futures Symposia, and independent researchers. They do not all necessarily represent Anthropic's policy positions.