Silicon Boundaries
Why AI Could Push Society Past Points of No Return
Most people now accept that the natural world has limits. Emit too much carbon, destroy too many ecosystems, acidify the oceans beyond a certain point, and the damage becomes irreversible. The planetary boundaries framework, developed by Johan Rockström and colleagues, gave us the language and the science to understand those thresholds. Seven of nine planetary boundaries have now been crossed.
Today, we publish a new report arguing that a similar logic applies to the technologies powered by silicon and compute: AI, social media, autonomous agents, robotics, and the broader digital infrastructure that underpins them. We call these thresholds Silicon Boundaries.
The core idea is simple. Societies, like ecosystems, have conditions they need to function. They need trust between people. They need institutions that work. They need economic systems that allow broad participation. They need a population healthy enough, physically and mentally, to engage in civic and economic life. These are not luxuries. They are preconditions for everything else, including the financial returns that investors depend on.
A Silicon Boundary is the point at which the scale or the nature of AI and related technologies begins to erode one of these preconditions. It is the threshold where the costs start to outweigh the benefits, and where continued deployment without adjustment pushes society towards instability. Cross enough of these boundaries, and you risk irreversible tipping points from which recovery may not be possible.
Nine categories, one system
Our preliminary framework identifies nine categories of Silicon Boundaries spanning information integrity, social cohesion, political stability, economic participation, physical and mental health, safety and security, financial stability, rights and consent, and environmental sustainability. Within these categories we identify over thirty individual boundaries.
Consider a few of them concretely.
The information environment is degrading fast. Synthetic media and AI-generated content have industrialised disinformation. Algorithmic amplification favours engagement over accuracy. Trusted intermediaries are weakening. The boundary question is: at what point does the information environment become so unreliable that collective decision-making breaks down? If people cannot agree on basic facts, governance, market pricing, and social trust all fail together.
On mental health, the evidence is already striking. The US Surgeon General has described loneliness as an epidemic; it increases mortality risk by roughly 26 per cent, comparable to smoking 15 cigarettes a day. Research links excessive screen use to negative mental health outcomes across a range of measures, with harms increasing as daily hours rise. Many young people's daily screen time is well past any plausible safe threshold. The broader costs of mental ill-health already represent approximately 1.7 per cent of US gross consumption, a permanent recession-equivalent drag that exceeds even the bolder estimates in the climate economic impact literature.
On labour markets, AI does not replace a discrete set of technical skills in a single sector the way energy transitions strand fossil fuel jobs. It replaces intelligent labour: a generic, general-purpose set of capabilities. Worse, there is evidence that AI may prevent new skills from developing: reporting from Colombia found that introducing AI tools in schools was associated with declining student performance. The technology does not just strand current workers. It risks hollowing out the future workforce.
Unlike carbon, boundaries move
The crucial difference from planetary boundaries is that Silicon Boundaries are not fixed. With climate, you have relatively clear physical thresholds: 1.5 degrees, 2 degrees, the point at which ice sheets collapse. The physics does not negotiate.
With Silicon Boundaries, the thresholds depend on context: how the technology is deployed, what safeguards exist, how society responds, and what alternatives are available. Boundaries can be pushed outwards through stronger regulation, better digital literacy, investment in real-world community infrastructure, and robust institutions. All of these increase the amount of compute society can absorb without tipping into instability.
Boundaries move closer when institutions weaken, when regulation fails to keep pace, when technology is designed to exploit vulnerabilities, and when wealth concentrates so sharply that political systems are captured by the winners. In those conditions, the thresholds arrive sooner, at lower levels of technology deployment.
And the same volume of computing power produces entirely different boundary dynamics depending on how it is used. Compute deployed for diagnostic healthcare AI and compute deployed for addictive social media feeds are not the same thing. The nature of compute matters as much as the scale.
The boundaries interact
These boundaries do not operate in isolation. They reinforce one another, just as planetary boundaries do.
Take a concrete example. AI-driven labour displacement pushes people out of stable employment. Economic insecurity increases susceptibility to populist narratives and conspiracy theories, degrading the information environment. A degraded information environment undermines democratic deliberation, weakening the institutions that would otherwise regulate the technology. Weaker institutions mean less capacity to provide the retraining and social safety nets that would push the economic boundary outwards again. That is a reinforcing loop, the kind that produces tipping points where systems move from stability to instability and cannot easily return.
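The tipping-point behaviour of a reinforcing loop can be sketched with a toy model. The code below is illustrative only: the abstract "stability" index, the threshold at 0.5, and the shock sizes are invented for the sketch, not drawn from the report or from any calibrated data.

```python
# Toy sketch of a reinforcing loop with a tipping point. Everything here
# is invented for illustration: "stability" is an abstract index in [0, 1],
# and the 0.5 threshold and shock sizes are arbitrary choices.

def step(stability, shock, rate=0.1):
    """One time step. Above 0.5 the system self-corrects toward a healthy
    equilibrium near 1.0; below 0.5 the feedback reverses and erosion
    accelerates toward collapse at 0.0."""
    pull = stability * (stability - 0.5) * (1.0 - stability)
    return min(1.0, max(0.0, stability + rate * pull - shock))

def simulate(shock, start=0.9, steps=200):
    s = start
    for _ in range(steps):
        s = step(s, shock)
    return s

# A mild, persistent shock: the system absorbs it and stays healthy.
print(simulate(shock=0.003))

# A stronger shock: the system crosses the threshold and collapses.
collapsed = simulate(shock=0.02)
print(collapsed)

# Hysteresis: even after the shock is removed entirely, there is no recovery.
print(simulate(shock=0.0, start=collapsed))
```

The last line is the point of the sketch: once the threshold is crossed, removing the pressure does not restore the system, which is what makes a boundary a point of no return rather than an ordinary trade-off.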
The Silicon Bubble
If society acts to stay within Silicon Boundaries, by constraining certain forms of compute through regulation or consumer choice, then capital currently allocated to AI infrastructure may be significantly mispriced. We call this the Silicon Bubble, drawing on our previous work on stranded assets and the carbon bubble.
The scale of potential exposure is large. Morgan Stanley estimates $2.9 trillion in data centre capital expenditure for 2025 to 2028 alone. Technology sector market capitalisation is two to three times that of the fossil fuel companies at the centre of the original carbon bubble analysis. Financial markets currently expect tech firms to capture more than two-thirds of all stock market earnings growth over the next five years. If those expectations prove wrong, the correction will be systemic.
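To see why a repricing could be severe, a back-of-the-envelope sketch helps. The Gordon growth model below uses entirely hypothetical numbers (the 8 per cent discount rate, the growth assumptions, the earnings figure); none of them come from the report or from Morgan Stanley, and real equity valuation is far more involved. It only illustrates how sensitively a valuation depends on expected growth when growth sits close to the discount rate.

```python
# Hypothetical sketch: how a downgrade in expected earnings growth feeds
# through to valuation under a simple Gordon growth model. All numbers
# are invented for illustration.

def gordon_value(earnings, growth, discount_rate):
    """Present value of a perpetually growing earnings stream:
    V = E * (1 + g) / (r - g). Requires r > g."""
    assert discount_rate > growth, "model requires r > g"
    return earnings * (1 + growth) / (discount_rate - growth)

# A hypothetical firm earning 100 (arbitrary units), discounted at 8%.
priced_for_boom = gordon_value(100, growth=0.06, discount_rate=0.08)
priced_for_limits = gordon_value(100, growth=0.02, discount_rate=0.08)

correction = 1 - priced_for_limits / priced_for_boom
print(f"implied correction: {correction:.0%}")  # prints "implied correction: 68%"
```

Under these made-up assumptions, trimming expected growth from 6 to 2 per cent wipes out roughly two-thirds of the valuation, which is why even a partial move to stay within Silicon Boundaries could leave AI infrastructure capital significantly mispriced.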
Not anti-technology
We want to be clear: compute is not inherently harmful in the way that burning fossil fuels is. AI and related technologies offer enormous beneficial opportunities when deployed well. A Goldilocks outcome, in which compute scales in line with societal welfare, remains entirely possible. But it will not happen by default. It requires deliberate action from governments, companies, investors, and citizens.
Our report presents four scenarios for how the tension between compute expansion and societal risk might resolve. In two of them, Goldilocks and Luddite, boundaries hold. In the other two, Gilded Cage and Icarus, they do not. The difference is not the technology itself. It is the choices societies make around it.
The fact that the science is less settled than the physics behind planetary boundaries does not mean these boundaries do not exist. Nor does it mean we can afford to wait and see. The stakes are too high, and the speed of deployment too fast, for that luxury.
The report and accompanying podcast are available here.
Ben Caldecott is Director of the Oxford Sustainable Finance Group and the Associate Professor of Sustainable Finance at the University of Oxford. Jakob Thomä is CEO and Co-Founder of Theia Finance Labs. Silicon Boundaries is a joint research programme between Theia Finance Labs and the Oxford Sustainable Finance Group.
References
For the full evidence base and bibliography, see the report.
Abramson, B., Boerma, J. & Tsyvinski, A. (2024) ‘Macroeconomics of mental health’, NBER Working Paper No. 32354.
Caldecott, B., Clark, A., Koskelo, K., Mulholland, E. & Hickey, C. (2021) ‘Stranded assets: Environmental drivers, societal challenges, and supervisory responses’, Annual Review of Environment and Resources, 46, pp. 417–447.
Devi, K.A. & Singh, S.K. (2023) ‘The hazards of excessive screen time: Impacts on physical and mental health’, Journal of Education and Health Promotion.
Office of the Surgeon General (2023) Our Epidemic of Loneliness and Isolation, U.S. Department of Health & Human Services.
Planetary Health Check (2025) The Planetary Health Check.
Rockström, J. et al. (2009) ‘A safe operating space for humanity’, Nature, 461, pp. 472–475.
Rodríguez Salamanca, L. (2025) ‘Meta brought AI to rural Colombia. Now students are failing exams’, Rest of World.




