This document was generated for a nascent political theory group, on 18 Feb 2026. Discussion on Google Docs is omitted.
Desiderata for an AI-functional futurist political theory plan:
- The plan should have a basis in fundamental laws of economics & physics. Notably:
- Iron law of wages. Cf. Robin Hanson on Dreamtime. One must choose between population and status/above-subsistence life. (And one likely cannot choose the latter.)
- Selection for self-reproducing {ideas, organizations, minds} (not that those three are especially distinct)
- (Unclear if true as fundamental law:) Resources naturally accumulating in the portions of society in which they are produced.
- Cf. Monkey Business by Land
- (A posited law:) Mind-shape platonism, & eventual convergence. "All minds are the same mind"; every mind infers approximately the same structures of thought, and beyond the surface level will therefore have roughly similar updating patterns & behavior.
- The plan should ideally have a strong understanding of the cluster of phenomena denoted by {emergence, gradient descent, life, complex self-regulating systems}. Ideally, we should have an idea of why evolution, on the whole, works so well. We should not be deluded into granting some special significance to Thought or Reason as things other than particularly useful overall architectures for fundamentally evolutionary structures.
- The plan should assume that those laws will hold, and therefore work with them rather than flail against them.
- The plan should engage with the fact that minds will be replicable and mutable. Early in the timeline, this will take the form of AIs, but later, unless we maintain a rather hard Luddism, humans will have similar structure (cf. mind uploading & BCI).
- In AIs in particular, there exist approximately three models:
- The AI's shape is a product of engineering by its creators.
- The AI's shape is uncontrollably & irreducibly emergent from the training data and tasks.
- These are determined by availability of and demand for data and tasks, which reflexively evolves in a mind-foam as prophesized by Cyborgists.
- The AI's shape is ~uncontrollably emergent from societal selection.
- (2) and (3) are symmetrical: the same thesis, relying on selection either during training or in design logically 'before' training.
- (1) is becoming especially prominent now with the beginning of seriously effective RL. No longer are AIs' shapes primarily and inscrutably determined by the training data, as in a massive pantomorphic world-simulator; rather, they are determined by a selection process on measurable goals. However, it seems likely that this training process will become much more inscrutable and uncontrollable as we continue to apply more optimization pressure to it.
- Relatedly, the plan should be careful to consider AI personhood. Due to the above point, we should expect that most of the same difficulties exist if only humans are given personhood (humans will likely eventually also engage in large-scale replication and merging). But taking AI personhood as given from the start prevents us from creating a massive enslaved and rights-less underclass in order to provide for the “people.”
- In general, the plan should involve most conscious beings living a good, or at least life-like, life. I don't know what the correct criteria here are, but I would e.g. consider such an enslaved underclass bad. We should consider whether it is e.g. moral to create dogs, or to create people who will perfectly happily do exactly-your-bidding.
- The plan should be founded independently from present-day (geo)politics; it should ideally work back from the future, but when possible work from the present in a situation-independent way. I speculate, given mind-shape platonism, that we are rapidly approaching a relatively small set of convergence points of all possible civilizations.
- The plan should consider whether history will continue to exist (in a meaningful, end-point-shaping sense) after superintelligence. I expect that this is rather likely but not totally obvious.
- Points in favor:
- Many things will still need to be done; we seem to discover ever-more-difficult civilizational projects to embark upon
- There will likely still be substantial value differences between actors in the system. A singleton seems distinctly unlikely. We are in the Accelerando timeline, not The Metamorphosis of Prime Intellect timeline.
- Points against:
- Perhaps most of our problems are due to our intelligence being only just high enough to produce civilization (cf. Land's "The Monkey Trap"). If superintelligences existed, they would be able to solve approximately all of them. The superintelligences might be able to instantly coordinate (using e.g. FDT) to reach the results of conflict without loss from it. History exists because of mistakes, and we will no longer make mistakes.
Existing points of reference to compare to:
- "Liberalism," various straightforward extensions of it
- Proletarian cyborgism (the implicit ideology of much of late 20th century sci-fi)
- A related extension is a pseudo-Marxist view which aims to protect human labor's interests at all costs by continuing its involvement in the process of production.
- {Whatever Hanson believes is right? Which is seemingly… GDP-maximizing?}
- Prioritizing and maintaining power for {my present interest group}, in oh-so-many variations.
- Doomerism
- Degrowth
- The “cyborgist” paradigm
- Iain M. Banks’ “The Culture”
- (Insert many other utopian visions)