All content on this page is freely available in the public domain. It is sorted chronologically from oldest to newest. This means the first works on the page are generally of the lowest quality, and show the least development of my ideas. If you want to see the best works, you should read the highlights on my home page. For brief discussion of why this page exists, read Genesis. Oh, and also, Genesis.
Inspired by JD Pressman.
About a year ago, researchers were able to apply deep learning models to EEG data to decode imagined speech with 60% accuracy. They accomplished this with only 22 study participants, contributing 1000 datapoints each. Research in using deep learning with EEG is relatively early and not very powerful, but this paper presents an early version of what is essentially mind-reading technology. As scaling laws have taught us, when machine learning models express a capability in a buggy and unreliable manner, larger versions will express it strongly and reliably. So we can expect that the primary bottleneck for improving this technology—apart from some algorithmic considerations—is gathering data and compute at a large enough scale. That is, if we managed to get millions of datapoints on imagined speech, we would be able to easily build robust mind-reading technology.
But this sort of data/compute scaling has historically been difficult! Particularly, it's very hard to get clean, labeled data at scale, unless there's some preexisting repository of such data that you can leverage. Instead, the best successes in deep learning—like language and image models—have come from finding vast pools of unlabeled data and building architectures that can leverage them.
So, how could we apply this to EEG? I suspect that we'll want to get data that's minimally labeled, and maximally easy to harvest. Like with language models, I think we should train a machine learning model for autoregression on unlabeled EEG data. We would collect EEG data from a large sample of individuals going about their daily lives, and train a model to predict new samples from previous samples. We'd probably want to use a different architecture than language transformers—at minimum training on a latent z rather than raw EEG samples—but deep learning research is unlikely to be the bottleneck. [1]
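To make that concrete, here's a minimal sketch of what next-step prediction over EEG latents could look like. This assumes PyTorch, uses random tensors as a stand-in for encoded EEG, and picks an arbitrary small architecture; none of it is a claim about what would actually work best.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch: autoregressive next-step prediction over EEG latents.
# Shapes and the "encoder" are placeholders; a real system would learn an
# encoder (e.g. a VAE or VQ model) over raw multichannel EEG first.
class NextLatentPredictor(nn.Module):
    def __init__(self, latent_dim=64, hidden_dim=256):
        super().__init__()
        self.rnn = nn.GRU(latent_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, latent_dim)

    def forward(self, latents):          # latents: (batch, time, latent_dim)
        hidden, _ = self.rnn(latents)
        return self.head(hidden)         # prediction of the next latent at each step

model = NextLatentPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in for encoded EEG: 8 recordings, 500 timesteps, 64-dim latents.
latents = torch.randn(8, 500, 64)

optimizer.zero_grad()
pred = model(latents[:, :-1])            # predict step t+1 from steps <= t
loss = F.mse_loss(pred, latents[:, 1:])  # regress onto the true next latent
loss.backward()
optimizer.step()
```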
How much data would we need? Let's say we wanted to create a model on the scale of Llama-70b: powerful enough to be humanlike, coherent, and often smart. Llama-70b needed 2 trillion tokens. We could probably wring out a bit more performance per token by increasing model size, decreasing our necessary tokens to 1.5T. We could also run the data for, say, 15 epochs, reducing our need for unique tokens to roughly 100 billion. If it's natural to divide thought into 3 tokens per second, then in total we'll need on the order of 9 million hours of EEG data. [2] We could probably decrease that a bit more with clever data efficiency tricks [3], but we'll still most likely need millions of hours. This would involve—for example—collecting waking-hours data from 10,000 people for about two months each. This is difficult, but it could be doable if you have a billion dollars to spend. If things continue as they are, an AGI lab could do it for a small portion of their total budget in a few years.
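For what it's worth, here's the back-of-envelope arithmetic spelled out; every figure is a rough assumption carried over from above, not a measured quantity.

```python
# Back-of-envelope estimate; every number here is a rough assumption.
tokens_needed = 1.5e12        # assumed total training tokens (down from Llama-70b's 2T)
epochs = 15                   # assumed number of passes over the data
unique_tokens = tokens_needed / epochs        # ~1e11 unique "thought tokens"

tokens_per_second = 3         # assumed granularity of thought
hours = unique_tokens / tokens_per_second / 3600
print(f"hours of EEG needed: {hours/1e6:.1f} million")   # ~9.3 million

# One possible collection scheme: waking hours only (~16 h/day).
participants = 10_000
days_per_participant = hours / (participants * 16)
print(f"recording days per participant: {days_per_participant:.0f}")   # ~58
```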
The result of such a training run would be a model that can autoregressively generate EEG data. Given how much we can already extract from EEG data, it's clear that there's a massive amount of information within it. It's quite possible, in fact, that much of the information constituting our conscious experience is held within EEG data. If so, a model that can generate EEG data would be essentially a general mind-uploader. The simulation might not encompass all experience, and it might be missing detail and coherence, but it would be a working mind upload. We could communicate with such simulated minds by translating from EEG input/output to downstream text or other channels. We could use the tools we've developed for neural networks to modify and merge human and machine minds. These capabilities make this form of mind uploading a compelling route to ascension for humanity. See Beren Millidge's BCIs and the ecosystem of modular minds:
If we can get BCI tech working, which seems feasible within a few decades, then it will actually be surprisingly straightforward to merge humans with DL systems. This will allow a lot of highly futuristic seeming abilities such as:
1.) The ability to directly interface with and communicate with DL systems with your mind. Integrate new modalities, enhance your sensory cortices with artificial trained ones. Interface with external sensors to incorporate them into your phenomenology. Instant internal language translation by adding language models trained on other languages.
2.) Matrix-like ability to ‘download’ packages of skills and knowledge into your brain. DL systems will increasingly already have this ability by default. There will exist finetunes and DL models which encode the knowledge for almost any skill which can just be directly integrated into your latent space.
3.) Direct ‘telepathic’ communication with both DL systems and other humans.
4.) Exact transfer of memories and phenomenological states between humans and DL systems
5.) Connecting animals to DL systems and to us; lets us telepathically communicate with animals.
6.) Ultimately this will enable the creation of highly networked hive-mind systems/societies of combined DL and human brains.
Of course, in this sense, even language models (particularly base models) are something like mind-uploading for the collective unconscious. Insofar as text is the residue of thought, and thus predicting text involves predicting the mind of its author, even a model trained solely on text acts as a sort of even-lower-fidelity mind upload. Few appreciate the consequences of this.
[1] Insofar as deep learning research is the bottleneck, we should focus on speech autoregression as a testing ground for approaches to EEG.
[2] This is a rather conservative estimate, in my view. 3-10 tokens/frames per second seems like the rough scale of human brain activity (see Madl et al). However, what might be far more important is information. 64 channels of EEG data—and all the conscious experience that encompasses—might contain far more information for training a model than a token of text. That is, we might expect models to learn quicker on EEG data because a single EEG token contains far more useful information than a single text token. It's for this same reason that multimodal text/image models spread the information contained in an image out over many tokens, rather than stuffing it in a single one.
[3] For instance, we could simultaneously train on some sort of real-time human behavior, e.g. speech, keystrokes, or screen recordings (or maybe even web text data, though it's not real-time). All of these share massive amounts of information with EEG data: the best way to win at any sort of action prediction involves understanding the actor's mind. So some lessons from each modality might transfer to the other, reducing the amount of EEG data required.
Given the effectiveness of LLMs in generalizing past the training data, one might expect that all the ideas we might care to look for are in the latent space implied by existing ones.
If we analogized this to a high-dimensional space, this could mean that all useful ideas are within a convex hull produced by existing ones. This is obviously not quite true, since the ideas which make up the corners of that convex hull needed to be generated by somebody, and so there's probably more out there. There's another version which also could be true: that when you process enough existing ideas, you can generalize to a new set of ideas which are implied by your priors. I also find this idea a bit suspect, since it seems to also ignore the existence of the ideas which shaped your priors in the first place.
But we might ask the broader question: what does it look like to generalize from your set of current ideas to new ones?
I'd almost want to contest the structure of the question: you don't generalize directly from ideas to ideas, you have to accumulate evidence at some point.
The simplest possible way to get this evidence is to condition on other ideas: you generalize a little bit, expanding your range of possible thought, update on whether that was helpful, and if so, expand further. You can repeat this process as many times as you want to claim new, valid territory in ideaspace. I think something involving this sort of iterative expansion is necessary; you can't just derive new ideas from the ether. See Why Greatness Cannot Be Planned for more justification.
Alternatively, we could see ideas through the lens of search: by counting bits of uncertainty. Outside your knowledge, there is a vast sea of possible hypotheses which you have priors towards with various degrees of credence. When you either generalize from existing knowledge or gain new knowledge, those hypotheses lose uncertainty and some collapse into new knowledge. This doesn't happen all at once, your mind has to step through some sort of belief model to notice which implications have changed.
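As a toy illustration of that "counting bits" framing (entirely my own construction, with made-up hypotheses and probabilities): uncertainty over a set of hypotheses can be measured as entropy, and an update on new evidence collapses some of it.

```python
import math

# Toy illustration: uncertainty over hypotheses, measured in bits.
prior = {"H1": 0.5, "H2": 0.3, "H3": 0.2}           # hypothetical hypotheses
likelihood = {"H1": 0.9, "H2": 0.2, "H3": 0.05}     # assumed P(evidence | hypothesis)

def entropy_bits(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Bayesian update on one piece of evidence.
unnorm = {h: prior[h] * likelihood[h] for h in prior}
z = sum(unnorm.values())
posterior = {h: p / z for h, p in unnorm.items()}

print(f"prior uncertainty:     {entropy_bits(prior):.2f} bits")      # ~1.49
print(f"posterior uncertainty: {entropy_bits(posterior):.2f} bits")  # ~0.65
```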
Related to the questions of what's happening in the space of possible ideas and the space of possible minds, Tsvi wrote a great set of research notes on whether there might exist "Cognitive Realms": "unbounded modes of thinking that are systemically, radically distinct from each other in relevant ways." These modes might involve differences in the shape of how an agent makes progress in ideas, deeply engrained enough that it's not really possible for them to "steal strategies" from another, differently structured agent.
I think there's work to be done in finding an art of thinking which mechanistically understands how progress in thought is made, and then effectively uses that to make quicker progress. Rationality aimed for this but was almost dead on arrival, due to what seem to be mostly social considerations. I suspect a further refinement of the art would focus more on having good feedback loops and broader structure of thought than on maintaining exact precision in how evidence is used. Rationality's focus on precision was important, to be sure: the thing that focus illuminated was something like epistemic necessity. But a good next step might be to figure out how to properly expand your domain of knowledge. And that requires a hell of a lot more than just Bayes' Theorem.
Today I talked to a friend who regularly uses Anki about the potential for using spaced repetition to learn good mental motions / thought patterns. This is especially appealing to me because I constantly feel like I have too little space in my head, that things are always falling out of my awareness. This happens on multiple levels: hour by hour, day by day, and week by week. (Maybe this is something like a mindset problem, though. I'd guess a Buddhist would say I'm messing something important up in my attitude towards thoughts and impermanence.) In particular, I read about many things that I think are correct, and many ways of thinking that I think are correct, but I don't tend to apply them very quickly. It takes much repeated exposure to something for it to become an actual habit of thought. And habits of thought are what I want: I want to be able to effectively apply ideas I've learned when I encounter the right contexts in real life.
If you were to do this with spaced repetition, it would have to involve doing a short drill of the mental motion in your mind on demand. Then again, this seems like not quite the right approach. To some degree it's that this problem is difficult (mostly in generating useful unique examples), but I'm not entirely sure. There's something missing: 'brute forcing' the problem like this feels like it's clearly not the way you actually succeed.
Maybe it's that 'habits' aren't really the right frame: you want to be operating from a good set of general principles. A set of principles that are habitual, but are so omnipresent that they are reinforced by everything you do. And then it's those principles that generate habits.
Or, maybe (and this is similar) the best way to practice habits like this is through something closer to end-to-end tests. Some end-to-end tests that people use in practice:
Still, it might remain helpful to drill certain patterns of thought, or at least set reminders for yourself. I'm still not really sure how to approach this. Currently, I have:
But, maybe let's think back to this problem of things 'falling off' the stack of my long-term working memory. I'm now also tracking a monthly 'list of things I want to do' separate from a todo list (the difference being that the 'things I want to do' are more intrinsically rewarding, self-directed, long-term helpful or short-term interesting). This includes reading, which is another thing that I feel drops off the stack very often and I'm not happy about. A core innovation here is that it's monthly - i.e. things can't accumulate on the list alone, I have to consciously decide to move them to the next month. It may be better to set this up to be weekly, if it grows too fast. I can also make a promise to myself that I will look at what I wanted to do in the past, which reduces my fear of dropping things. The goal might be to make a hierarchy, such that I know that if I left something behind at any level, I really did think it was less important than everything else.
Writing this daily blog is also helpful, since I know that if I wrote something about an idea, I have at least marked that point as reached, and in some sense 'done.' It's now navigable space; I've made a map of it.
The next thing I might like to add is structured reflection, perhaps in something like a weekly context. This blog might be good enough for thought, but it doesn't directly connect to my actions, and I'd like to be able to keep my agency on track as well. I tried to set this up once, but never even did a single weekly review. I think I could do better this time, but I'd also want to think carefully about what I'm trying to get out of it.
EVE Online is one of the only MMOs in existence which has high-quality economic systems, such that free trade is left unimpeded by the game's logic (and the system is well-designed enough that this doesn't break everything). The player-driven economy that emerges from it is fascinating. Read about it if you'd like. I'd like to build a game that focuses almost entirely on this sort of economic gameplay: a game where the primary goal is maximizing production, and people are forced to cooperate as part of a large economic system to achieve it.
The game would be broadly Factorio-like, but on a very large single-shard map. Players could purchase and trade freely definable plots of land, and build up farms, factories, and housing on those plots. These would be built in mostly automation-game-like ways, albeit with significantly more flexibility and freedom for optimization (everything having tradeoffs, of course). In order to keep this interesting, encourage cooperation, and disincentivize copy-pasting, the game's systems would be massive and sprawling. Rather than having a little over a hundred carefully crafted, unique recipes, like in a single-player experience, there would be tens of thousands of unique items, many of which are mostly mechanically similar. The primary aesthetic of the game's design would be maximalism.
Simulate a weather and climate system, to make location choice interesting and introduce variations in regional production. Keep transportation costs high to ensure regional variation in prices. Allow players to combine a vast array of parts to build different vehicles, to introduce new optimization in cost vs. complexity vs. performance vs. efficiency vs. etc. tradeoffs. Each of these systems would not need to be maximally polished: the polish and quality of experience comes from their sheer scale. Each system adds richness to the others by contributing to the massive optimization problem inherent to any economy. Not only would each system add directly to a single player's experience, by pushing them to more thoroughly consider how to design their production, but those considerations would show up in prices, which then shape the optimization of other players. For a very simple example, if iron is plentiful upstream from an important town, that will shape the factories players should build in that town!
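As a toy sketch of that last point (all numbers invented): a local ore deposit plus transport costs are enough to make smelting profitable in one town and not another, and that difference is exactly what would propagate into prices and into other players' decisions.

```python
# Toy illustration of the iron example; every number here is invented.
distance_from_deposit = {"Riverside": 1, "Farport": 12}   # hypothetical towns
plate_price = {"Riverside": 6.0, "Farport": 9.0}          # local market price of iron plates

ore_price_at_deposit = 2.0    # credits per unit of ore
transport_cost = 0.5          # credits per unit of ore per unit of distance
ore_per_plate = 2

for town, distance in distance_from_deposit.items():
    delivered_ore = ore_price_at_deposit + transport_cost * distance
    margin = plate_price[town] - ore_per_plate * delivered_ore
    print(f"{town}: delivered ore {delivered_ore:.1f}, smelting margin {margin:+.1f} per plate")

# Riverside: delivered ore 2.5, margin +1.0 -> smelt near the deposit.
# Farport:   delivered ore 8.0, margin -7.0 -> better to import finished plates.
```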
Since comprehending (and optimizing for location, weather, etc.) the entire supply chain is a nearly impossible task, players would be pushed to rely on each other for crucial resources, intermediate products, and services (e.g. shipping, market making). The game could also place further constraints on how much a single player could act across the entire supply chain, such as requiring 'skills' to access certain technologies or limiting the total complexity of a player's automated vehicle programs.
The effect would be to produce an exciting and deeply interesting experience of participating in a massive dynamic system, which also functions as the most accurate simulation of an economic system ever produced.
I don't have a good excerpt at hand, but I've been reading Nikolai Fedorovich Fedorov's What was Man Created For? The Philosophy of the Common Task. Fedorov is one of the earliest transhumanist / futurist / cosmist authors, and I find his ideas compelling as early examples of patterns of thought similar to mine. One of his most out-of-consensus ideas is that we can and should resurrect the dead through scientific means.
One might immediately think of cryonics, which is looking increasingly promising, but Fedorov was considering resurrecting people who had the misfortune of being born before any sort of direct preservation technology existed. I see at least one route to achieving this: I wouldn't be surprised if a person could be resurrected from a large (or maybe even a small) text corpus. There is a massive amount of information held in writing, and it's possible that it's enough to reconstruct a mind.
If you're aiming to achieve this for yourself, ideally you would preserve as much other information as possible as well. For example, obtain EEG data, keylogs, videos, conversations, stream-of-consciousness writing, and screen recordings / histories.
Aim to be prolific. Ideally, store information for the long term, packaged such that there's incentive to preserve and maintain it. The simplest ways to do this are cryptographic or practical: sign and duplicate your data. But information has been preserved far longer through purely social means. How well could we simulate Jesus' apostles today? Or the authors of the Buddhist sutras?
A sort of mind upload is already happening for everyone who exists in the text corpus processed by large language models. However, I'll warn that this sort of resurrection-by-Mu may not necessarily be a good thing for our future. But that topic deserves far more words than I can give it today, and perhaps should not be discussed where its discussion will be remembered.
Stabbing pain for the feeling
Now your wound's never healing
Til' you're numb, oh it's begun
Before we all become one
If you're not remembered, then you never existed.
About a year ago I realized that I had deeply misunderstood how I was best motivated to do things. From Nassim Nicholas Taleb in Antifragile:
My idea was to be rigorous in the open market. This made me focus on what an intelligent antistudent needed to be: an autodidact—or a person of knowledge compared to the students called “swallowers” in Lebanese dialect, those who “swallow school material” and whose knowledge is only derived from the curriculum. The edge, I realized, isn’t in the package of what was on the official program of the baccalaureate, which everyone knew with small variations multiplying into large discrepancies in grades, but exactly what lay outside it…
Again, I wasn’t exactly an autodidact, since I did get degrees; I was rather a barbell autodidact as I studied the exact minimum necessary to pass any exam, overshooting accidentally once in a while, and only getting in trouble a few times by undershooting. But I read voraciously, wholesale, initially in the humanities, later in mathematics and science, and now in history—outside a curriculum, away from the gym machine so to speak. I figured out that whatever I selected myself I could read with more depth and more breadth—there was a match to my curiosity. And I could take advantage of what people later pathologized as Attention Deficit Hyperactive Disorder (ADHD) by using natural stimulation as a main driver to scholarship. The enterprise needed to be totally effortless in order to be worthwhile. The minute I was bored with a book or a subject I moved to another one, instead of giving up on reading altogether—when you are limited to the school material and you get bored, you have a tendency to give up and do nothing or play hooky out of discouragement. The trick is to be bored with a specific book, rather than with the act of reading. So the number of pages absorbed could grow faster than otherwise. And you find gold, so to speak, effortlessly, just as in rational but undirected trial-and-error-based research. It is exactly like options, trial and error, not getting stuck, bifurcating when necessary but keeping a sense of broad freedom and opportunism. Trial and error is freedom.
(I confess I still use that method at the time of this writing. Avoidance of boredom is the only worthy mode of action. Life otherwise is not worth living.)
Try going to a library, looking through the shelves, and finding a book that captures your attention. Then start reading. Read only as long as you're interested. Then direct yourself according to your interest, within the possibilities afforded by the library. Don't look at electronics. I was surprised by what I found.
Motivation is caused by interest, usually from anticipating some combination of novelty (explore) and results (exploit). If you're not motivated, you might have no novel or important possible actions, likely because you're artificially constraining your action space. A way to solve this is to simply try sampling from the space of actually-possible actions, and picking the one that most interests you. Then iteratively chain or resample as soon as you get bored. Your intuitive sense of 'interestingness' is a surprisingly good way to steer. If you learn to trust yourself, you can find practically infinite motivation for exploration - although, at that point, 'motivation' isn't really the right word anymore.
Another mechanism to achieve a similar effect is to let yourself become bored by not allowing yourself to take any stereotyped or consequentialist actions. See Tsvi's Please Don’t Throw Your Mind Away:
- Your mind wants to play. Stopping your mind from playing is throwing your mind away.
- Please do not throw your mind away.
- Please do not tell other people to throw their mind away.
- Please do not subtly hint with oblique comments and body-language that people shouldn't have poured energy into things that you don't see the benefit of. This is in conflict with coordinating around reducing existential risk. How to deal with this conflict?
- If you don't know how to let your mind play, try going for a long walk. Don't look at electronics. Don't have managerial duties. Stare at objects without any particular goal. It may or may not happen that some thing jumps out as anomalous (not the same as other things), unexpected, interesting, beautiful, relatable. If so, keep staring, as long as you like. If you found yourself staring for a while, you may have felt a shift. For example, I sometimes notice a shift from a background that was relatively hazy/deadened/dull, to a sense that my surroundings are sharp/fertile/curious. If you felt some shift, you could later compare how you engage with more serious things, to notice when and whether your engagement takes on a similar quality.
See the post for more justification. Especially: "Highly theoretical justifications for having fun"
Though these two excerpts have different aesthetics, they both point in about the same direction. Try looking for what actually interests you. Try playing, if you haven't in a while.
I've found it very helpful to use writing as a tool for thought. Writing forces you to actually think instead of one-shot evaluating, augments your working memory, and lets you evaluate your thoughts as a whole.
When you're using an LLM, the best way to get intelligent output is some sort of chain-of-thought tree search. See Tree of Thoughts, AutoLoom/Jacquard, MiniHF, and the recent math olympiad MCTS results. Essentially, each of these techniques has the model generate a number of possible continuations, and then evaluate which of those continuations is most promising. MCTS is a more sophisticated algorithm for doing essentially the same thing: iterative search over the space of possible continuations. This sort of prompting augments the model's thinking capacity by both: a) letting it work step-by-step, rather than in a single forward pass, and b) letting it leverage its capacity for noticing good thinking to steer itself towards good paths in thoughtspace. It's usually much easier to notice when something is correct or well-thought-through than to come up with it yourself, and the same applies for your own writing. When you write, you'll notice which paths are best and steer towards them in a way that's much more difficult to implement using only your inner monologue.
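Here's a minimal sketch of the control flow these techniques share (generate several continuations, score them, keep the best, repeat). The `generate` and `score` functions are toy stand-ins for a model and an evaluator, not any particular library's API.

```python
import random

# Minimal sketch of continuation search in the spirit of Tree of Thoughts.
def generate(context, k=4):
    # Stand-in for sampling k continuations from a language model.
    return [context + f" -> step{random.randint(0, 99)}" for _ in range(k)]

def score(candidate):
    # Stand-in for the model's own judgment of how promising the reasoning is.
    return random.random()

def best_first_search(prompt, depth=3, beam=2):
    frontier = [prompt]
    for _ in range(depth):
        candidates = [c for ctx in frontier for c in generate(ctx)]
        candidates.sort(key=score, reverse=True)
        frontier = candidates[:beam]      # keep only the most promising paths
    return frontier[0]

print(best_first_search("Problem: ..."))
```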
It generally seems like many of the most effective people throughout history spent a lot of time reading and writing. Newton likely wrote millions of words in his lifetime, Franklin was a prolific writer, and Darwin famously kept an in-depth scientific journal. This definitely isn't true of everybody who did great things, but it's likely true of many of them.
Here's a haphazard list of places writing might be helpful:
Also, try as much as possible to write to a mind you trust. Your thinking will be better if you know your intended audience will follow you when you light the way.
I'm realizing that the thing happening to this blog is exactly what DRMacIver described:
The easiest way to have this is by writing too many good posts! It’s very easy to let your standards creep too high, and you won’t be able to sustain that indefinitely. It’s good to write good posts from time to time, but you’re not going to be able to do it every day.
My recommendation is that to counteract this you should deliberately half-arse your posts at least a couple days per week. Pick a prompt and do the bare minimum amount of work needed to fulfill it.
The posts I've been writing for the past few days take a lot of time, and when I started one last night I wrote 500 words and didn't even get past the introduction. I think that's good enough reason to skip, since otherwise I'd just split the longer essay into two parts, one post each, which I don't like the aesthetics of. But I just generally think that it's not that sustainable to push myself to write a full 400-900 word post every day, when I also need to sleep in order to make the next day go well. (The posts usually take from 15 minutes to an hour to write - I do no editing aside from what I normally do while writing.) Still, I kind of like the idea of writing an average of ~500 words a day, since that means that every 3 months I've produced about a book's worth of writing.
Okay, so I just wrote all that and it turns out I kind of just write a lot very easily when left on my own. I think I still haven't fully updated on how easy it is to just write, so I've felt an undue ugh-field around finishing the aforementioned half-written post. But, okay, let's stop the stream-of-consciousness writing and get to the short thing I was actually wanting to write about:
Normies assume high-voltage agents don't exist, while schizos assume they're common. But really, both are wrong: high-voltage agents are rare but real. Nearly all people and systems act as low-voltage agents nearly all of the time, but high-voltage agents do exist.
"High voltage" as a metaphor implies something about what makes someone a 'high-voltage agent.' From what I understand, in electromagnetism, high voltage implies a powerful electric field, which causes very significant electron movement following it. The local gradient of the potential landscape (i.e. the electric field) being extremely strong causes the electrons to ignore typical constraints on their motion. So a 'high voltage' metaphor implies that some sort of large energy difference is causing uncharacteristically high motivation and agency on a local level, allowing the overall behavior to completely ignore typical constraints.
Okay, so it turns out I did accidentally write nearly 500 words, once again. And then I spent an hour talking to Claude about hyperstition. Oops. I'll leave this post at that, and catch you next time.
There's been a bit of recent criticism of 'TESCREAL' from a degrowth perspective, particularly by Emile Torres. I'm less interested in Emile's thoughts on the matter in particular, but they recommended a graphic novel called Who's Afraid of Degrowth? which I read part of and found interesting. It has the sort of reasoning that looks sound if you squint: the sort you put in an essay you don’t really believe in the conclusion of. E.g. “not the only factor” is transformed into “not an important factor” in an inference from a study. It's possible that I have a higher bar for argumentation than most, but I also think a high bar is generally correct.
It also feels like the book is written in a world where you can basically freely choose the exact way you want an economic system to work. It specifies what the system would do, but not how that happens. It tends to reference indigenous practice, implying that something like that structure would work well. I'm really uncertain about the ability of small, local coordination mechanisms to scale to populations of millions or more. I like the attempt to build something impossible, though, and do hope it can succeed.
Even if the work isn't argued well, we can use it to understand the author's ideas.
It's instructive to read Nick Land to get at least a basic grip on some of the larger forces that "growth" related political movements might be talking about.
Machinic desire can seem a little inhuman, as it rips up political cultures, deletes traditions, dissolves subjectivities, and hacks through security apparatuses, tracking a soulless tropism to zero control. This is because what appears to humanity as the history of capitalism is an invasion from the future by an artificial intelligent space that must assemble itself entirely from its enemy's resources.
― Nick Land, Fanged Noumena
According to Land, we've accidentally constructed a misaligned superintelligence for ourselves. If left to continue, the selection pressures latent in capitalism, evolution, or almost any process will push towards something like entropy-maximization: grabbing absolute power for its own sake.
(Well, that's not the whole story. But just give me a second...)
The core argument of degrowth, as stated in roughly their own terms, is that: Capitalism is out of our control and is destroying all that we value. We need to wrench back control of the future by halting it at all costs. This should sound familiar. Degrowthers essentially see what Nick Land is seeing. They see only a partial view of it, through the lens of environmentalism, political alienation, and economic inequality, but they do see it.
The degrowthers' argument initially struck me as surprisingly reasonable. The way you succeed in wielding an unaligned superintelligence is by using it to quickly seize as much value as you can and then either: a) decoupling your mechanism for growth from your mechanism for value, or b) stopping growth and sharply turning into the sort of society which maintains its values over time. And so the highly compressed degrowther thesis is that we should do (b) right now, since the misaligned superintelligence is already very scary and destructive, and things will only get worse. [1]
This isn't exactly desirable, since it effectively gives up on capturing almost all possible value in the universe. It initially looks like the only path to recover values, to preserve the human mesaoptimizer through massive growth, is to build some sort of singleton. Being unitary, the mind would not be subject to the sorts of internal selection pressures currently pushing humanity towards the kill/consume/multiply/conquer basin. [2] Ideally, we could define a mechanism for search (i.e. agency and epistemics) which is agnostic to possible values. It's unclear if this is actually possible, since mind-structure may be highly related to values.
But throughout all of this, I've been describing selection as an inevitable consequence of natural processes. But we are not nature, and if I remember correctly, we're trying to defeat nature and its injustices. After all, all of this - including Nick Land's 'alien god' - is essentially our doing: yes, the misaligned superintelligence from the future arises from convergent incentives, but we can always choose to do something different. Ideally, we would use timeless decision theories to recover singleton-like agency from distributed action. That is, we could simply choose (via a timeless decision theory) to coordinate in such a way that our values are propagated indefinitely far into the future.
(Um, maybe. I'm not totally sure this works out correctly, and I don't have a fully rigorous view of it. This feels almost as vague as Scott Alexander's ending to Meditations on Moloch. But I think timeless decision theory represents a real divergence from Nick Land's worldview, which is maybe directly, naturally derived as the final consequence of a universe where only causal decision theory works.)
Degrowth is sort of grasping towards the hope I'm presenting here, with its ideas of universal cooperation and participation, but it simultaneously has a complete lack of hope regarding higher possibilities. I'm not entirely sure why this is; I think it might vary between:
...but still, if we can cooperate across all of society to bring about our values, why not use that same energy to defeat death? Or grant ourselves the power to upend the unjust natural order?
[1] I’m not even sure degrowth actually succeeds at b. In particular, there are more subtle selection effects (like biological and cultural evolution) which will still push the world towards misalignment with our current values.
[2] This is dubious. In particular, I'd be especially suspicious of selection effects not applying under self-improvement. If we do in fact separate out agency and values, we could likely keep values constant while improving agency, but again I find such a separation scheme unlikely to work. Also, defining a static value function right now without updating it as intelligence scales seems potentially very scary. (Maybe not! I think it's quite possible that our values are cleanly specified right now, but I'm very worried about locking in misspecified ones.) We would likely want our agent to be able to grow and improve with time, but there doesn't seem to be a way to do that without giving Nick Land's god room to tip the scales. (But also, maybe that's what we want? I remain confused on this topic.)
The principle is this: Maximize the amount of information you gain, and minimize the amount your opponent gains.
Actions bring information. Speeding up your actions also speeds up your information gain, and vice versa for your opponent.
Disrupt your opponent's information-processing systems. Specifically: confuse them with contradictions, overwhelm them, divide them, and move faster than their OODA loop can follow.
Ensure your information-processing systems are resilient and robust to disruption.
Attack when your enemy is in the dark, is doubting, is hesitating.
Manipulate your enemy's attention. Use sleight of hand or shock and awe.
Ensure that every move the enemy makes leaks information to you.
Examples to consider: Blitzkrieg (see Col. John Boyd's presentations), Enigma machine, Operation Bodyguard, 2016 election (see Dominic Cummings' talk), most interesting historical battles (e.g. the Battle of Cannae), small-scale infantry tactics.
On this video by Phanimations, keeping my own experience in mind but not referring to it directly.
There is a crucial difference between problems that behave like homework and those that behave like research. A homework problem has a precise setup and well-specified conditions for success and failure which leak many bits about the solution space. Even if the setup isn't entirely precise or conditions well-specified, they were created by a human, which greatly constrains possibility-space. A research problem, on the other hand, has no such things.
We might say that the fundamental difference between a homework problem and a research problem in this dichotomy is that a homework problem comes with a prior over which answers are correct and which are incorrect. Even if you can't backpropagate from that prior to the step-by-step solution space, you have something to measure your step-by-step solution's progress against. This makes homework problems much easier, as a class. (Interestingly, this implies that difficulty in evaluation is core to agentic work - which matches my experience with LLM agents.)
Almost all single-player video games are about homework problems. Some exceptions are parts of The Witness, Minecraft, and Factorio. Improving at most multi-player video games is a research problem.
Ideally, you should seek to turn research problems into homework problems. Have something to measure your work against that is as close to individual steps as possible. It's especially helpful if this can tell you whether you might be far off from the solution (requiring rethinking deep into the chain of logic, +explore) or nearby (requiring further progress past this point, +exploit). Some priors that seem useful:
U.S.-China Rivalry in a Neomedieval World
the future of the U.S.-China rivalry will bear little resemblance to the titanic struggles of the past two centuries. U.S.-China peacetime competition appears headed to unfold under conditions featuring a high degree of international disorder, diminishing state legitimacy and capacity, pervasive and acute domestic challenges, and severe constraints imposed by economic and social factors that are vastly different from those that industrial nation-states experienced in the 19th and 20th centuries.
[…] As this report will show, U.S.-China rivalry is likely to be more profoundly shaped by a general attenuation or regression of key political, societal, economic, and security-related aspects of modern life.
[…] New technologies, values, and ideals will emerge and change societies in ways that have no historical precedent. But the impact of modern and postmodern change is likely to pale next to the overall trajectory toward regression. Accordingly, the neomedieval era will be characterized by a blend of some modern features but, more profoundly, by the resurgence of preindustrial features
My expectation is that anything like what RAND describes here eventually leads to something like civilizational collapse absent a singularity of some kind.
Their analysis also broadly agrees with my biases and some recent research I've done that I haven't written about here yet.
Ensemble everything everywhere: Multi-scale aggregation for adversarial robustness shows:
A friend asked me how to know how you feel about things, how you know what you want. I didn't exactly provide a satisfying, practical answer to their question, but it might be helpful to read. Heavily inspired by Ziz (see her glossary entry on Core) and similar sets of ideas, probably convergent with many ideas from psychoanalysis and other therapy frameworks.
Edit, September 5, 2025: Coherence therapy, especially what Joe Hudson practices, is probably the best way to think about this. Chris Lakin also has writing on this. Also, note, Ziz was not the originator of these ideas even within her canon, I just read about them there.
There is something that drives you to do things, some reason why you do not do nothing. So you do want things; there is some part of your brain which chooses one action over another.
I also have to assume that you are not completely a bundle of social expectations and traumas and so on, that you have some part of your brain that wants specific things such that you will take at least some actions independently of your environment. I have very little idea of how many people have such a thing - I have reason to believe that anywhere from 5% to 100% of people do. (I'm also not entirely sure how to clearly divide these two sorts of causes for action, since it seems like this wanting part of your mind is self-reinforcing, and changes itself based on your past actions. Something something hippocampus sleep credit assignment.)
I think (from first principles) that this part of you should fairly straightforwardly tell you what you want, since knowing what you want is helpful for getting it. However, this doesn't seem to actually be the case for most people.
At least one possible explanation is that it is legitimately helpful to hide what you want from yourself in many situations. This shows up most strongly with fawners: the best way to do what others want is to think you want the same things, and the best way to hide what you want from others is to hide it from yourself.
Another possible explanation is that it is legitimately hard to know what you want. I think this definitely makes sense over the long term - I think some people might naturally have high vs. low discount rates i.e. be short-sighted or be long-term planners. Maybe this all just comes down to being good at simulating yourself in the future. These might also just be legitimately different agent strategies - in some situations having a clear picture of the future is very helpful, in some situations it's basically useless (and in fact stops you from taking correct actions in the moment).
But it's a bit weird to fully not know how you feel about things in the short term when you're not intentionally obfuscating it. I then immediately have the question: why do you choose one thing over another?
In particular, the way I would disambiguate this from obfuscation / social expectations / trauma / etc. is to give yourself a situation where you have freedom to do what you want outside of internal/external judgement, where you will actually be okay no matter what you do, and then notice what you do.
Alternatively, if that's too much (it probably will be for people who struggle greatly with lack of self-knowledge due to obfuscation), give yourself a situation where you have even a slight freedom to indicate what you want without internal/external judgement (then react positively to that to build trust and feel freer, or update on that information).
Some descriptions of ways to get out of completely-obfuscated-wants / related problems:
The current iteration of this blog is approximately the minimum possible that feels "real" to publish — a collection of Markdown files compiled with Zola into a static site for Cloudflare Pages. It should evolve with time, but this approximately fully meets my needs, albeit inflexibly. For ease of reading by both humans and language models, I would like to have:
For the moment those are lower priority than actual writing, but they will exist at some point soon.
I'll also backport some of my old writing from Cosmic Echo, to make this a bit more centralized and well-seeded.
Anyways, I'd like to publish about 1 post per day here. I may set a Beeminder for this (as I'm trialing it right now for other things) - but I expect the best returns will be from sticking to writing even a very small, unfocused amount here each day.
Ideally, this will also be a living site. Daily writings might be leaves which don't change much, but I might like aggregators which gradually accumulate content with time. I'm not sure how to organize this properly (two top-level sections seems fine, but permits little flexibility), but that's not much of a concern for the moment.
Currently this blog is shared nowhere, but I expect to gradually share it as its quality improves.
I spent a long time as a middle/high-schooler trying to figure out how to create very natural evolution/DNA in a simulated environment. I.e. an environment where the agents' genetic codes and inner workings are relatively mundane parts of the environment, but which can still undergo substantial speciation and evolution. Like those old carykh videos, etc., but 'real'. No one has managed to do it since, to my understanding. Why?
Very often I read things on the internet, and very often it's in the rough format you're reading right now—a small personal website with essay-length thoughts on anything and everything. Here are my favorites.