Even just writing down loose associations and your emotional state is enough; that's how you get the ball rolling. Try it for two weeks, even if it feels useless. (Unless you're taking antidepressants, in which case this might actually be ineffective.) I know this doesn't sound worthwhile, but I know from experience (mine and others') that it usually works.
That's common for beginners. If you want to give this a go, you should start by writing down fleeting, vague associations. "Something a bit sad or disappointing. A car. School and also not school. The texture of cinnamon rolls."
It doesn't matter that you can't remember anything concrete at first. Eventually, you'll remember more and more.
I don't agree with any of this. When I was really into lucid dreaming, I discovered that the best approach is two-fold: keep a detailed dream journal, and make a habit of performing reality checks. That's it. If you don't keep a dream journal, you'll likely have lucid dreams and just ... forget about them. And as for reality checks, my preferred one is trying to push my thumb through my palm. You can do it casually anywhere and it's an instant confirmation.
When I was actively trying to induce them, I often had periods where I had several lucid dreams per night. Gradually, I mastered dream flight (it's so weird how it's a skill), and I became better able to maintain my lucid state, which is often the trickiest part.
I was never interested in erotic, oneironautic adventures. I spent most of my time flying, which doesn't really get old.
This comment is going to sound mean. Just a fair warning.
This strikes me as a classic case of a guy thinking he's a prophet after doing a bunch of psychedelics. I've seen it over and over again. They are so convinced that they've "got it" that they often manage to convince others they do as well. You could call it the Messiah complex because, well, duh.
And you know what? Being around a bunch of people who are really nice to you feels good. And that feeling of it "clicking" is the feeling of your cognitive dissonance being wiped out by highly motivated reasoning: "They're a bunch of loons, but I feel like I belong with them. I'm a genius, so if I belong with them, they must be a bunch of geniuses as well. Oh! We're a bunch of geniuses! The Weird Spiritual Teachings are true!"
Many incredibly smart scientists in Japan joined a doomsday cult (Aum Shinrikyo) because its members made them feel like they finally belonged somewhere. Loneliness is a hell of a drug. It's what gets you sucked into cults.
From what I've read, integral theory seems closer to a mystical cult than a scientific framework. And I say this as someone who is quite open to process philosophy and systems science, both of which seem vaguely related to whatever integral theory is trying to be.
Not all who wander are lost.
I believe that the inner sense you are talking about is what we call love. We see the beauty around us, and we want to protect it. There are potential paths in front of us. There is a path whereby life is destroyed. There is a path whereby it is saved. Our mission is to keep it on the safe path, so that future generations can continue our mission when we are gone. We do this out of love. As we come to see that every living thing on earth depends on each other, our love grows so that it can embrace it all.
This is why we are willing to make sacrifices: what we are protecting is greater than all of us. Our life gains meaning and purpose when we find that it aligns with this mission.
We plant seeds today so that coming generations may enjoy the shade. That is our love.
All analogies rely on isomorphisms. They simply refer to shared patterns. A good analogy captures many structural regularities shared between two different things, while a bad one captures only a few.
The field of complex adaptive systems (CAS) is dedicated to the study of structural regularities between various systems operating under similar constraints. Ant colony optimization and simulated annealing can be used to solve an extremely wide range of problems precisely because complex adaptive systems share so many structural regularities.
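To make that concrete, here is a minimal simulated annealing sketch in Python. The toy landscape, parameter values, and function names are all my own invention for illustration; the point is just that the same domain-agnostic loop solves problems from many different fields once they're phrased as energy minimization.

```python
import math
import random

def simulated_annealing(energy, neighbor, x0, t0=1.0, cooling=0.995, steps=10_000):
    """Minimize `energy` by occasionally accepting worse moves with a
    temperature-dependent probability, mimicking annealing in metallurgy."""
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(steps):
        candidate = neighbor(x)
        e_new = energy(candidate)
        # Always accept improvements; accept regressions with prob. exp(-dE/T),
        # which shrinks as the temperature cools.
        if e_new < e or random.random() < math.exp((e - e_new) / t):
            x, e = candidate, e_new
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling  # geometric cooling schedule
    return best_x, best_e

# Toy problem: a rugged 1-D landscape with many local minima.
rugged = lambda x: x ** 2 + 10 * math.sin(3 * x)
step = lambda x: x + random.uniform(-0.5, 0.5)

print(simulated_annealing(rugged, step, x0=5.0))
```

Nothing in the loop knows anything about the problem; all the domain knowledge lives in the `energy` and `neighbor` functions. Ant colony optimization has the same character.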
I worry that a myopic focus will result in a lot of time wasted on lines of inquiry that already have parallels in a number of different fields. If we accept that the problem of inner alignment can be formalized, it would be very surprising to find that it is unique in the sense of having no parallels in nature, especially considering the obvious general analogy to the problem of cancer, which may or may not provide insight into the alignment problem.
What I have to offer is yet another informal perspective, but one that may further the search for formal approaches. The structure of the inner alignment problem is isomorphic to the problem of cancer. Cancer can be considered a state in which a cell employs a strategy that is not aligned with that of the organism or organ to which it belongs. One might expect, then, that advances in cancer research will offer solutions that can be translated into the terms of AI alignment. For this to work, one would have to construct a dictionary to facilitate the translation; a toy version is sketched below.
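Purely to illustrate what such a dictionary might look like, here is a toy version in Python. Every correspondence in it is a guess of mine, not an established result:

```python
# Hypothetical cancer-to-alignment correspondences, invented here purely
# to illustrate the "translation dictionary" idea; none are established.
CANCER_TO_ALIGNMENT = {
    "cell": "subsystem / mesa-optimizer",
    "organism": "overall system / base optimizer",
    "oncogenic mutation": "objective misgeneralization",
    "immune surveillance": "oversight and interpretability tooling",
    "apoptosis (programmed cell death)": "corrigible shutdown",
    "metastasis": "misbehavior spreading beyond the original context",
}

def translate(cancer_term: str) -> str:
    """Look up a cancer-research term and return a candidate alignment analogue."""
    return CANCER_TO_ALIGNMENT.get(cancer_term, "no proposed analogue yet")

print(translate("apoptosis (programmed cell death)"))
```

The real work, of course, would be showing that the mapped concepts obey the same formal constraints rather than just sounding alike.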
A major benefit of this approach would be the ability to leverage the efforts of some of the greatest scientists of our time working on solving a problem that is considered to be of high priority. Cancer research gets massive funding. Alignment research does not. If the problem structure is at least partly isomorphic, translation should be both possible and beneficial.
That only works if you reject determinism. If the initial conditions of the universe resulted in your decision by necessity, then it's not your decision, is it?
Moral realism:
I think determinism qualifies. Morality implies right versus wrong which implies the existence of errors. If everything is predetermined according to initial conditions, the concept of error becomes meaningless. You can't correct your behavior any more than an atom on Mars can; que sera, sera. Everything becomes the consequence of the initial conditions of the universe at large and so morality becomes inconsequential. You can't even change your mind on this topic because the only change possible is that dictated by initial conditions. If you imagine that you can, you do so because of the causal chain of events that necessitated it.
There's no rationality or irrationality either because these concepts imply, once again, the possibility of errors in a universe that can't err.
You're an atheist? Not your choice. You're a theist? Not your choice. You disagree with this sentiment? Again: que sera, sera.
How can moral realism be defended in a universe where no one is responsible for anything?
I do indeed make myself laugh at times. I think it has something to do with depth. The consequence of a line of thinking can be surprising, and that's probably relevant.
That's an interesting way of looking at it. Feynman had a hunch on the topic, which he shared in his Nobel Prize speech: nature is simple in some sense. We can describe things in many different ways without knowing that we're describing the same thing. Which, he said, is a sort of simplicity.
I absolutely agree. It's a new way of looking at life.
I didn't anticipate that anyone might think I meant that holism itself was a novel idea, biological or otherwise. To clarify: it's not. It has a long history. But most modern versions can be traced back to the Manhattan Project, which some would consider surprising. Recently, the same basic version of biological holism has popped up in several different fields. This convergence, or consilience if you will, is interesting if only because consilience in science is interesting in general.
The historical background was provided as context for those who might need it. I've heard people say here before that holism is something of a collective blindspot. So why not bring it up for discussion?
Daniel Dennett and Michael Levin have been promoting this view recently. Friston's free energy principle is also a recent development. Flack et al.'s notion of the information hierarchy is also new. So is Morowitz and Smith's work on energy flow, and the England group's work on dissipation-driven organization. Big-picture syntheses that fit conveniently into this framework, such as The Systems View of Life, Dance to the Tune of Life, Experiences in the Biocontinuum, and Incomplete Nature, have all made recent appearances. Some might find it interesting to consider all of this in terms of the difficulties in making sense of nonlinear systems, which is why I framed it as such. It's an old problem with novel developments.
That's a perfectly reasonable concern. Details keep you tethered to reality. If a model disagrees with experiment, it's wrong.
Personally, I see much promise in this perspective. I believe we'll see many interesting medical interventions in the coming decades inspired by this view in general.
You're right that it probably makes more sense to think of it as a perspective rather than a paradigm. Yet, I imagine that some useful fundamental assumptions may be made in the near future that could change that. If nothing else, a shared language to facilitate the transfer of relevant information across disciplines would be nice. And category theory seems like an interesting candidate in that regard.
I disagree with what you said about optimization, though. Phylogenetic and ontogenetic adaptations both result from a process that can be thought of as optimization. Sudden environmental changes don't propagate back to the past and render this process void; they serve as novel constraints on adaptation.
If I optimize a model on dataset X, its failure to demonstrate optimal behavior on dataset Y doesn't mean the model was never optimized. And it doesn't imply that the model is no longer being optimized.
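Here is a minimal sketch of that point with made-up data: a model fit on one input distribution looks terrible on a shifted one, even though the optimization on X went exactly as intended.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dataset X: a noisy linear relationship on the interval [0, 1].
x_train = rng.uniform(0, 1, size=200)
y_train = 2.0 * x_train + rng.normal(0, 0.05, size=200)

# Fit a degree-5 polynomial; on X this achieves a very low squared error.
coeffs = np.polyfit(x_train, y_train, deg=5)

def mse(x, y):
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# Dataset Y: the same underlying rule, but a shifted input distribution.
x_test = rng.uniform(2, 3, size=200)
y_test = 2.0 * x_test

print(f"MSE on X: {mse(x_train, y_train):.4f}")  # small
print(f"MSE on Y: {mse(x_test, y_test):.4f}")    # typically far larger
```

The model's failure on Y says nothing about whether optimization happened; it says the constraints changed.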
To use the example mentioned by Yudkowsky in his post on the topic: ice cream acts as a superstimulus because the preferences it exploits resulted from a process that can be considered optimization. On an evolutionary timeline, we are but a blip. As in the case of dataset Y, the observation that preferences resulting from an evolutionary process can prove maladaptive when constraints change doesn't mean there's no optimization at work.
My larger argument (which I now see I failed to communicate properly) was that the appearance of optimization ultimately derives from chance and necessity. A photon doesn't literally perform optimization, but it's useful for us to pretend that it does. Likewise, an individual organism doesn't literally perform optimization either, but it's useful to pretend that it does if we want to predict its behavior. Say I offered you $1,000 for waving your hand at me. By assuming that you are an individual who wants money, I could predict that you would do so. This is, of course, a trivial prediction, but it rests on the assumption that you are an agent with goals. Making that assumption comes naturally to us, and that's precisely my point. That's Dennett's intentional stance.
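The photon remark is a nod to Fermat's principle of least time. A quick numerical toy (setup and numbers are my own) shows the as-if-optimization view at work: grid-searching for the crossing point that minimizes travel time recovers Snell's law, even though no photon is literally searching for anything.

```python
import numpy as np

# Light travels from A = (0, 1) in medium 1 to B = (1, -1) in medium 2,
# crossing the interface y = 0 at some point (x, 0). Fermat's principle
# says the realized path minimizes the total travel time.
v1, v2 = 1.0, 0.5  # propagation speeds in the two media (arbitrary units)

def travel_time(x):
    # time = distance / speed, summed over the two legs of the path
    return np.hypot(x, 1.0) / v1 + np.hypot(1.0 - x, 1.0) / v2

xs = np.linspace(0.0, 1.0, 100_001)
x_star = xs[np.argmin(travel_time(xs))]  # least-time crossing point

# Snell's law predicts sin(theta1) / v1 == sin(theta2) / v2 at the optimum.
sin1 = x_star / np.hypot(x_star, 1.0)
sin2 = (1.0 - x_star) / np.hypot(1.0 - x_star, 1.0)
print(sin1 / v1, sin2 / v2)  # the two ratios come out (nearly) equal
```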
I'm not sure I'm getting my point across exactly, but I want to emphasize that I see no apparent conflict between the content of my post above and the idea of treating individual organisms as adaptation executers (unless the implication is that phylogenetic learning is qualitatively different from ontogenetic learning).