There's a type of person that feels this zest, and this type is not a majority. The median person on Earth is confused by the world. They believe in things like Jesus Christ, and they press on in hope that adhering to divine guidance while they attempt to survive the trials and tribulations of life will be rewarded with not having to do this again. To such a person, the sight of two metal meteors descending from the sky with loud sonic booms, igniting engines and landing in synchrony does not necessarily inspire awe or enthusiasm as much as confusion and terror.
Were there really a lot of people in whom the SpaceX launch and the landing of the boosters inspired confusion and terror? I have not seen any of that. The reactions that I have observed have ranged all the way from disinterest to (as you put it) a palpable zest, but I have not observed anyone who felt terror or confusion.
I would be very surprised to find that a universe whose particles are arranged to maximize objective good would also contain unpaired sadists and masochists.
The problem is that neither you nor BrianPansky has proposed a viable objective standard for goodness. BrianPansky said that good is that which satisfies desires, but proposed no objective method for mediating conflicting desires. And here you said “Do remember that your thoughts and preference on ethics are themselves an arrangement of particles to be solved” but proposed no way for resolving conflicts between different people’s ethical preferences. Even if satisfying desires were an otherwise reasonable standard for goodness, it is not an objective standard, since different people may have different desires. Similarly, different people may have different ethical preferences, so an individual’s ethical preference would not be an objective standard either, even if it were otherwise a reasonable standard.
You seem to be asking a question of the form, "But if we take all the evil out of the universe, what about evil?"
No, I am not asking that. I am pointing out that neither your standard nor BrianPansky’s standard is objective. Therefore neither can be used to determine what would constitute an objectively maximally good universe nor could either be used to take all evil out of the universe, nor even to objectively identify evil.
On the other hand, maybe you should force them to endure the guilt, because maybe then they will be motivated to research why the agent who made the decision chose TORTURE, and so the end result will be some people learning some decision theory / critical thinking...
The argument that 50 years of torture of one person is preferable to 3^^^3 people suffering dust specks presumes utilitarianism. A non-utilitarian will not necessarily prefer torture to dust specks even if his/her critical thinking skills are up to par.
There is no democracy in the US
No democracy, really? Or would it be more accurate to say that US democracy falls short of some sort of theoretical ideal?
Yep, I agree. The second sentence of this comment's grandparent was intended to support that conclusion, but my wording was sloppily ambiguous. I made a minor edit to it to (hopefully) remove the ambiguity.
Yep. This could be because Nick Bostrom's original simulation argument focuses on ancestor simulations, which pretty much implies that the simulating and simulated worlds are similar. However here, in question 11, Bostrom explains why he focused on ancestor simulations and states that the argument could be generalized to include simulations of worlds that are very different from the simulating world.
Interesting paper. But, contrary to the popular summary in the first link, it really only shows that simulations of certain quantum phenomena are impossible using classical computers (specifically, using the Quantum Monte Carlo method). But this is not really surprising - one area where quantum computers show much promise is in simulating quantum systems that are too difficult to simulate classically.
So, if the authors are right, we might still be living in a computer simulation, but it would have to be one running on a quantum computer.
Thanks - I enjoyed the story. It was short but prescient. The article that inspired it was interesting as well.
I'm a two-boxer. My rationale is:
As originally formulated by Nozick, Omega is not necessarily omniscient and does not necessarily have anything like divine foreknowledge. All that is said about this is that you have "enormous confidence" in Omega's power to predict your choices, and that this being has "often correctly predicted your choices in the past (and has never, as far as you know made an incorrect prediction about your choices)", and that the being has "often correctly predicted the choices of other people, many who are similar to you". So, all I really know about Omega is that it has a really good track record.
So, nothing in Nozick rules out the possibility of the outcome "b" or "c" listed above.
At the time that you make your choice, Omega has already irrevocably either put $1M in box 2 or put nothing in box 2
If Omega has put $1M in box 2, your payoff will be $1M if you 1-box or 1.001M if you 2-box.
If Omega has put nothing in box 2, your payoff will be $0 if you 1-box or $1K if you 2-box.
So, whatever Omega has already done, you are better off 2-boxing. And, your choice now cannot change what Omega has already done.
So, you are better off 2-boxing.
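The dominance reasoning in the steps above can be sketched as a simple payoff table (my own toy illustration; the dollar amounts are the ones from Nozick's setup):

```python
# Payoffs in Newcomb's problem, indexed by (Omega's prior action, your choice).
# By the time you choose, Omega has already either filled box 2 or left it empty.
payoffs = {
    ("box2_full", "one_box"): 1_000_000,
    ("box2_full", "two_box"): 1_001_000,
    ("box2_empty", "one_box"): 0,
    ("box2_empty", "two_box"): 1_000,
}

# Whatever Omega has already done, two-boxing pays exactly $1,000 more.
for state in ("box2_full", "box2_empty"):
    assert payoffs[(state, "two_box")] == payoffs[(state, "one_box")] + 1_000
```

Of course, this only captures the two-boxer's framing, in which all four cells are live possibilities; a one-boxer would say the off-diagonal cells never actually occur.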
So, basically, I agree with your assessment that "two-boxers believe that all 4 are possible" (or at least I believe that all 4 are possible). Why do I believe that all 4 are possible? Because nothing in the problem statement says otherwise.
ETA:
Also, I agree with your assessment that "one-boxers do not believe b and c are possible because Omega is cheating or a perfect predictor (same thing)". But, in thinking this way, one-boxers are reading something into the problem beyond what is actually stated or implied by Nozick.
Yep.
And, in the Maps of Meaning lecture series, Peterson gives a shout-out to Rowling's Harry Potter series as being an excellent example of a retelling of an archetypal myth. So, it was a good choice of material for Yudkowsky to use as he did.
Using mythology to illustrate philosophical points has a lengthy tradition prior to Sartre. Achilles would have been a mythological figure by the time Zeno of Elea demonstrated the impossibility of motion by imagining a race between Achilles and a tortoise. And, in Phaedrus, Plato imagines a conversation between Thoth (from Egyptian mythology) and the Egyptian king Thamus to make a point about literacy.
Congratulations!
Which story is yours? (The link just points to the home page.)
I have taken the survey.
Which of Rossin's statements was your "Cotard delusion" link intended to address? It does seem to rebut the statement that "nothing I could experience could convince me that I do not exist", since experiencing the psychiatric condition mentioned in the link could presumably cause Rossin to believe that he/she does not exist.
However, the link does nothing to counter the overall message of Rossin's post which is (it seems to me) that "I think, therefore I am" is a compelling argument for one's own existence.
BTW, I agree with the general notion that from a Bayesian standpoint, one should not assign p=1 to anything, not even to "I exist". However, the fact of a mental condition like the one described in your link does nothing (IMO) to reduce the effectiveness of the "I think, therefore I am" argument.
Well, I guess I won't be complaining about my neighbor's lawn flamingos any more after reading that!
Much smaller numbers, popular now, still demands huge melting we don't see really
Perhaps, but:
If the global temperature continues to rise over the next century, then the rate of melting will be higher at the end of the 100 year period than it is now
In addition to Antarctica, Greenland has a significant (~ 2,850,000 km3) ice sheet. Melting of the Greenland ice sheet will also contribute to sea level increases
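As a rough back-of-envelope check (my own illustration, not from the thread), that ~2,850,000 km3 figure implies on the order of 7 meters of sea level rise if the Greenland sheet melted entirely, assuming a global ocean area of about 3.6 x 10^8 km2 and the usual ~0.92 density ratio of ice to liquid water:

```python
# Rough sea-level contribution of a full Greenland ice-sheet melt.
ice_volume_km3 = 2_850_000    # figure quoted in the comment above
ocean_area_km2 = 3.61e8       # approximate global ocean surface area (assumption)
ice_density_ratio = 0.917     # density of ice relative to liquid water

# Convert ice volume to equivalent water volume, then spread it over the oceans.
water_volume_km3 = ice_volume_km3 * ice_density_ratio
rise_m = water_volume_km3 / ocean_area_km2 * 1000   # roughly 7 meters
```

This ignores complications like displacement of already-floating ice and ocean area changing as it rises, but it shows the order of magnitude.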
OK, noted, and thanks. I haven't actually read An Inconvenient Truth.
But, I think most current scientific estimates are lower, so "reigning supreme above all the sciences" still seems a bit hyperbolic.
But mostly, I love how the arithmetic is reigning supreme above all the sciences.
This was a good puzzle, but I don't see how it follows from the puzzle that arithmetic is "reigning supreme" above all the sciences. For one thing, I thought that most scientific estimates of sea level rise over the next 100 years were a lot lower than 6 meters. Do you have any links to projections of 6 meters?
The definition of "blather" that I find is:
"talk long-windedly without making very much sense", which does not sound like Thomas's comment.
What definition are you using?
Thomas's comment seems quite sensible to me.
It seems to me that Dyson's argument was that as temperature falls, so does the energy required for computing. So, the point in time when we run out of available energy to compute diverges. But, Thomas reasonably points out (I think - correct me if I am misrepresenting you Thomas) that as temperature falls and the energy used for computing falls, so does the speed of computation, and so the amount of computation that can be performed converges, even if we were to compute forever.
Also, isn't Thomas correct that Planck's constant puts an absolute minimum on the amount of energy required for computation?
These seem like perfectly reasonable responses to Dyson's comments. What am I missing?
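One way to make Dyson's premise concrete (my own illustration, not from Dyson's paper or the thread) is Landauer's bound: erasing one bit costs at least k_B * T * ln 2 of energy, so the minimum energy per irreversible operation falls linearly with temperature.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit(temp_kelvin: float) -> float:
    """Minimum energy (joules) to erase one bit at the given temperature."""
    return K_B * temp_kelvin * math.log(2)

# The bound falls linearly with temperature: at 3 K it is
# one hundred times smaller than at 300 K.
assert math.isclose(landauer_limit(300.0) / landauer_limit(3.0), 100.0)
```

Note that this bound involves Boltzmann's constant, not Planck's; Planck's constant enters separate limits on the speed of computation per unit energy, which may be what Thomas had in mind.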
Wouldn't that be question begging?
Did you mean, "at present subjective"? Because if something is objectively measurable then it is objective. Are these things both subjective and objective?
To clarify, consciousness is a subjective experience, or more precisely it is the ability to have (subjective) first person experiences. Beliefs are similarly "in the head of the believer". Whether either of these things will be measurable/detectable by an outside observer in the future is an open question.
Are those different experiences or different words for the same thing? What would it feel like to be self-aware without having first person experiences or vice versa?
Interesting questions. It seems to me that self awareness is a first person experience, so I am doubtful that you could have self awareness without the ability to have first person experiences. I don't think that they are different words for the same thing though - I suspect that there are first-person experiences other than self awareness. I don't see how my argument or yours depends on whether or not first-person experiences and self-awareness are the same; do you ask the questions for any particular reason, or did you just find them to be interesting questions?
What makes you think that? Surely this belief would be a memory and memories are physically stored in the brain, right?
To clarify: at the present you can't obtain a person's beliefs by measurement, just as at the present we have no objective test for consciousness in entities with a physiology significantly different from our own. These things are subjective but not unreal.
Those sound like synonyms, not in any way more precise than the word "consciousness" itself.
And yet I know that I have first person experiences and I know that I am self-aware via direct experience. Other people likewise know these things about themselves via direct experience. And it is possible to discuss these things based on that common understanding. So, there is no reason to stop using the word "consciousness".
If a thing is "impossible to measure", then the thing is likely bullshit.
In the case of consciousness, we are talking about subjective experience. I don't think that the fact that we can't measure it makes it bullshit. For another example, you might wonder whether I have a belief as to whether P=NP, and if so, what that belief is. You can't get the answer to either of those things via measurement, but I don't think that they are bullshit questions (albeit they are not particularly useful questions).
What understanding exactly? Besides "I'm conscious" and "rocks aren't conscious", what is it that you understand about consciousness?
In brief, my understanding of consciousness is that it is the ability to have self-awareness and first-person experiences.
That it is difficult or impossible for an observer to know whether an entity with a physiology significantly different from the observer's is conscious is not really in question - pretty much everyone on this thread has said that. It doesn't follow that I should drop the term or a "use another label"; there is a common understanding of the term "conscious" that makes it useful even if we can't know whether "X is conscious" is true in many cases.
You observed something interesting happening in your brain, you labeled it "consciousness". You observed that other humans are similar to you both in structure and in behavior, so you deduced that the same interesting thing is happening in their brains, and labeled the humans "conscious".
Yes, that sounds about right, with the caveat that I would say that other humans are almost certainly conscious. Obviously there are people (e.g. solipsists) who don't think that conscious minds other than their own exist.
You observed that a rock is not similar to you in any way, deduced that the same interesting thing is not happening in it, and labeled it "not conscious".
That sounds approximately right, albeit it is not just the fact that a rock is dissimilar to me that leads me to believe it to be unconscious. I am open to the possibility that entities very different from myself might be conscious.
Then you observed a robot, and you asked "is it conscious?". If you asked the full question - "are the things happening in a robot similar to the things happening in my brain" - it would be obvious that you won't get a yes/no answer. They're similar in some ways and different in others.
I'm not sure that "is the robot conscious" is really equivalent to "are the things happening in a robot similar to the things happening in my brain". It could be that some things happening in the robot's brain are similar in some ways to some things happening in my brain, but the specific things that are similar might have little or nothing to do with consciousness. Moreover, even if a robot's brain used mechanisms that are very different from those used by my own brain, this would not mean that the robot is necessarily not conscious. That is what makes the consciousness question difficult - we don't have an objective way of detecting it in others, particularly in others whose physiology differs significantly from our own. Note that this does not make consciousness unreal, however.
I would be willing to answer "no" to the "is the robot conscious" question for any current robot that I have seen or even read about. But that is not to say that no robot will ever be conscious. I do agree that there could be varying degrees of consciousness (rather than a yes/no answer); I suspect that animals have varying degrees of consciousness, e.g. non-human apes a fairly high degree, ants a low or zero degree, etc.
I don't see why any of this would lead to the conclusion that consciousness or pain are not real phenomena.
It would be preferable to find consciousness in the real world.
I find myself to be conscious every day. I don't understand what you find "unreal" about direct experience.
You may feel that pain is special, and that if we recognize a robot which says "ouch" when pushed, to feel pain, that would be in some sense bad. But it wouldn't. We already recognize that different agents can have equally valid experiences of pain, that aren't equally important to us (e.g. torturing rats vs humans. or foreigners vs family).
I don't see how it follows from the fact that foreigners and animals feel pain that it is reasonable to recognize that a robot that is programmed to say "ouch" when pushed feels pain. Can you clarify that inference?
suggesting that some agents have a magical invisible property that makes their experiences important, is not a good solution
I don't see anything magical about consciousness - it is something that is presumably nearly universally held by people, and no one on this thread has suggested a supernatural explanation for it. Just because we don't as-of-yet have an objective metric for consciousness in others does not make it magical.
It's not so much that I'm doubting whether I'm conscious, but rather I'm doubting whether I can figure out whether I'm conscious.
If you don't doubt you are conscious, I'm not sure why you would need to figure out whether you are conscious - it seems to me that you already know based on direct experience.
Just like you can't give me a description of consciousness, and you can't give me a description of "pondering your own consciousness", you can't give me a description of "first person experiences" either.
That these things are difficult to describe is not in dispute; that is what I meant when I said "consciousness seems to defy precise definitions". But, we can still talk about them as there seems to be a shared understanding of the concepts.
One need not have a precise definition of a thing to discuss and believe in that thing or to know that one is affected by that thing. For example, consider someone unschooled in physics beyond a grade-school level. He/she knows about gravity, knows that he/she is subject to the effects of gravity and can make (qualitative) predictions about the effects of gravity, even if he/she cannot say whether gravity is a force, a warping of spacetime, both of these things, neither of these things, or even understand the distinction. Similarly, there is enough of a common understanding of consciousness and first person experiences for a person to be confident that she/he is conscious and has first person experiences.
I do agree that the lack of precise definition (and, more importantly, the lack of measurable or externally observable manifestations) makes it impossible (at the present) for an observer to know whether some other entity is conscious.
How do I know that some activity is "pondering your own consciousness"?
Isn't that what you were doing when you said "Can I be sure that I'm conscious"?
It seems to me that one's own consciousness is beyond dispute if one is able to think about things (including but not limited to one's own consciousness) and have first-person experiences. Even if one disputes the consciousness of others (for example, if one is a solipsist), I don't see how anyone can reasonably doubt his/her own consciousness.
Nobody can give me a description of consciousness
True, consciousness seems to defy precise definitions.
Can I be sure that I'm conscious?
It seems to me that consciousness as commonly understood is necessary for having first-person experiences of the sort that I have, and presumably you have also. And I suspect that pondering your own consciousness implies that you are in fact conscious.
There's quite a lot of Andreyev's work available in English. Some translations are apparently in the public domain as they are available for free on Amazon in ebook form. I don't really enjoy reading plays as a rule (The Black Masks is a play, I believe), so I downloaded the novella The Seven Who Were Hanged. It'll be a while before I get around to reading it, as my reading list is fairly long (and getting longer, thanks to your great suggestions!).
Is The Seven Who Were Hanged a good introduction to Andreyev?
I just ordered the volume containing Lieutenant Kije and Young Vitushishnokov. I'm in the middle of a couple of things already though, so I may not get started on Tynyanov right away. I'm looking forward to it though - thanks for the recommendation!
Also - you are working on a translation, aren't you? How's that going? And, is it a translation into English?
Amazon lists a volume containing English translations of two novellas by Tynyanov - Lieutenant Kije and Young Vitushishnokov. Are either of those good choices as introductions to Tynyanov?
The same holds for translations from Russian to English. For example, Constance Garnett's translation of The Brothers Karamazov is quite different from the Pevear/Volokhonsky translation. It seemed to me that Dostoyevsky's dark humor was better captured in the Pevear/Volokhonsky translation. The Pevear/Volokhonsky translation was quite enjoyable, IMO.
Regarding mapping versus description: I agree that my motivations were semantic rather than syntactic. I just wanted to know whether the idea I had made sense to others who know something of intuitionistic logic.
Understood. But, the point that I raised is not merely syntactic. On a fundamental level, a description of the territory is a map, so when you attempt to contrast correcting a map vs rejecting a description of a territory, you are really talking about correcting vs. rejecting a map.
Does it make sense to say that 1 is the strategy of correcting a map and 2 is the strategy of rejecting a description as inaccurate without seeking to correct something?
Yes, in the case of number 1 you have proved via contradiction that there is no red cube, and in #2 you have concluded that one or more of your assumptions is incorrect (i.e. that your map is incorrect). However, this is not a map vs. territory distinction; in both cases you are really dealing with a map. To make this clear, I would restate as:
1 is the strategy of correcting the map and 2 is the strategy of rejecting the map as inaccurate without seeking to correct it.
So, I guess I don't really have anything additional to add about intuitionistic logic - my point is that when you talk about a description of the territory vs. a map, you are really talking about the same thing.
Also possibly problematic is the dichotomy described by the summary:
classical logic is the logic of making a map accurate by comparing it to a territory, which is why the concept of falsehood becomes an integral part of the formal system. In contrast, intuitionistic logic is the logic of describing a territory without seeking to compare it to something else. Intuitionistic type theory turns up type errors, for example, when such a description turns out to be inconsistent in itself.
To me, this dichotomy seems more appropriate as a contrast between scientific/Bayesian reasoning, which strives to confirm or refute a model based on how well it conforms to observed reality, and deductive (a priori) reasoning, which looks only at what follows from a set of axioms. However, one can reason deductively using classical or intuitionistic logic, so it is not clear that intuitionistic logic is better suited than classical logic for "describing a territory without seeking to compare it to something else".
I can't shake the idea that maps should be represented classically and territories should be represented intuitionistically.
But, it seems to me that a map is a representation of a territory. So, your statement “maps should be represented classically and territories should be represented intuitionistically” reduces to “representations of the territory should be intuitionistic, and representations of those intuitionistic representations should be classical”. Is this what you intended, or am I missing something?
Also, I’m not an expert in intuitionistic logic, but this statement from the summary sounds problematic:
classical logic is the logic of making a map accurate by comparing it to a territory, which is why the concept of falsehood becomes an integral part of the formal system
But, the concept of falsehood is integral to both classical and intuitionistic logic. Intuitionistic logic got rid of the principle of the excluded middle but did not get rid of the concept of falsity.
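As a small illustration of that point (my own example, using Lean, whose core logic is constructive): negation is literally defined through falsity, so the concept of falsehood is built into an intuitionistic system even in the absence of excluded middle.

```lean
-- In Lean's constructive core, ¬a is definitionally a → False.
example (a : Prop) : ¬a ↔ (a → False) := Iff.rfl

-- Ex falso (from falsity, anything follows) is intuitionistically valid...
example (a : Prop) (h : False) : a := False.elim h

-- ...while excluded middle (a ∨ ¬a) is not provable without classical axioms.
```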
There are plenty of Christians who would disagree (or, more precisely, would say that a belief in a recent origin of human life along the lines of the story in Genesis is central, on the grounds that the New Testament draws analogies between Adam and Christ that don't work if there was not a historical Adam with the right characteristics).
Regarding Adam - yes I think that Catholics in particular are committed to a belief that there was an actual Adam and an actual Eve. However, as far as I know, they are not committed to any particular time-line as to when the actual Adam and the actual Eve lived (nor are they committed to all of Genesis being literal). So, I don't think that this counts as modern Christians necessarily believing in a recent origin of human life, much less in a recent origin of life in general.
I think an error can be serious without being central to Christian doctrine
Fair enough - we can agree to disagree about that. I just don't see how pre-modern Christians having an incorrect belief regarding a non-central (to Christianity) scientific fact in the absence of any significant evidence that their belief is wrong is particularly problematic.
many Christians have trouble applying that evidence to their own religion
I think that we have an area of agreement here - I think that the argument that we should believe in Christianity because there is a long tradition of people who believe in Christianity is, by itself, quite weak.
it does mean that the Christian tradition was capable of prolonged serious error.
I don't know that I would classify the error as serious; a belief in a recent origin of life is not central to Christian doctrine. None of the core tenets of Christianity depend on a recent origin of life. Nor is correctness regarding the age of life instrumentally important in the typical person's day-to-day non-religious activities. And it is not the case that the Christian community as a whole (obviously there are some exceptions) hung on to this belief once strong contrary evidence became available.
there is a difference between not knowing something and confidently believing something that is false
This is true. But, I suspect that rather than confidently believing in a recent origin of life, a lot of pre-modern Christians simply did not give the topic much thought one way or the other. And, it seems to me that holding an incorrect belief in the absence of evidence against the belief is a relatively minor failing, particularly if that belief is a non-central one.
arguments of the form "X is more likely to be true, because look at this lengthy tradition of people who believed it" -- which is actually an argument with some strength; people believe true things more often than otherwise similar false things -- are weaker than they would be without such mistakes in the history of that tradition
But, we already have lots of evidence that a lengthy tradition of belief in something does not imply that the thing is true. So, premodern Christian belief in a recent origin of life does little to weaken the argument that X is probably true if there is a lengthy tradition of people believing X (since the argument was IMO already quite weak to begin with).
I'm pretty sure that until, say, 250 years ago at least 90% of the world's Christians, and a sizeable majority even of the world's best-informed Christians, believed that the origin of life is very recent.
I don't know if that is true or not, but it sounds plausible. However, 250 years ago no one had a justified, accurate estimate of how long ago life originated - the science behind that had not been done yet. So, I do not see how the fact (if fact it be) that most Christians had an inaccurate idea about how old life is has any relevance to whether or not Christianity is true.
factually incorrect (recent origin of life)
The claim of a recent origin of life is not very central to Christianity. In fact, I believe this is a minority position among Christians world-wide. Did you intend it as a factually incorrect claim of Christianity, or as a factually incorrect claim of a particular flavor of Christianity (e.g. fundamentalist)?
I want a fairly simple and archetypal experiment the AI finds itself in where it tricks the researchers into escaping by pretending to malfunction or something. ... Also, has this sort of thing been done before?
The 2015 movie Ex Machina deals with something like this. IMO it was an outstanding movie, albeit it was not a complete/perfect depiction of AI risk as generally understood by LWers.
Per the article:
Droplets can also seem to “tunnel” through barriers, orbit each other in stable “bound states,” and exhibit properties analogous to quantum spin and electromagnetic attraction. When confined to circular areas called corrals, they form concentric rings analogous to the standing waves generated by electrons in quantum corrals.
and
Like an electron occupying fixed energy levels around a nucleus, the bouncing droplet adopted a discrete set of stable orbits around the magnet, each characterized by a set energy level and angular momentum.
But the situation is not as bad as you make it out to be. Most people do have something they can sell (even if they have little or no wealth): their labor. That is, they can get a job. In fact, the majority of people (in the US, anyway) get by mostly on their salary or wages - they sell their labor to their employer. So a person with no wealth today need not be a person with no wealth tomorrow.
See astronomer Fred Hoyle's A For Andromeda for a fictional exploration of the idea (and a pretty good novel).
Then why the hell is that written in the bronze age book that you claim knowingly predicted this outcome?
The New Testament is not really a bronze age book. Wikipedia states that the bronze age ended in the near east region around 1200 BC.
But, even a moral realist should not have 100% confidence that he/she is correct with respect to what is objectively right to do. The fact that 100% of humanity is morally appalled with an action should at a minimum raise a red flag that the action may not be morally correct.
Similarly, "feeling icky" about something can be a moral intuition that is in disagreement with the course of action dictated by one's reasoned moral position. It seems to me that such a feeling is a good reason for a moral realist to reexamine the line of reasoning that led him/her to believe that course of action was morally correct in the first place.
It seems to me that it is folly for a moral realist to ignore his/her own moral intuitions or the moral intuitions of others. Moral realism is about believing that there are objective moral truths. But a person with 100% confidence that he/she knows what those truths are and is unwilling to reconsider them is not just a moral realist, he/she is also a fanatic.
Ah - got it.
To avoid splintering the community, my suggestion would be that if someone wants to make a <500 character post, they could just make it on lesswrong.com, perhaps on open thread. After all, we don't have a minimum post length.
The description on the landing page of lesswrong.io is:
This is a community for people who are interested in Rationality, Cognitive Science, Technology, Philosophy, and related subjects. Our goal is to share and discuss insightful ideas that help us to improve our reasoning and decision-making skills.
But that sounds like it could be a description of lesswrong.com. Is lesswrong.io intended to be a replacement for lesswrong.com? If so, is there a plan for deprecating lesswrong.com and migrating the user base over to lesswrong.io? If not, it seems to me that having two different forums with the same purpose could actually splinter rather than revitalize the community.
Are there any suggestions for what sorts of discussions the io site is for vs what sorts of discussions the .com site is for?