Posts
Comments
I'm upvoting this because the community could use more content challenging commonly held views, and some people do need to treat Eliezer as more fallible than they currently do.
That said, I find most of your examples unpersuasive. With the exception of some aspects of p-zombies, where you do show that Eliezer has misinterpreted what people are saying when they make this sort of argument, most of your arguments are not compelling evidence that Eliezer is wrong, although they do point to his general overconfidence (which seems to be a serious problem).
For what it is worth, one of my very first comments was [objecting to Eliezer's use of phlogiston as an example of a hypothesis that did not generate predictions](https://www.lesswrong.com/posts/RgkqLqkg8vLhsYpfh/fake-causality?commentId=4Jch5m8wNg8pHrAAF).
What does ELK stand for here?
This is probably the best argument I have seen yet for being concerned about what things like GPT are going to be able to do. Very eye opening.
66.42512077294685%
This should not be reported this way. It should be reported as something like 66%. The other digits are not meaningful.
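A minimal sketch of the point (the variable names are mine):

```python
p = 66.42512077294685  # the reported figure, in percent

# The trailing digits imply far more precision than the estimate has.
# Rounding to the nearest whole percent keeps all the meaningful content.
reported = f"{round(p)}%"
print(reported)  # 66%
```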
I don't know of any broader, larger trends. It is worth noting here that the Rabbis of the Talmud themselves thought that the prior texts (especially the Torah itself) were infallible, so it seems that part of what might be happening is that over time, more and more gets put into the very-holy-text category.
Also, it seems important to distinguish here between being unquestionably correct with being literal. In a variety of different religions this becomes an important distinction and often a sacrifice of literalism is in practice made to preserve correctness of a claim past a certain point. Also note that in many religious traditions, the traditions which are most literal try to argue that what they are doing is not literalism but something more sophisticated. For example, among conservative Protestants it isn't uncommon to claim that they are not reading texts literally but rather using the "historical-grammatical method."
MWI doesn't say anything about other constants: the other parts of our wavefunction should have the same constants. However, other multiverse hypotheses do suggest that physical constants could be different.
That seems like an accurate analysis.
I'm actually more concerned about an error in logic. If one estimates a probability of, say, k that in a given year climate change will cause an extinction event, then the probability of it occurring over a given string of years is not the obvious one, since part of what goes into estimating k is the chance that climate change can cause such an event at all.
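A minimal numerical sketch of this point; the values of q, h, and n below are purely illustrative assumptions of mine, not estimates:

```python
# Model uncertainty is shared across years, so it cannot be compounded
# independently. Suppose climate change can cause extinction at all with
# probability q, and, if it can, does so in any given year with hazard h.
q = 0.1
h = 0.001
k = q * h        # the naive single-year estimate
n = 1000         # number of years considered

naive = 1 - (1 - k) ** n          # wrongly treats each year as an independent draw
mixture = q * (1 - (1 - h) ** n)  # shares the "is it possible at all?" uncertainty

print(naive, mixture)  # the mixture probability can never exceed q
```

Note that the mixture answer is bounded above by q no matter how many years pass, while the naive compounding eventually approaches 1.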
Mainstream discussion of existential risk is becoming more of a thing. A recent example is this article in The Atlantic. They mention a variety of risks but focus on nuclear war and worst-case global warming.
When people arguing with VoiceOfRa got several downvotes in a row, the conclusion drawn was sockpuppets.
There was substantially more evidence that VoiceOfRa was downvoting in a retributive fashion, including database evidence.
Slashdot had Karma years before Reddit and was not nearly as successful. Granted it didn't try to do general forum discussions but just news articles, but this suggests that karma is not the whole story.
Further possible evidence for a Great Filter: A recent paper suggests that as long as the probability of an intelligent species arising on a habitable planet is not tiny, at least about 10^-24 then with very high probability humans are not the only civilization to have ever been in the observable universe, and a similar result holds for the Milky Way with around 10^-10 as the relevant probability. Article about paper is here and paper is here.
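The structure of the argument can be sketched numerically; all the figures below are my own illustrative assumptions, not the paper's:

```python
import math

# With N habitable planets, each independently hosting a civilization with
# probability p, the chance that we are alone is (1 - p)^N ~ exp(-N * p).
N = 1e24   # assumed planet count for the observable universe
p = 1e-22  # a per-planet probability above the ~1e-24 threshold

p_alone = math.exp(-N * p)
print(p_alone)  # astronomically small once N * p is large
```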
The most interesting unknown in the future is the time of creation of Strong AI. Our priors are insufficient to predict it because it is such a unique task.
I'm not sure this follows. The primary problems with predicting the rise of Strong AI apply to most other artificial existential risks also.
Research on expert judgement indicates experts are just as bad as nonexperts in some counterintuitive ways, like predicting the outcome of a thing.
Do you have a citation for this? My understanding was that in many fields experts perform better than nonexperts. The main thing that experts share in common with non-experts is overconfidence about their predictions.
If people want to lock in their predictions they can do so on Prediction Book here.
I am not making claims about "any sense of order", but going by what I read, European police have lost control of some chunks of their territory.
In this context that's what's relevant, since VoiceOfRa talked about "European countries that have given up enforcing any sense of order in large parts of their major cities." If you aren't talking about that then how is it a relevant response?
Can you explain why you see the probability of a SETI attack as so high? If you are a civilization doing this, not only does it require extremely hostile motivations, it also involves a) making everyone aware of where you are (making you a potential target), b) hiding extremely subtle hostility in an AI that apparently looks non-hostile, and c) declaring your own deep hostility to anyone who notices it.
What probability do you assign to this happening? How many conjunctions are involved in this scenario?
Yes, that would work. I think I was reacting to the phrasing and imagined something more cartoonish, in particular one where the air conditioner is essentially floating in space.
You seem to be operating under the impression that subjective Bayesians think that Bayesian statistical tools are always the best tools to use in different practical situations. That's likely true of many subjective Bayesians, but I don't think it's true of most "Less Wrong Bayesians."
I suspect that there's a large amount of variation in what "Less Wrong Bayesians" believe. It also seems that at least some treat it more as an article of faith or tribal allegiance than anything else. See for example some of the discussion here.
What do you see as productive in asking this question?
Expanding the orbit of the Earth works under the known laws of physics but wouldn't be practically doable at all. A giant air conditioner wouldn't work for simple physics reasons.
Problems can have a mathematical aspect without being completely solvable by math.
The sourcing there is weak and questionable at best. That people assert that areas are "no-go" is pretty different than there being a genuine lack of any sense of order, and that's even before one looks at the issue of whether this is any different from some areas simply being higher in crime than others.
Still reading, quick note:
tradion
Should be tradition?
That seems to indicate that summarizing what they've said as the average age of death being 72 years is not accurate.
Not far indeed: global life expectancy at birth was 26 years in the Bronze Age, and in 2010 was 67.2. Five years ago our life expectancy at birth was more than double what it had been.
This is a little misleading because low life expectancy at birth was to a large extent a function of very high infant mortality. It is true that even if one takes into account infant mortality (for example by looking at life expectancy at three years of age) that life expectancy has gone up. However, this is primarily average life expectancy. Maximum life expectancy has barely budged. This is sometimes referred to as rectangularization of mortality curves.
I do think it is likely that we are going to see substantial improvements in maximum life expectancy in the next few years, but the change in life expectancy up to this time isn't really indicative of it.
Good analysis! A few remarks:
In practice even for a planet with as thin an atmosphere as Earth, getting past the atmosphere is more difficult than actually reaching escape velocity. One of the most common times for a rocket to break up is near Max Q which is where maximum aerodynamic stress occurs. This is generally in the range of about 10 km to 20 km up.
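For intuition, dynamic pressure is q = ½ρv², which peaks where falling air density and rising velocity trade off. A rough sketch, where the altitude, velocity, and exponential-atmosphere parameters are illustrative assumptions of mine:

```python
import math

# Dynamic pressure q = 0.5 * rho * v^2 with a crude exponential atmosphere.
rho0 = 1.225        # sea-level air density, kg/m^3
H = 8500.0          # atmospheric scale height, m (approximate)
altitude = 12_000.0 # m, within the typical Max Q band
v = 450.0           # m/s, an assumed vehicle speed at that altitude

rho = rho0 * math.exp(-altitude / H)
q = 0.5 * rho * v ** 2  # in pascals
print(q)  # on the order of tens of kilopascals
```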
In worlds too big to escape by propulsion, people may come up with the idea of the space elevator, but the extra gravity will require taking into account the structure's weight.
Getting enough mass up there to build a space elevator is itself a very tough problem.
Some world out there may have a ridiculously tall mountain that extends into the upper atmosphere. Gravity at the top will be lower, and if a launch platform can be built there, takeoff will be easier. Of course, this is an "if" bigger than said mountain.
Whether gravity is stronger or weaker on top of a mountain is surprisingly complicated and depends a lot on the individual planet's makeup. However, at least on Earth-like planets it is weaker. See here. Note though that if a planet is really massive it is less likely to have large mountains. You can more easily get large mountains when a planet is small (e.g., Olympus Mons on Mars).
India has a huge coastline, but for mythical/cultural reasons, Hinduism used to have a taboo against sea travel. In the worst scenario, our heavy aliens may stay on ground, not because they can't, but because they won't; maybe their atmosphere looks too scary or their planet attracts too many meteorites or it has several ominous-looking moons or something.
This would require everyone on the planet to take this same attitude. This seems unlikely to be common.
Or there are fewer civilizations than we expect, or something is wiping out civilizations once they go to space, or most species for whatever reason decide not to go to space, or we are living in an ancestor simulation which only does a detailed simulation of our solar system. (I agree that all of these are essentially wanting; your interpretation makes the most sense. These examples are listed more for completeness than anything else.)
Anyone want to take bets on whether or not this will turn out in ten years to be natural?
I don't think this conversation is being very productive so this is likely my final reply.
Just answer me a simple question.
What do the first 1000 naturals look like after the mixing supertask described above has finished its job?
You may say that this supertask is impossible.
You may say that there is no set of all naturals.
The resulting pointwise limit exists, and it gives each positive integer a probability of zero. This is fine because the pointwise limit of a distribution on a countable set is not necessarily itself a distribution. Please take a basic real analysis course.
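Concretely, a minimal sketch, taking the uniform-on-{1,...,n} family as the formalization of the mixing process:

```python
def p(n, k):
    """Probability of drawing k from the uniform distribution on {1, ..., n}."""
    return 1.0 / n if 1 <= k <= n else 0.0

# For any fixed k, p(n, k) -> 0 as n grows, even though each p(n, .) sums to 1.
# The pointwise limit assigns 0 to every natural, so it is not a distribution.
print([p(n, 7) for n in (10, 100, 1000)])  # [0.1, 0.01, 0.001]
```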
I don't give a damn about infinity. If it is doable, why not? But is it? That's the only question.
I'm not sure what you mean by this, especially given your earlier focus on whether infinity exists and whether using it in physics is akin to religion. I'm also not sure what "it" is in your sentence, but it seems to be the supertask in question. I'm not sure in that context what you mean by "doable."
Then, a supertask mixes the infinite set of naturals and we are witnessing "the irresistible force acting on an unmovable object". What the Hell will happen? Will we have finite numbers on the first 1000 places? We should, but bigger, no matter which will be.
The "irresistible force" is just an empty word. And so is "unmovable object". And so is "infinity" and so is "supertask".
I'm not at all sure what this means. Can you please stop using analogies and instead give a specific example of how to formalize this contradiction in ZFC?
The topic is also exercised here:
http://mathforum.org/kb/thread.jspa?forumID=13&threadID=2278300&messageID=7498035
This seems to be essentially the same argument and it seems like the exact same problem: an assumption that an intuitive limit must exist. Limits don't always exist when you want them to, and we have a lot of theorems about when a point-wise limit makes sense. None of them apply here.
This is not at all an attempt to banish infinity in any general sense.
Of course it is. Nothing infinite has been spotted so far.
I'm not sure how your sentence is a response to my sentence.
This is rhetoric without content.
Is it? Is this same "rhetoric" against aliens also without a content? If I say that people want aliens, because they have lost angels, is this really without a content?
Not only that there is no infinite God, even infinite sets are probably just a miracle.
Generally, yes, the content level is pretty low. It essentially amounts to Bulverism, where one is focusing on claimed intents and motives rather than focusing on the substantive issue of whether there's an inconsistency in PA or ZFC that can arise due to issues with supertasks or other ideas related to infinity.
It may well be that specific people or groups have adopted aliens in a way that essentially replaces deities. The Raelians and other New Age groups certainly fall into that category. But it is a mistake to therefore claim that, in general, people believe in aliens as a replacement for belief in a deity. And it is an even more serious mistake to make such claims about infinite sets. If you see physicists praying to infinite sets, or claiming that infinite sets are responsible for the creation of the universe or humanity, or that infinite sets will somehow save us, or that infinite sets have agency, or that infinite sets have a special mystery and majesty that merits worship, or if they start wars with or excommunicate people who don't believe in infinite sets or who believe in a different type of infinite set, then there would be an argument.
Ah, yes, I think that makes sense. And obviously a proof of say Friendliness in ZFC is a lot better than no proof at all.
I'm not sure what you mean by this, and in so far as I can understand it doesn't seem to be true. Physicists use the real numbers all the time which are an infinite set.
The problem there is that certain specific models of physics end up giving infinite values for measurable quantities - this is a known problem and has been an area of active research since early work with renormalization in the 1930s. This is not at all an attempt to banish infinity in any general sense.
Now, when there is no God, the Infinity is its substitute; most people would love it to exist. But it's just another blunder.
This is rhetoric without content.
I'm not sure what your point is here. Yes, experts sometimes have a consensus that turns out to be wrong. If one is lucky, one can even turn out to be right when the experts are wrong by taking sufficiently many contrarian positions (although the idea that the existence of many millions of civilizations in our galaxy was a universal belief among both biologists and astrobiologists is definitely questionable), but in this case the experts have really thought about these ideas a lot and haven't gotten anywhere.
If you prefer an example other than Wildberger, when Edward Nelson claimed to have a contradiction in PA, many serious mathematicians looked at what he had done. It isn't like there's some special mathematical mob which goes around suppressing these things. I literally had a lunch-time conversation a few days ago with another mathematician where the primary topic was essentially: if there is an inconsistency in ZFC, where would we expect to find it, and how much of math would likely be salvageable? In fact, that conversation was one of the things that led me to the initial question in this subthread.
I am not afraid of mathematicians more than of astrobiologists. Largely unimpressed.
Neither of these groups are groups you should be afraid of and I'm a little confused as why you think fear should be relevant.
I'm not sure that's strong evidence for the thesis in question. If ZFC had a low-lying inconsistency, ZFC+an inaccessible cardinal would still prove ZFC consistent, but it would be itself an inconsistent system that was effectively lying to you. Same remarks apply to any large cardinal axiom.
What do you mean?
Physics is only good, when you expel all the infinities out of it.
I'm not sure what you mean by this, and in so far as I can understand it doesn't seem to be true. Physicists use the real numbers all the time which are an infinite set. They use integration and differentiation which involves limits. So what do you mean?
I'm not sure why you think that. This may depend strongly on what you mean by an infinitary method. Is induction infinitary? Is transfinite induction infinitary?
If the hypothetical external world in question diverges from our own world by a lot then the ancestor simulation argument loses all force.
Wildberger's complaints are well known, and frankly not taken very seriously. The most positive thing one can say is that some of the ideas in his rational trigonometry have some interesting math behind them, but that's it. Pretty much no mathematician who has listened to what he has to say has taken any of it seriously.
What you are doing in many ways amounts to the 18th and early 19th century arguments over whether 1-1+1-1+1-1... converged and if so to what. First formalize what you mean, and then get an answer. And a rough intuition of what should formally work that leads to a problem is not at all the same thing as an inconsistency in either PA or ZFC.
Phrasing it as a "supertask" relies on intuitions that are not easily formalized in either PA or ZFC. Think instead in terms of a limit: take your nth distribution and let n go to infinity. This avoids the intuitive issues. Then just ask what you mean by the limit. You are taking what amounts to a pointwise limit. At that point, what matters is that a pointwise limit of probability distributions is not necessarily itself a probability distribution.
If you prefer a different example that obfuscates less of what is going on, we can do it just as well with the reals. Consider the situation where the nth distribution is uniform on the interval from n to n+1, and look at the limit of that (or, if you insist, have it speed up over time to make it a supertask). Visually, at each step a 1-by-1 square moves one unit to the right. Now note that the limit of these distributions is zero everywhere, and not in the weak sense of being zero at each specific point while still integrating to a finite quantity, but genuinely the zero function.
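The moving-square picture, made explicit in a minimal sketch:

```python
def density(n, x):
    """Density at x of the uniform distribution on [n, n+1)."""
    return 1.0 if n <= x < n + 1 else 0.0

# Each density integrates to 1, but for any fixed x it is 0 once n > x,
# so the pointwise limit is the genuinely-zero function.
x = 5.0
print([density(n, x) for n in (1, 5, 50)])  # [0.0, 1.0, 0.0]
```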
This is essentially the same situation, so nothing in your situation has to do with specific aspects of countable sets.
The limit of your distributions is not a distribution so there's no problem.
If there's any sort of inconsistency in ZF or PA or any other major system currently in use, it will be much harder to find than this. At a meta level, if there were this basic a problem, don't you think it would have already been noticed?
At least two major classes of existential risk, AI and physics experiments, are areas where a lot of math can come into play. In the case of AI, this means understanding whether hard take-offs are possible or likely and whether an AI can be provably Friendly. In the case of physics experiments, the issues are connected to analyses showing that the experiments are safe.
In both these cases, little attention is paid to the precise axiomatic system being used for the results. Should this be concerning? If, for example, some sort of result about Friendliness is proven rigorously, but the proof lives in ZFC set theory, then there's the risk that ZFC may turn out to be inconsistent. Similar remarks apply to analyses showing that various physics experiments are unlikely to cause serious problems like a false vacuum collapse.
In this context, should more resources be spent on making sure that proofs occur in their absolute minimum axiomatic systems, such as conservative extensions of Peano Arithmetic or near-conservative extensions?
Yes, but there's less reason for that. A big part of the problem with neutrinos is that since only a small fraction are absorbed, it becomes much harder to get good data on what is going on. For example, the typical neutrino pulse from a supernova is estimated to last 5 to 30 seconds, while the Earth is under a tenth of a light-second in diameter. Gamma rays don't have quite as much of this problem, and we can estimate their directional data somewhat better.
On the other hand, the more recent work with neutrinos has been getting better and better at getting angle data which lets us get the same directional data to some extent.
You do know that both sets of ideas predate HPMOR, right?
Slightly crazy idea I've been bouncing around for a while: put giant IceCube style neutrino detectors on Mars and Europa. Europa would work really well because of all the water ice. This would allow one to get time delay data from neutrino bursts during a supernova to get very fast directional information as well as some related data.
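Back-of-the-envelope for the timing baseline; the Earth-Mars separation here is an assumed snapshot, since it actually varies between roughly 0.4 and 2.7 AU:

```python
C = 299_792.458       # speed of light, km/s
AU = 1.495978707e8    # astronomical unit, km

baseline = 0.5 * AU   # assumed Earth-Mars separation at detection time
max_delay = baseline / C  # seconds of arrival-time difference, direction-dependent
print(max_delay)      # on the order of a few hundred seconds
```

Even millisecond-level timing of a seconds-long neutrino pulse against a baseline like this would constrain the source direction far better than a single detector can.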
That's a rule I'd strongly support other than in cases of absolutely unambiguous spamming or clear sockpuppets of banned individuals.