Comments
I was watching part of your video, and I'm really surprised that you think that LessWrong doesn't have what you call "paths forward", that is, ways for people who disagree to find a path towards considering where they may be wrong and trying to hear the other person's point of view. In fact, that's actually a huge focus around here, and a lot has been written about ways to do that.
I certainly think you're right that the conscious mind and conscious decisions can, to a large extent, rewrite a lot of the brain's programming.
I am surprised to think that you think that most rationalists don't think that. (That sentence is a mouthful, but you know what I mean.) A lot of rationalist writing is devoted to working on ways to do exactly that; a lot of people have written about how just reading the Sequences helped them basically reprogram their own brains to be more rational in a wide variety of situations.
Are there a lot of people in the rationalist community who think that conscious thought and decision making can't do major things? I know there are philosophers who think that maybe consciousness is irrelevant to behavior, but that philosophy seems very much at odds with LessWrong-style rationality and the way people on LessWrong tend to think about and talk about what consciousness is.
He's not a superhumanly intelligent paperclipper yet, just human-level.
Or, you know, it's just simply true that people experience much more suffering than happiness. Also, they aren't so very aware of this themselves, because of how memories work.
That certainly is not true of me or of my life overall, except during a few short periods. I don't have the same access to other people's internal states, but I doubt it is true of most people.
There certainly are a significant number of people for whom it may be true, people who suffer from depression or chronic pain or who are living in other difficult circumstances. I highly doubt that that's the majority of people, though.
Yeah, I'm not sure how to answer this. I would do one set of answers for my personal social environment and a completely different set of answers for my work environment, to such a degree that trying to just average them wouldn't work. I could pick one or the other.
Reference: I teach in an urban high school.
I didn't even know that the survey was happening, sorry.
If you do decide to keep running the survey for a little longer, I'd take it, if that data point helps.
I think you need to narrow your focus on exactly what you mean by a "futurist institute" and figure out what specifically you plan on doing before you can think about any of these issues.
Are you thinking about the kind of consulting agency that companies get advice from on what the market might look like in 5 years and what technologies their competitors are using? Or about something like a think tank that does research and writes papers with the intent of influencing political policy, and is usually supported by donations? Or an academic group, probably tied to a university, which publishes academic papers, similar to what Nick Bostrom does at Oxford? Or something that raises money primarily for scientific and technological research? Or maybe an organization similar to H+ that tries to spread awareness of transhumanist/singularity-related issues, publishes newsletters, has meetings, and generally tries to change people's minds about futurist, technological, AI, and/or transhumanist issues? Or something else entirely?
Basically, without more details about exactly what you are trying to do, I don't think anyone here is going to be able to offer very good advice. I suspect you may not be sure yourself yet, so maybe the first step is to think through the different options and try to narrow your initial focus a bit.
The best tradeoff is when you are well calibrated, just like with everything else.
"Well calibrated" isn't a simple thing, though. It's always a conscious decision of how willing you are to tolerate false positives vs false negatives.
Anyway, I'm not trying to shoot you down here; I really did like your article, and I think you made a good point. Just saying that it's possible to have a great insight and still overshoot or over-correct for a previous mistake you've made, and if you think that almost everyone you see is suffering, you may be doing just that.
There has to be some kind of trade-off between false positives and false negatives here, doesn't there? If you decide to "use that skill" to see more suffering, isn't it likely that you are getting at least some false positives, some cases where you think someone is suffering and they aren't?
If "happiness" is too vague a term or has too many other meanings we don't necessarily want to imply, we could just say "positive utility". As in "try to notice when you or the people around you are experiencing positive utility".
I do think that actually taking note of that probably does help you move your happiness baseline; it's basically a rationalist version of "be thankful for the good things in your life". Something as simple as "you know, I enjoy walking the dog on a crisp fall day like this". Noticing when other people seem to be experiencing positive utility is also probably important in becoming a more morally correct utilitarian yourself, likely just as important as noting other people's suffering/negative utility.
Really interesting essay.
It also made me wonder if the opposite is also a skill you need to learn; do people need to learn how to see happiness when it happens around them? Some people seem strangely blind to happiness, even their own.
To take this a step further: while this doesn't prove we're not in a simulation, I think if you accept that our universe can't be simulated from a universe that looks like ours, it destroys the whole anthropic probability argument in favor of simulations, because that argument seems to rely on the claim that we will eventually create a singularity which will simulate a lot of universes like ours. If that's not possible, then the main positive argument for the simulation hypothesis gets a lot weaker, I think.
Maybe there's a higher level universe with more permissive computational constraints, maybe not, but either way I'm not sure I see how you can make a probability argument for or against it.
Does the information theory definition of entropy actually correspond to the physics definition of entropy? I understand what entropy means in terms of physics, but the information theory definition of the term seemed fundamentally different to me. Is it, or does one actually correspond to the other in some way that I'm not seeing?
Yeah, that's the issue, then. And there's no way around that, no way to just let us temporarily log in and confirm our emails later?
I guess, but it's cheaper to observe the sky in reality than it is on YouTube. To observe the sky, you just have to look out the window; turning on your computer costs energy and such.
So in order for this to be coherent, I think you have to somehow make the case that our reality is to some extent rare or unlikely or expensive, and I'm not sure how you can do that without knowing more about the creation of the universe than we do, or how "common" the creation of universes is over... some scale (I'm not even sure what scale you would use; over infinite periods of time? Over a multiverse? Does the question even make sense?)
In the simplest example, when you have a closed system where part of the system starts out warmer and the other part starts out cooler, it's fairly intuitive to understand why entropy will usually increase over time until it reaches the maximum level. When two molecules of gas collide, a high-energy (hot) molecule and a lower-energy (cooler) molecule, the most likely result is that some energy will be transferred from the warmer molecule to the cooler one. Over time, this process will result in the temperature equalizing.
The math behind this process and how it relates to entropy isn't that complicated; look up the Clausius theorem. https://en.wikipedia.org/wiki/Clausius_theorem
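For reference, the standard statements behind that (textbook forms, not anything specific to the gas example above) are:

```latex
% Clausius theorem: for any thermodynamic cycle,
\oint \frac{\delta Q}{T} \leq 0 ,
% with equality for a reversible cycle. Entropy change is then defined via
dS = \frac{\delta Q_{\mathrm{rev}}}{T} ,
% and for an isolated system the second law follows:
\Delta S \geq 0 .
```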
It doesn't seem to be working for me; I tried to reset my password, and it keeps saying "user not found", although I double-checked and it is the same email I have on my account here on LessWrong.
It seems weird to place a "price" on something like the Big Bang and the universe. For all we know, in some state of chaos or quantum uncertainty, the odds of something like a Big Bang happening eventually approaches 100%, which makes it basically "free" by some definition of the term. Especially if something like the Big Bang and the universe happens an infinite number of times, either sequentially or simultaneously.
Again, we don't know that that's true, but we don't know it's not true either.
Yeah, I saw that. In fact looking back on that comment thread, it looks like we had almost the exact same debate there, heh, where I said that I didn't think the simulation hypothesis was impossible but that I didn't see the anthropic argument for it as convincing for several reasons.
But I don't see a practical reason to run few-minute simulations
The main explanation that I've seen for why an advanced AI might run a lot of simulations is in order to better predict how humans would react in different situations (perhaps to learn to better manipulate humans, or to understand human value systems, or maybe to achieve whatever theoretically pro-human goal was set in the AI's utility function, etc.). If so, then it likely would run a very large number of very short simulations, designed to put uploaded minds in very specific and clearly designed unusual situations, and then end the simulation shortly afterwards. If that was the goal, it would likely run a very large number of iterations on the same scenario, each time varying the details ever so slightly, in order to figure out exactly what makes us tick. For example, instead of philosophizing about the trolley problem, it might just put a million different humans into that situation and see how each one of them reacts, and then iterate the situation ten thousand times with slight variations each time to see which variables change how humans react.
If an AI does both (both short small-scale simulations and long universe-length simulations), then the number of short simulations would massively outnumber the number of long simulations; you could run quadrillions of them for the same resources it takes to actually simulate an entire universe.
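As a toy illustration of that resource math (every number here is invented purely for the example):

```python
# Toy illustration: if a short scenario costs a tiny fraction of a full
# universe-length simulation, almost all simulations end up being short ones.
# All numbers here are invented for the sake of the example.

full_sim_cost = 1.0                        # cost of one universe-length simulation (arbitrary units)
short_sim_cost = full_sim_cost / 1e15      # assume a short scenario costs ~a quadrillionth as much

budget = 10 * full_sim_cost                # compute budget worth 10 long simulations
long_sims = 5                              # spend half of it on long simulations
short_sims = (budget - long_sims * full_sim_cost) / short_sim_cost

print(f"long simulations:  {long_sims}")
print(f"short simulations: {short_sims:.3g}")
print(f"short simulations per long simulation: {short_sims / long_sims:.1e}")
```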
If you're in a simulation, the only reference class that matters is "how long has the simulation been running for". And most likely, for anyone running billions of simulations, the large majority of them are short, only a few minutes or hours. Maybe you could run a simulation that lasts as long as the universe does in subjective time, but most likely there would be far more short simulations.
Basically, I don't think you can use the doomsday argument at all if you're in a simulation, unless you know how long the simulation's been running, which you can't know. You can accept either the simulation argument or the doomsday argument, but you can't use both of them at the same time.
The specific silliness of "humans before business" is pretty straightforward: business is something humans do, and "humans before this thing that humans do" is meaningless or tautological. Business doesn't exist without humans, right?
Eh, it's not as absurd as that. You know how we worry that AIs might optimize something easily quantifiable, but in a way that destroys human value? I think it's entirely reasonable to think that businesses may do the same thing, and optimize for their own profit in a way that destroys human value in general. For example, the way Facebook is to a significant extent designed to maximize clicks and eyeballs in manipulative ways that do not actually serve the human communication needs of its users.
Ideally, you would want to generate enough content for the person who wants to read LW two hours a day, and then promote or highlight the best 5%-10% of the content so someone who has only two hours a week can see it.
Everyone is much better off that way. The person with only two hours a week is getting much better content than if there was much less content to begin with.
Quantum mechanics is also very counterintuitive, creates strange paradoxes etc, but it doesn't make it false.
Sure, and if we had anything like the amount of evidence for anthropic probability theories that we have for quantum theory, I'd be glad to go along with it. But short of a lot of evidence, you should be more skeptical of theories that imply all kinds of improbable results.
As I said above, there is no need to tweak reference classes to which I belong, as there is only one natural class.
I don't see that at all. Why not classify yourself as "part of an intelligent species that has nuclear weapons or otherwise poses an existential threat to itself"? That seems like just as reasonable a classification as any (especially if we're talking about "doomsday"), but it gives a very different (worse) result. Or, I dunno, "part of an intelligent species that has built an AI capable of winning at Go?" Then we only have a couple more months. ;)
It also seems weird to just assume that somehow today is a normal day in human existence, no more or less special than any day a random hunter-gatherer wandered the plains. If you have some a priori reason to think that the present is unusual, you should probably look at that instead of vague anthropic arguments; if you just found out you have cancer and your house is on fire while someone is shooting at you, it probably doesn't make sense to ignore all that and assume that you're halfway through your lifespan. Or if you were just born 5 minutes ago, and seem to be in a completely different state than anything you've ever experienced. And we're at a very unique point in the history of our species, right on the verge of various existential threats and at the same time right on the verge of developing spaceflight and the kind of AI technology that would likely ensure our descendants persist for billions of years. Isn't it more useful to look at that instead of just assuming that today is just another day in humanity's life like any other?
I mean, it seems likely that we're already waaaaaay out on the probability curve here in one way or another, if the Great Silence of the universe is any guide. There can't have been many intelligent species who got to where we are in the history of our galaxy, or I think the galaxy would look very different.
Let me give a concrete example.
If you take seriously the kind of anthropic probabilistic reasoning that leads to the doomsday argument, then it also invalidates that same argument, because we probably aren't living in the real universe at all; we're probably living in a simulation. Except you're probably not living in a simulation either, because we're more likely living in a short stretch of quantum randomness long after the universe ends, one which recreates you for a fraction of a second through random chance and then takes you apart again. There should be a vast number of those events for every real universe, and even a vast number of them for every simulated universe, so you are probably in one of those quantum events right now and only think that you existed when you started reading this sentence.
And that's only a small part of the kind of weirdness these arguments create. You can even get opposite conclusions from one of these arguments just by tweaking exactly what reference class you put things in. For example, "I should be roughly the average human" gives you an entirely different doomsday answer than "I should be roughly the average life form", which gives you an entirely different answer than "I should be roughly the average life form that has some kind of thought process". And there's no clear way to pick a category; some intuitively feel more convincing than others, but there's no real way to determine that.
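To see how sensitive the arithmetic is to the reference class, here's a rough sketch using the standard 95%-confidence form of the argument (a random member of a class is probably not in its first 5%, so the total class size is at most 20 times what has existed so far); the "so far" counts are ballpark numbers I'm plugging in purely for illustration:

```python
# Rough sketch of how the doomsday bound shifts with the reference class.
# 95%-confidence form: if you're a random member of a class, you're probably
# not in its first 5%, so total class size <= 20x the members so far.
# The "so far" counts below are ballpark/illustrative, not real estimates.

reference_classes = {
    "humans": 1e11,                 # roughly 100 billion humans born so far
    "life forms": 1e30,             # illustrative guess
    "thinking life forms": 1e12,    # illustrative guess
}

for name, so_far in reference_classes.items():
    upper_bound_total = 20 * so_far           # 95%-confidence bound on total class size
    remaining = upper_bound_total - so_far
    print(f"{name}: at most {remaining:.1e} more to come (95% confidence)")
```

Same argument, wildly different "doomsday" numbers, depending only on which class you decide you were sampled from.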
Basically, I would take the doomsday argument (and the simulation argument, for that matter) a lot more seriously if anthropic probability arguments of that type didn't lead to a lot of other conclusions that seem much less plausible, or in some cases seem to be just incoherent. Plus, we don't have a good way to deal with what's known as "the measure problem" if we are trying to use anthropic probability in an infinite multiverse, which throws a further wrench into the gears.
A theory which fits most of what we know but gives one or a few weird results that we can test is interesting. A theory that gives a whole mess of weird and often conflicting results, many of which would make the scientific method itself a meaningless joke if true, and almost none of which are testable, is probably flawed somewhere, even if it's not clear to us quite where.
I think the argument probably is false, because arguments of the same type can be used to "prove" a lot of other things that also clearly seem to be false. When you take that kind of anthropic reasoning to its natural conclusion, you reach a lot of really bizarre places that don't seem to make sense.
In math, it's common for a proof to be disputed by demonstrating that the same form of proof can be used to show something that seems to be clearly false, even if you can't find the exact step where the proof went wrong, and I think the same is true about the doomsday argument.
Maybe; there certainly are a lot of good rationalist bloggers who have at least at some point been interested in LessWrong. I don't think bloggers will come back, though, unless the site first becomes more active than it currently is. (They may give it a chance after the Beta is rolled out, but if activity doesn't increase quickly they'll leave again.) Activity and an active community are necessary to keep a project like this going. Without an active community here, there's no point in coming back instead of posting on your own blog.
I guess my concern here though is that right now, LessWrong has a "discussion" side which is a little active and a "main" side which is totally dead. And it sounds like this plan would basically get rid of the discussion side, and make it harder to post on the main side. Won't the most likely outcome just be to lower the amount of content and the activity level even more, maybe to zero?
Fundamentally, I think the premise of your second bottleneck is incorrect. We don't really have a problem with signal-to-noise ratio here; most of the posts that do get posted here are pretty good, and the few that aren't don't get upvoted and most people ignore them without a problem. We have a problem with low total activity, which is almost the exact opposite problem.
My concern around the writing portion of your idea is this: from my point of view, the biggest problem with LessWrong is that the sheer quantity of new content is extremely low. In order for a LessWrong 2.0 to succeed, you absolutely have to get more people spending the time and effort to create great content. Anything you do to make it harder for people to contribute new content will make that problem worse, especially anything that creates a barrier for new people who want to post something in Discussion. People will not want to write content that nobody might see unless it happens to get promoted.
Once you get a constant stream of content on a daily basis, then maybe you can find a way to curate it to highlight the best content. But you need that stream of content and engagement first and foremost or I worry the whole thing may be stillborn.
Right. Maybe not even that; maybe he just didn't have the willpower required to become a doctor on that exact day, and if he re-takes the class next semester maybe that will be different.
So, to get back to the original point, I think the original poster was worried about not having the willpower to give to charity and, if he doesn't have that, worried he also might not have the higher levels of willpower you would presumably need to do something truly brave if it was needed (like, in his example, resisting someone like the Nazis in 1930s Germany). And he was able to use that fear in order to increase his willpower and give more to charity.
He might not be wrong about beliefs about himself. Just because a person actually would prefer X to Y, it doesn't mean he is always going to rationally act in a way that will result in X. In a lot of ways we are deeply irrational beings, especially when it comes to issues like short term goals vs long term goals (like charity vs instant rewards).
A person might really want to be a doctor, might spend a huge amount of time and resources working his way through medical school, and then may "run out of willpower" or "suffer from akrasia" or however you want to put it and not put in the time to study he needs to pass his finals one semester. It doesn't mean he doesn't really want to be a doctor, and if he convinces himself "well, I guess I didn't want to be a doctor after all" he's doing himself a disservice, when the conclusion he should draw is "I messed up in trying to do something I really want to do; how can I prevent that from happening in the future?"
Sure, that's very possible. Just because it didn't work last time doesn't mean it can't work now with better technology.
I think anyone who goes into it now, though, had better have a really detailed explanation for why consumer interest was so low last time, despite all the attention and publicity the "sharing economy" got in the popular press, and a plan to quickly get a significant customer base this time around. Something like this can't work economically without scale, and I'm just not sure if the consumer interest exists.
Yeah, a number of businesses tried it between 2007 and 2010. SnapGoods was probably the best known. This article lists 8; 7 went out of business, and the 8th one is just limping along with only about 10,000 people signed up for it. (And that one, NeighborGoods, only survived after removing the option to rent something.)
https://www.fastcompany.com/3050775/the-sharing-economy-is-dead-and-we-killed-it
There just wasn't a consumer base interested in the idea, basically. Silicon Valley loved to talk about it, people loved writing articles about it, but it turns out that nobody could get consumers interested in the service.
From that article, which quotes the person who tried to found one of those companies:
There was just one problem. As Adam Berk, the founder of Neighborrow, puts it: “Everything made sense except that nobody gives a shit. They go buy [a drill]. Or they just bang a screwdriver through the wall.”
The other likely outcome seems to be that you keep enough vehicles on hand to satisfy peak demand, and then they just sit quietly in a parking lot the rest of the time.
Probably this.
Then again, it's not all bad; it might be beneficial for the company to get some time between the morning rush hour and the evening rush hour to bring your cars somewhere to be cleaned, to recharge them, do any maintenance and repairs, etc. I imagine just cleaning all the fast food wrappers and whatever out of the cars will be a significant daily job.
It depends on the details. What will happen to traffic? Maybe autonomous cars will be more efficient in terms of traffic, but on the flip side of the coin, people may drive more often if driving is more pleasant, which might make traffic worse.
Also, if you're using a rental or "uber" model where you rent the autonomous car as part of a service, that kind of service might be a lot better if you're living in a city. It's much easier to make a business model like that work in a dense urban environment; wait times for an automated car to come get you will probably be a lot shorter, etc.
You don't own a drill that sits unused 99.9% of the time, you have a little drone bring you one for an hour for like two dollars.
Just a quick note; people have been predicting exactly this for about 10-15 years because of the internet, and it hasn't happened yet. The "people will rent a hammer instead of buying it" idea was supposed to be the ur-example of the new sharing economy, but it never actually materialized, while Uber and Airbnb and other things did. We can speculate about why it didn't happen, but IMHO, it wasn't primarily transport costs.
I think most people would just rather buy tools and keep them around, or perhaps that the cognitive costs of trying to figure out when you should buy a 20 dollar tool vs renting it for 5 dollars are just not worth the effort of calculating for most people, something like that.
This already exists for rich people: If you have a lot of money, you pay for your doctor's cab and have her come to your mansion. But with transport prices dropping sharply, this reaches the mass market.
Hmm, I dunno if it'll be cost-competitive. If you're a barber, and people come to your shop, maybe you can cut the hair of 10 people in a day. If you have 30-45 minutes of commute time between one person and the next, maybe you can only get 5. And even that's going to be hard; if you have scheduled appointments to show up at 5 different people's homes in one day, and then one of your haircuts runs long or you hit traffic, suddenly you are late for all of the rest of them that day, and maybe you have to cancel your last one and someone who was sitting at home waiting for a haircut doesn't get one.
There could be a business model here, especially if the service sector continues to expand and diversify as automation eats other non-service jobs, but I think it'll remain a premium service, much more expensive than going to a barber shop or a doctor's office or whatever.
Yeah, that's a fair point.
Sure. Obviously people will always consider trade-offs, in terms of risks, costs, and side effects.
Although it is worth mentioning that if you look at, say, most people with cancer, people seem willing to go through extremely difficult and dangerous procedures even just to have a small chance of extending lifespan a little bit. But perhaps people will be less willing to do that for a more vague problem like "aging"? Hard to say.
I don't think it will stay like that, though. Maybe the first commercially available aging treatment will be borderline enough that it's a reasonable debate whether it's worthwhile, but I expect treatments to continue improving from that point.
I don't believe that my vote will change the result of a presidential election, but I have to behave as if it will, and go vote.
The way I think of it is something like this:
There is something like a 1 in 10 million chance that my vote will affect the presidential election (and also some chance of my vote affecting other important elections, like Congress, Governor, etc.).
Each year, the federal government spends $3.9 trillion. Its influence is probably actually significantly greater than that, since that doesn't include the effect of laws and regulations and such, but let's go with that number for the sake of argument.
If you assume that both parties are generally well-intentioned and will mostly use most of that money in ways that create positive utility in one way or another, but you think that party A will do so 10% more effectively than party B, that's a difference in utility of $390 billion.
So a 1 in 10 million chance of having a $390 billion effect works out to something like an expected utility of $39,000 for something that will take you maybe half an hour. (Plus, since federal elections are only every 2 years, it's actually double that.)
I could be off by an order of magnitude with any of these estimates, maybe you have a 1 in 100 million chance of making a difference, or maybe one party is only 1% better than the other, but it seems from a utilitarian point of view like it's obviously worth doing even so.
The same logic can probably be used for these kinds of existential risks as well.
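Spelled out as a back-of-the-envelope calculation, using the same rough numbers as above (all of which are guesses, not measurements):

```python
# Back-of-the-envelope expected utility of voting, with the rough numbers
# from the comment above. Every input is a guess, not a measurement.

p_decisive = 1 / 10_000_000       # chance your vote swings the presidential election
federal_budget = 3.9e12           # annual federal spending, in dollars
effectiveness_gap = 0.10          # assume party A uses the money 10% more effectively

value_gap = federal_budget * effectiveness_gap   # ~$390 billion difference in utility
expected_value = p_decisive * value_gap          # expected utility of your vote

print(f"utility gap between parties: ${value_gap:,.0f}")
print(f"expected value of one vote:  ${expected_value:,.0f}")
# Roughly $39,000 for half an hour of effort; shifting any input by an order
# of magnitude changes the number but not the basic conclusion.
```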
It would certainly have to depend on the details, since obviously many people do not choose the longevity treatments that are already available, like healthy eating and exercise, even though they are usually not very expensive.
Eh. That seems to be a pretty different question.
Let's say that an hour of exercise a day will extend your lifespan by 5 years. If you sleep 8 hours a night, that's about 6.3% of your waking time; if you live 85 years without exercise vs. 90 years with exercise, you probably have close to the same amount of non-exercising waking time either way. So whether it's worthwhile probably depends on how much you enjoy or don't enjoy exercise, how much you value free time when you're 30 vs. time when you're 85, etc.
I think exercise is a good deal all around, but then again that's partly because I think there's a significant chance that we will get longevity treatments in our lifetime, and want to be around to see them. It's not the same kind of clear-cut decision that, say, "take a pill every morning to live 5 years longer" would be.
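A rough check of the arithmetic in the comment above (the lifespans and the 5-year gain are the hypothetical example numbers, not data):

```python
# Rough check of the example above: 1 hour/day of exercise, 8 hours of sleep,
# 85 years of life without exercise vs. 90 years with it.
# All inputs are the hypothetical figures from the comment, not data.

waking_hours_per_day = 24 - 8            # 16 waking hours
exercise_hours_per_day = 1

share = exercise_hours_per_day / waking_hours_per_day
print(f"exercise as a share of waking time: {share:.2%}")

years_without, years_with = 85, 90
free_hours_without = years_without * 365 * waking_hours_per_day
free_hours_with = years_with * 365 * (waking_hours_per_day - exercise_hours_per_day)

print(f"non-exercise waking hours, no exercise:   {free_hours_without:,}")
print(f"non-exercise waking hours, with exercise: {free_hours_with:,}")
# The two totals come out within about 1% of each other, which is the point:
# the decision mostly hinges on how much you enjoy the exercise itself.
```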
This is a falsifiable empirical prediction. We will see whether it turns out to be true or not.
Yes, agreed.
I should probably be more precise. I don't think that 100% of people will necessarily choose longevity treatments once they become available. But depending on the details, I think it will be pretty high. I think that a very high percentage of people who today sound ambivalent about it will go to great lengths to get it once it becomes something that exists in reality.
I also think that the concern that "other people" will get to live a very long time and you might not will motivate a lot of people. People are already deeply worried that rich people might live forever while they don't; even people who don't seem to really believe it's possible seem to be worried about that, which is interesting.
I don't think the lack of life extension research funding actually comes from people not wanting to live; I think it has more to do with the fact that the vast majority of people don't take it seriously yet and don't believe that we could actually significantly change our lifespan. That's compounded with a kind of "sour grapes" defensive reflex, where when people think they can never get something, they try to convince themselves they don't really want it.
I think that if progress is made, at some point there will be a phase change, where more people start to realize that it is possible and suddenly flip from not caring at all to caring a great deal.
You can use relativity to demonstrate that certain events can be simultaneous in one reference frame and not in others, but I'm not seeing any way to do that in this case, assuming that the simulated and non-simulated future civilizations are both in the same inertial reference frame. Am I missing something?
That's one of the advantages of what's known as "preference utilitarianism". It defines utility in terms of the preferences of people; so, if you have a strong preference for remaining alive, then remaining alive is the pro-utility option.
The answer to those objections, by the way, is that an "adequately objective" metaethics is impossible: the minds of complex agents (such as humans) are the only place in the universe where information about morality is to be found, and there are plenty of possible minds in mind-design space (paperclippers, pebblesorters, etc.) from which it is impossible to extract the same information.
Eliezer attempted to deal with that problem by defining a certain set of things as "h-right", that is, morally right from the frame of reference of the human mind. He made clear that alien entities probably would not care about what is h-right, but that humans do, and that's good enough.
I don't think that's actually true.
Even if it were, I don't think you can say you have a belief if you haven't actually deduced it yet. Even taking something simple like math, you might believe theorem A, theorem B, and theorem C, and it might be possible to deduce theorem D from those three theorems, but I don't think it's accurate to say "you believe D" until you've actually figured out that it logically follows from A, B, and C.
If you've never even thought of something, I don't think you can say that you "believe" it.
Except by their nature, if you're not storing them, then the next one is not true.
Let me put it this way.
Step 1: You have a thought that X is true. (Let's call this one piece of information.)
Step 2: You notice yourself thinking step 1. Now you say "I appear to believe that X is true." (Now this is two pieces of information: X, and belief in X.)
Step 3: You notice yourself thinking step 2. Now you say "I appear to believe that I believe that X is true." (Three pieces of information: X, belief in X, and belief in belief in X.)
If at any point you stop storing one of those steps, the next step becomes untrue; if you are not storing, say, step 11 in your head right now (belief in belief in belief....), then step 12 would be false, because you don't actually believe step 11. After all, "belief" is fundamentally a question of your state of mind, and if you don't have state X in your mind, if you've never even explicitly considered state X, it can't really be a belief, right?
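A toy way to picture the storage point (treating each level of "belief in belief" as a separate statement your mind would have to actually hold):

```python
# Toy sketch: each level of "belief in belief" is one more explicitly stored
# statement. Level n+1 can only be true if level n is actually stored, so a
# finite mind can only carry the chain to some finite depth.

def build_belief_chain(proposition, depth):
    """Return the explicitly stored statements up to `depth` levels of meta-belief."""
    stored = [proposition]                       # level 0: "X is true"
    for _ in range(depth):
        stored.append(f"I believe that: {stored[-1]}")
    return stored

for level, statement in enumerate(build_belief_chain("X is true", 3)):
    print(level, statement)
```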
Fair.
I actually think a bigger weakness in your argument is here:
I believe that I believe that I believe that I exist. And so on and so forth, ad infinitum. An infinite chain of statements, all of which are exactly true. I have satisfied Eliezer's (fatuous) requirements for assigning a certain level of confidence to a proposition.
That can't actually be infinite. If nothing else, your brain cannot possibly store an infinite regression of beliefs at once, so at some point, your belief in belief must run out of steps.
I think the best possible argument against "I think, therefore I am" is that there may be something either confused or oversimplified about either your definition of "I", your definition of "think", or your definition of "am".
"I" as a concept might turn out to not really have much meaning as we learn more about the brain, for example, in which case the most you could really say would be "Something thinks therefore something thinks" which loses a lot of the punch of the original.
Here's a question. As humans, we have the inherent flexibility to declare that something has either a probability of zero or a probability of one, and then the ability to still change our minds later if somehow that seems warranted.
You might declare that there's a zero probability that I have the ability to inflict infinite negative utility on you, but if I then take you back to a warehouse where I have banks of computers that I can mathematically demonstrate contain uploaded minds which are going to suffer in the equivalent of hell for an infinite amount of subjective time, you likely would at that point change your estimate to something greater than zero. But if you had actually set the probability to zero, you couldn't do that without violating Bayesian rules, right?
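The Bayesian point can be written out directly (this is just standard Bayes' theorem, nothing specific to the warehouse example):

```latex
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
```

If P(H) = 0, the numerator is zero and the posterior is zero no matter how dramatic the evidence E is; once you genuinely assign probability zero, no amount of updating can move you off it.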
It seems there's a disconnect here; it may be better, in terms of actually using our minds to reason, to be able to treat a probability as if it were 0 or 1, but only because we can later change our minds if we realize we made an error; in which case it probably wasn't actually 0 or 1 to start with in the strictest sense.
Uh. About 10 posts ago I linked you to a long list of published scientific papers, many of which you can access online. If you wanted to see the data, you easily could have.