Comments

Comment by Marion Z. on Do you have High-Functioning Asperger's Syndrome? · 2022-12-03T00:32:48.863Z · LW · GW

85 is really not that low. It's an entire standard deviation above the usual threshold for an intellectual disability diagnosis (roughly IQ 70), and it puts the guy at about the 16th percentile. I would not expect that person, who as he says has gone to college and done well there, to have issues writing coherent sentences.
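A quick sanity check of those numbers, as a minimal sketch (assuming the standard scaling of mean 100 and SD 15 for IQ scores and a diagnostic cutoff near IQ 70; the code and variable names are mine, not from the comment):

```python
# Minimal sketch: where IQ 85 sits, assuming IQ ~ Normal(mean=100, sd=15)
# and an intellectual-disability cutoff around IQ 70 (assumed values).
from scipy.stats import norm

mean, sd, cutoff = 100, 15, 70
iq = 85

percentile = norm.cdf(iq, loc=mean, scale=sd)   # ~0.159 -> roughly the 16th percentile
sds_above_cutoff = (iq - cutoff) / sd           # 1.0 -> one full SD above the ~70 cutoff

print(f"IQ {iq}: ~{percentile * 100:.0f}th percentile, "
      f"{sds_above_cutoff:.1f} SD above the usual diagnostic cutoff")
```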

Comment by Marion Z. on Raising the Sanity Waterline · 2022-12-02T21:55:07.126Z · LW · GW

No, around the same level as Socrates. 

We are sure, with 99%+ probability, that both were real people; it would be possible but really difficult to fake all the evidence of their existence.

We are sure with quite high but lesser probability that the broad strokes of their lives are correct: Socrates was an influential philosopher who taught Plato and was sentenced to death; Muhammad was a man from Mecca who founded Islam and migrated to Medina, then returned to Mecca with his followers.

We think some of the specific details written about them in history books might be true, but definitely not all of them. Muhammad might have lived in a cave during his young life, and Socrates might have refused to escape from his death sentence, etc.

Comment by Marion Z. on Raising the Sanity Waterline · 2022-12-02T21:43:40.447Z · LW · GW

Anecdotally, I started casually reading Less Wrong/Overcoming Bias when I was 12. I didn't really get it, obviously, but I got it enough to explain some basic things about biases, evidence, and probability to an uninitiated person.

Comment by Marion Z. on Is Santa Real? · 2022-12-02T21:38:35.279Z · LW · GW

Agreed on the first point: learning about lying is good. On the parenting bit, I'll preface this by saying I don't have kids, but this seems like a great way to create a "dark rationalist". I am not perfectly or near-perfectly honest, though I admire people who are and think it's probably a good idea, but rewarding skilled lies as a training tool feels dangerous.

Neutral on the second point: Santa may in fact be a useful deception, but I think there are associated downsides, and I don't feel strongly either way.

Absolutism can be useful because parents are supposed to be constants in their children's lives, reliable and consistent. Absolute rules such as "I will not say literally false things to my child, ever" build a lot of trust, implicit and explicit, especially when you have demonstrated your willingness to adhere to them in situations where you really, really don't want to. And parent-child trust is, anecdotally, by far the most influential factor on young adult happiness I have ever seen.

Comment by Marion Z. on Is Santa Real? · 2022-12-02T19:30:31.918Z · LW · GW

The Aes Sedai have the advantage that Robert Jordan is writing books, and whenever he needs to demonstrate that Aes Sedai can capably mislead while telling the truth, he arranges the circumstances such that this is possible. In real life, seriously deceiving people about most topics on the fly - that is, in a live conversation - without making untrue statements is pretty hard, unless you've prepared ahead of time. It's not impossible, but it's hard enough that I would definitely have a higher baseline of belief in the words of someone who is committed to not telling literal lies.

Comment by Marion Z. on Theism, Wednesday, and Not Being Adopted · 2022-12-02T17:33:35.108Z · LW · GW

Sorry for doing such an insane necro here, and I'll delete if asked, but I don't think this is right at all. Broadly, in the real world, I accept the premise "avoiding listening to opposing positions is bad." I do not believe that "if you really don't think you could stand up to debate with a talented missionary, maybe you aren't really an atheist" because I don't think it scales up. 

I am a human; I have mechanisms for deciding what I believe that are not based on rationality. I have worked very hard to break and adapt some of those mechanisms to align more with rationality, but they still exist. An arbitrarily good debater or absurdly charismatic person could absolutely, with time, override all of the work that has been done to make me accept things like logic and evidence as the basis for the world. In truth, I'm not sure that such a charismatic or intelligent person exists on Earth, and if they did I don't know why they would want to convince me of these things, but I can imagine a person who would and could. And I do not think that being able to imagine that person means I should stop believing what I believe, because I am not a perfect rationalist.

In practice, your answer is almost always right. If Adolf Hitler is charismatic and convincing enough to override your "Nazism is bad" belief, you probably didn't hold it very strongly, or you are not doing rationalism very well, or he is right (just to clarify, he is not). You should expect that he cannot convince you, and if you have a decent reason to read his work you should not avoid it for fear of being convinced. But the argument doesn't generalize 100% of the time, is all I'm saying.

Comment by Marion Z. on Newcomb's Problem and Regret of Rationality · 2022-12-02T05:24:08.850Z · LW · GW

But you've perfectly forgotten about the hoodlum, so you will in fact one box. Or, does the hoodlum somehow show up and threaten you in the moment between the scanner filling the boxes and you making your decision? That seems to add an element of delay and environmental modification that I don't think exists in the original problem, unless I'm misinterpreting. 

Also, I feel like by analyzing your brain to some arbitrarily precise standard, the scanner could see 3 things:  You are (or were at some point in the past) likely to think of this solution, you are/were likely to actually go through with this solution, and the hoodlum's threat would, in fact, cause you to two-box, letting the scanner predict that you will two-box.

Comment by Marion Z. on Archipelago and Atomic Communitarianism · 2022-11-01T05:43:48.220Z · LW · GW

I mean, I think the obvious answer is that an adult isn't universally entitled to their parents subsidizing their law school tuition. The actual concern is that people can brainwash their kids from a very early age, so they don't see the choices they actually have as legitimate, but I think that's a nearly intractable problem in any system. You could fix it, but only with absurd levels of tyranny.

Comment by Marion Z. on The Least Convenient Possible World · 2022-11-01T04:35:41.266Z · LW · GW

Only replying to a tiny slice of your post here, but the original (weak) Pascal's wager argument actually does say you should pretend to believe even if you secretly don't, for various fuzzy reasons: societal influence, that maybe God will see that you were trying, and that sheer repetition might make you believe a little bit eventually.

Comment by Marion Z. on The Least Convenient Possible World · 2022-11-01T04:31:04.800Z · LW · GW

That seems entirely reasonable, insofar as the death penalty is at all. I don't think we should be going around executing people, but if we're going to, then we might as well save a few lives by doing it.

Comment by Marion Z. on On Overconfidence · 2022-10-29T07:10:32.343Z · LW · GW

1. What is your probability that there is a god? A: 7%. It seems very unlikely for a lot of very logical reasons, but I think there's some chance the infinite recursion argument is true, and I also have to give some chance to any belief that most humans on earth are convinced of.
2. What is your probability that psychic powers exist? A: 15%. I feel like I'm way off the community standard here, but I think if there is a field of "unscientific" research that has promise, it is parapsychology. Frankly, psychology itself seems to border on pseudoscience some of the time, so I'm not remotely confident in making strong statements about complexities of the human brain. Also, as a gut check, some of the weaker claims of ESP are far easier for me to believe in than a God, and there are a lot more ways ESP could be true.
3. What is your probability that anthropogenic global warming will increase temperatures by at least 1C by 2050? A: 70%. The vast majority of that 30% negation space is based on the idea of an unpredictable green tech explosion.
4. What is your probability that a pandemic kills at least one billion people in a 5 year period by 2100? A: 10%. Almost all of that 10% is concentrated in an engineered disease. 1 billion is a hell of a lot of people, I don't think a naturally occurring illness has the capability to kill so many with modern medicine.
5. What is your probability that humans land on Mars by 2050? A: 40%. I don't know very much about space exploration, but I remember NASA making various 10- and 20-year claims over the last ~30 years, and I don't think a lot of them happened. I'm inclined to think we're a bit overconfident about Mars.
6. What is your probability that superintelligent AI (=AI better than almost every human at almost every cognitive task) exists by 2115? A: 70%. There's too much money in the problem for it not to get solved in 100 years, to be frank.

Comment by Marion Z. on Luck based medicine: my resentful story of becoming a medical miracle · 2022-10-27T20:40:27.786Z · LW · GW

The most effective diet for weight loss? Seems plausible. The most effective diet for being healthy? That sounds extremely unlikely. Even if your seven foods are nutritionally complete, you're not likely to be eating them in the right balance. Intuitive body regulation sounds good there, but in general our bodies are actually not so good at guessing that kind of thing.

Comment by Marion Z. on Luck based medicine: my resentful story of becoming a medical miracle · 2022-10-27T20:34:12.155Z · LW · GW

Yes. My uncle, who is a doctor working in gastroenterology, was talking about basically the exact same topic last week. He said they're highly confident that a significant number of patients have entirely or near-entirely psychosomatic illnesses, but it's incredibly difficult to identify when that is specifically happening, and unfortunately, due to time and money constraints, they have a tendency to just slap the label on difficult cases. We just do not know enough about the human body and how the brain affects it to be confident outside of extremely obvious cases. Even a lot of what we do know has been reexamined over the last two decades, due to edge cases being discovered and a lack of rigor in earlier testing.

Comment by Marion Z. on Toni Kurz and the Insanity of Climbing Mountains · 2022-07-08T03:01:22.315Z · LW · GW

I’m a casual climber and know a lot of former pros/serious climbers - the death rate is simply staggering. I get that these people just have the drive and can’t imagine not pushing the boundaries even further, but when a single guy can tell me three different stories about watching a fellow climber or paraglider or whatever else they do in the mountains dying in front of him, that sport is too much for me to go further into. I remember reading outdoor magazines about the exploits of the most famous climbers fifteen or twenty years ago, and I look them up now and a solid chunk of them are dead. It’s wild, but there’s something appealing about it in a primal sense.

Comment by Marion Z. on AGI Ruin: A List of Lethalities · 2022-06-12T05:09:46.429Z · LW · GW

Any individual doomsday mechanism we can think of, I would agree, is not nearly so simple for an AGI to execute as Yudkowsky implies. But I think it's quite likely there are mechanisms we can't even conceive of that an AGI could, and one or more of those might actually be quite easy to carry out secretly and quickly. I wouldn't call it guaranteed by any means, but intuitively this seems like the sort of thing that raw cognitive power might have a significant bearing on.

Comment by Marion Z. on How to Convince Me That 2 + 2 = 3 · 2022-05-24T16:29:38.993Z · LW · GW

I've always quite liked Scott Alexander's answer to the problem of evil. It is absolutely useless as a defense of Abrahamic beliefs in the real world, but is relatively satisfying to an atheist wondering how that question might theoretically be answered by a true god.

In case you're not familiar, the basic idea is that God did create a perfectly good universe full of a near-infinite number of consciousnesses experiencing total bliss at all times - then decided that he wanted more net good to exist, so he made a universe which was almost exactly the same as the first but with one incredibly minor detail changed - making it just slightly less than maximally perfect. So on and so on, because to create an identical universe is not really to create one at all.  After some absurd number of universes, we arrive at ours (this explanation requires that you believe that our universe has more net happiness than suffering, which is admittedly just taken on faith). Ours is definitely closer to balanced between perfectly good and perfectly evil than not, but it still is more good and thus worth creating.

He also implies that people who experience more suffering than happiness in their individual lives might be p-zombies, but I find that to be incredibly weird and have always left it out of explanations to people who might possibly feel that they have had a bad life.

Comment by Marion Z. on The Last Paperclip · 2022-05-18T02:30:37.748Z · LW · GW

Fascinating. I can't help feeling that A escaping D's notice was rather hand-wavey, but then so is D being aligned in the first place and not a paper clip maximiser itself, so I suppose I can't complain too much about that.

Comment by Marion Z. on Convince me that humanity is as doomed by AGI as Yudkowsky et al., seems to believe · 2022-04-13T04:47:35.773Z · LW · GW

Some sort of general value for life, or a preference for decreased suffering of thinking beings, or the off chance we can do something to help (which I would argue is almost exactly the same low chance that we could do something to hurt it). I didn't say there wasn't an alignment problem, just that an AGI whose goals don't perfectly align with those of humanity in general isn't necessarily catastrophic. Utility functions tend to have a lot of things they want to maximize, with different weights. Ensuring one or more of the above ideas is present in an AGI is important.

Comment by Marion Z. on Ideal governance (for companies, countries and more) · 2022-04-12T13:28:18.602Z · LW · GW

4th, 7th, and 66th out of ~200 is quite good? I agree that there are aspects of all of these nations that are objectionable, particularly China, but corruption seems like an odd example. I think there's a fair argument that the PRC has been extremely successful by many metrics given where the nation stood in 1945: China was in extreme poverty, and I wouldn't have expected it to improve so quickly. China is undemocratic in many ways in practice, particularly press freedom and freedom of speech, but on a gears level the system of local and regional governance is a relatively effective democracy.

Comment by Marion Z. on Convince me that humanity is as doomed by AGI as Yudkowsky et al., seems to believe · 2022-04-11T21:47:15.714Z · LW · GW

I assume that any unrestrained AGI would pretty much immediately exert enough control over the mechanisms through which an AGI might take power (say, the internet, nanotech, whatever else it thinks of) to ensure that no other AI could do so without its permission. I suppose it is plausible that humanity is capable of threatening an AGI through the creation of another, but that seems rather unlikely in practice. First-mover advantage is incalculable to an AGI. 

Comment by Marion Z. on Convince me that humanity is as doomed by AGI as Yudkowsky et al., seems to believe · 2022-04-11T18:26:56.641Z · LW · GW

Other people have addressed the truth/belief gap. I want to talk about existential risk.

We got EXTREMELY close to extinction with nukes, more than once.  Launch orders in the Cold War were given and ignored or overridden three separate times that I'm aware of, and probably more. That risk has declined but is still present. The experts were 100% correct and their urgency and doomsday predictions were arguably one of the reasons we are not all dead.

The same is true of global warming, and again there is still some risk. We probably got extremely lucky in the last decade: we happened upon the right tech and strategies, and got decent funding to combat climate change, such that it won't reach 3+ degrees of warming. But that's still not a guarantee, and it also doesn't mean the experts were wrong. It was an emergency, and it still is; the fact that we got lucky doesn't mean we shouldn't have paid very close attention.

The fact that we might survive this potential apocalypse too is not a reason to act like it is not a potential apocalypse. I agree that, empirically, humans have a decent record of avoiding extinction when a large number of scientific experts predict its likelihood. It's not a great record; we're something like 4-0 depending on how you count, which is not many data points, but it's something. What we have learned from those experiences is that the loud and extreme actions of a small group of people who are fully convinced of the risk are sometimes enough to sufficiently shift the inertia of a large society only vaguely aware of the risk to avoid catastrophe by a hair's breadth. We might need to be that group.

Comment by Marion Z. on Convince me that humanity is as doomed by AGI as Yudkowsky et al., seems to believe · 2022-04-11T18:15:03.650Z · LW · GW

That seems like extremely limited, human thinking. If we're assuming a super powerful AGI, capable of wiping out humanity with high likelihood, it is also almost certainly capable of accomplishing its goals despite our theoretical attempts to stop it without needing to kill humans. The issue, then, is not fully aligning AGI goals with human goals, but ensuring it has "don't wipe out humanity, don't cause extreme negative impacts to humanity" somewhere in its utility function. Probably doesn't even need to be weighted too strongly, if we're talking about a truly powerful AGI. Chimpanzees presumably don't want humans to rule the world - yet they have made no coherent effort to stop us from doing so, probably haven't even realized we are doing so, and even if they did we could pretty easily ignore it.

"If something could get in the way (or even wants to get in the way, whether or not it is capable of trying) I need to wipe it out" is a sad, small mindset and I am entirely unconvinced that a significant portion of hypothetically likely AGIs would think this way. I think AGI will radically change the world, and maybe not for the better, but extinction seems like a hugely unlikely outcome.