Open thread, Apr. 17 - Apr. 23, 2017
post by gilch · 2017-04-18T02:47:46.389Z · LW · GW · Legacy · 145 comments
This is the (late) weekly open thread. See the tag. You'd think we could automate this. The traditional boilerplate follows.
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
Comments sorted by top scores.
comment by MissMarble · 2017-04-20T21:54:18.936Z · LW(p) · GW(p)
Hey guys, I'm fairly new to the rationality community (only at page 350 of the book), but I think I might have experienced a belief in belief in belief. I'm trying not to spend too much time online and this story is a bit embarrassing, but I remember that Eliezer wondered about it so I figured I might as well share.
I have a pretty bad relationship with my father, and I don't think very highly of him. But one thing I notice is that whenever he does something that hurts me, or that I consider selfish, I'm always scandalized. I tried to figure out why I keep reacting that way, because if you asked me to predict my father's behavior, I'd probably come up with something pretty negative. So even if a part of me still hopes for a better relationship, it makes no sense for me to be surprised by his behavior.
Then I thought: what if I keep that surprise and anger because the thought of not being surprised by it, of being so indifferent to my own father, is monstrous to me? Thinking that I might not be sad at his funeral (not that it's close or anything like that) actually scares me. I don't know how I could live with myself if I truly, one hundred percent, gave up on my father.
So, it's not that I believe he's a good father, and it's not that I believe I should believe he's a good father. It's that I believe I should believe I should believe he's a good father.
To explain: First level of belief – I expect my father to be a good parent. Second level of belief – believing that my father is a good parent has some benefit, so I'll "believe" it to get the benefit, or the placebo effect of the benefit. Third level of belief – trying to believe that my father is a good parent has the benefit of letting me not think of myself as cold-hearted. It's making the effort that counts, not the result, so it never needed to go as far as changing what I think about my father, or changing what I think I should think. It's not that I think I should think he's a good parent; it's that I think I should try to think that.
Or maybe I just haven't truly accepted that that's the way he is. Can you accurately predict a situation and still not accept it? I usually think about the world in terms of "believing in your heart" and "believing in your mind", but shouldn't a complete understanding in your mind also change your heart?
Replies from: Viliam, quiteawhile
↑ comment by Viliam · 2017-04-21T09:34:34.431Z · LW(p) · GW(p)
Congratulations; what you wrote here makes a lot of sense! It is probably very common for people to cling to a belief because of what having that belief means about them: "Am I a good person or a bad person for believing X?"
A word of warning, though: we cannot simply reverse this stupidity, because it works both ways. For example, both "I believe in X, because I am a good person" and "I don't believe in X, because I am a sophisticated person" are ultimately about your image. In the end, the only thing relevant to forming correct beliefs about X is, well, the evidence about X. Not what it means about us.
Also, words like "bad" are probably too general. Your father can be doing a good thing A, and a bad thing B (and a morally neutral thing C) -- these facts are not mutually exclusive. It might make more sense to be more specific about the ways he disappoints you, and the ways he doesn't.
Replies from: MrMind
↑ comment by quiteawhile · 2017-04-20T23:30:49.763Z · LW(p) · GW(p)
Hey you, I was browsing this thread to see if new people maybe post here first if they want to keep low key. But since you're also new I decided to read your comment and get my bearings, I think I might have some insight into what you're experiencing and I'll reply below, but first:
full disclosure/disclaimers: English is not my first language; my memory is shit and I'm new to all this, so the jargon is beyond me at this point; I'm admittedly ignorant about most things, especially when it feels like I should really, really, really have known about them before I felt so lonely thinking in this weird way that most people don't get; I'm also slightly inebriated, so I might miss the point entirely. I guess what I'm trying to do is warn you that I don't know enough of anything, so this might not be worth your time, and I'm sorry about it.
That said, while I think that not wanting to become a monster is a good reason to do (or not do) almost anything, I also think that the first apparently valid conclusion might not be the important one. For example, I think you should consider the possibility that you are having a hard time letting go of the hope that your father is a decent human. I think you should think about why it would be bad to expect a shit person to be shit; IMO you should be entitled to pick who gets to affect you emotionally, if anyone.
Were I in your position, I'd open a mind map and brainstorm with myself for a while to try and figure it out. Best of luck, whatever you decide to do :)
comment by WhySpace_duplicate0.9261692129075527 · 2017-04-20T02:28:01.108Z · LW(p) · GW(p)
TL;DR: What are some movements you would put in the same reference class as the Rationality movement? Did they also spend significant effort trying not to be wrong?
Context: I've been thinking about SSC's Yes, We have noticed the skulls. They point out that aspiring Rationalists are well aware of the flaws in straw Vulcans, and actively try to avoid making such mistakes. More generally, most movements are well aware of the criticisms of at least the last similar movement, since those are the criticisms they are constantly defending against.
However, searching "previous " in the comments doesn't turn up any actual examples.
Full question: I'd like to know if anyone has suggestions for how to go about doing reference class forecasting to get an outside view on whether the Rationality movement has any better chance of succeeding at its goals than other, similar movements. (Will EA have a massive impact? Are we crackpots about Cryonics, or actually ahead of the curve? More generally, how much weight should I give to the Inside View, when the Outside View suggests we're all wrong?)
The best approach I see is to look at past movements. I'm only really aware of Logical Positivism, and maybe Aristotle's Lyceum, and I have a vague idea that something similar probably happened in the enlightenment, but don't know the names of any smaller schools of thought which were active in the broader movement. Only the most influential movements are remembered though, so are there good examples from the past ~century or so?
And, how self-critical were these groups? Every group has disagreements over the path forward, but were they also critical of their own foundations? Did they only discuss criticisms made by others, and make only shallow, knee-jerk criticisms, or did they actively seek out deep flaws? When intellectual winds shifted, and their ideas became less popular, was it because of criticisms that came from within the group, or from the outside? How advanced and well-tested were the methodologies used? Were any methodologies better-tested than Prediction Markets, or better grounded than Bayes' theorem?
Motive: I think on average, I use about a 50/50 mix of outside and inside view, although I vary this a lot based on the specific thing at hand. However, if the Logical Positivists not only noticed the previous skull, but the entire skull pile, and put a lot of effort into escaping the skull-pile paradigm, then I'd probably be much less certain that this time we finally did.
Replies from: fubarobfusco, WhySpace_duplicate0.9261692129075527, ChristianKl, Viliam
↑ comment by fubarobfusco · 2017-04-20T23:25:27.336Z · LW(p) · GW(p)
Just a few groups that have either aimed at similar goals, or have been culturally influential in ways that keep showing up in these parts —
- The Ethical Culture movement (Felix Adler).
- Pragmatism / pragmaticism in philosophy (William James, Charles Sanders Peirce).
- General Semantics (Alfred Korzybski).
- The Discordian Movement (Kerry Thornley, Robert Anton Wilson).
- The skeptic/debunker movement within science popularization (Carl Sagan, Martin Gardner, James Randi).
General Semantics is possibly the closest to the stated LW (and CFAR) goals of improving human rationality, since it aimed at improving human thought through adopting explicit techniques to increase awareness of cognitive processes such as abstraction. "The map is not the territory" is a g.s. catchphrase.
↑ comment by WhySpace_duplicate0.9261692129075527 · 2017-05-04T09:33:27.999Z · LW(p) · GW(p)
Note to self, in case I come back to this problem: the Vienna Circle fits the bill.
↑ comment by ChristianKl · 2017-04-20T18:01:48.400Z · LW(p) · GW(p)
It's hard to find the reference class, because our rationality movement owes its existence to the internet.
If you take a pre-internet self-development movement like Landmark education it's different in many ways and it would be hard for me to say that it's in the same reference class as our rationality movement.
↑ comment by Viliam · 2017-04-20T14:38:34.945Z · LW(p) · GW(p)
There is always going to be some difference, so I am going to ignore medium-sized differences here and cast a wide net:
- scientists -- obviously, right?
- atheists -- they usually have "reason" as their applause light (whether deservedly or not)
- "social engineers" of all political flavors, including SJWs -- believe themselves to know better than the uneducated folks
- psychoanalysts
- behaviorists
- mathematicians
- philosophers
↑ comment by ChristianKl · 2017-04-20T17:10:48.019Z · LW(p) · GW(p)
atheists -- they usually have "reason" as their applause light (whether deservedly or not)
I think there are a variety of different people who are atheists. Marx was an atheist, but he's not in the same movement as Richard Dawkins. The same goes for the terms mathematician and philosopher.
Replies from: Viliam
↑ comment by Viliam · 2017-04-21T09:21:36.559Z · LW(p) · GW(p)
I mostly agree, however... although Marx is not in the same movement as Dawkins, I think even Marx somewhat belongs to the rationalist reference class (just not in the same way as Dawkins).
But this is merely a question of degree -- when two things are far enough in the thingspace that it doesn't make sense to consider them the same cluster anymore. Dawkins is closer to LW than Marx is, but both are closer than... uhm... people who don't even try to use reason / math / reductionism; so it depends on how closely you zoom in to the picture. I tried to err on the side of inclusion.
Replies from: ChristianKl
↑ comment by ChristianKl · 2017-04-21T11:37:26.975Z · LW(p) · GW(p)
The question was about movements. Whether or not someone is in the same movement isn't just a question of whether they are close to each other.
comment by Sandi · 2017-04-22T03:11:04.598Z · LW(p) · GW(p)
The SSC article about omega-6 surplus causing criminality brought to my attention the physiological aspect of mental health, and health in general. Up until now, I prioritized mind over body. I've been ignoring the whole "eat well" thing because 1) it's hard, 2) I didn't know how important it was and 3) there's a LOT of bullshit literature. But since I want to live a long life and I don't want my stomach screwing with my head, the reasonable thing to do would be to read up. I need book (or any other format, really) recommendations on nutrition 101. Something practical, the do's and don'ts of food and research citations to back it up. On a broader note, I want to learn more about biodeterminism, also from a practical perspective. There might be conditions in my environment causing me issues that I don't even know of. It goes beyond nutrition.
Replies from: morganism, Viliam, Lumifer, Benquo, MrCogmor
↑ comment by morganism · 2017-04-24T19:27:33.185Z · LW(p) · GW(p)
You Can’t Trust What You Read About Nutrition
http://fivethirtyeight.com/features/you-cant-trust-what-you-read-about-nutrition/
"Some populations today thrive on very few vegetables, while others subsist almost entirely on plant foods. The takeaway, Archer said, is that our bodies are adaptable and pretty good at telling us what we need, if we can learn to listen."
↑ comment by Viliam · 2017-04-24T10:24:08.985Z · LW(p) · GW(p)
How Not to Die, and the videos at https://nutritionfacts.org/
↑ comment by Lumifer · 2017-04-23T17:30:39.662Z · LW(p) · GW(p)
Nutrition is pretty messy. I'd recommend self-experimentation (people are different), but if you want a book, something like Perfect Health Diet wouldn't be a bad start. It sounds a bit clickbaity, but it's a solid book.
↑ comment by Benquo · 2017-04-23T16:33:33.777Z · LW(p) · GW(p)
Holden's Powersmoothie page is a decent short review, not comprehensive, not very detailed.
↑ comment by MrCogmor · 2017-04-23T12:36:05.474Z · LW(p) · GW(p)
Nutrition is taught in colleges so that people can become accredited dieticians. You should be able to find a decent undergrad textbook on Amazon. If you buy a used copy one edition behind the current one, it should be cheap as well.
comment by ebook · 2017-04-18T11:00:42.228Z · LW(p) · GW(p)
https://happinessbeyondthought.blogspot.com/2017/04/can-we-survive-wour-current-os-and.html
Replies from: ChristianKl
↑ comment by ChristianKl · 2017-04-18T11:56:57.776Z · LW(p) · GW(p)
Why do you recommend the article?
Replies from: ebook
↑ comment by ebook · 2017-04-18T16:36:07.284Z · LW(p) · GW(p)
Because I think it's a worthwhile avenue for investigation, regarding existential risk reduction and self-improvement. Each user here is like a node in a large network, our species, and positive impact can be great if we only changed ourselves. I'm guilty of not doing so. So take what I say with that in mind.
Replies from: Lumifer
↑ comment by Lumifer · 2017-04-18T17:26:49.130Z · LW(p) · GW(p)
Because I think it's a worthwhile avenue for investigation
What exactly in that word salad is a worthwhile avenue?
Replies from: ebook
↑ comment by ebook · 2017-04-18T18:43:04.194Z · LW(p) · GW(p)
I don't think it's a word salad; since you didn't say what the salad is, I will have to guess. "Operating System" is used for symbolic logic (subject/doing/object), and the operating-system terminology refers to processes which one undergoes, not to literal software.
What is worthwhile is:
a) Use software removal tools on the ego/I program and its subroutine "I am this body."
b) Discontinue using and supporting the confirmation bias program.
c) Discontinue using the reciprocal altruism program and replace with an open source version.
d) Use malware removal tools on the "attachments" programs.
e) Discontinue using the "free will, I'm in control" program.
Of these, a) is arguably the most important, since it's the first step in the article; the following steps, like overcoming bias, can only be part of the solution once it is taken. This is gone into in detail in this blog post: https://happinessbeyondthought.blogspot.com/2012/08/what-is-direct-path-to-nondual.html
FYI, Gary Weber (a Ph.D. in an unrelated field) is the writer of the articles, not me, and Ramana Maharshi is a sage from India about whom Carl Jung said the following:
Sri Ramana is a true son of the Indian earth. He is genuine and, in addition to that, something quite phenomenal. In India he is the whitest spot in a white space. What we find in the life and teachings of Sri Ramana is the purest of India; with its breath of world-liberated and liberating humanity, it is a chant of millenniums...
Replies from: Lumifer
↑ comment by Lumifer · 2017-04-18T19:27:23.894Z · LW(p) · GW(p)
OK, let me phrase it this way: what is good/new/interesting here besides rephrasing the standard Eastern "path to enlightenment" using programming metaphors?
Replies from: ebook
↑ comment by ebook · 2017-04-18T20:27:55.419Z · LW(p) · GW(p)
The connection between our current 'dystopian' present and our default mode of being, if even that, and the possible solution for both now and the future. A Mars colony would lead to the same scenario if the default mode isn't changed, for example. But a parallel civilization to Earth won't be outside the grasp of super-intelligence.
Unfortunately it seems unlikely that anyone will change, even existential risk researchers. But nonetheless, one should only focus on oneself. That's the mistake I make by telling you this. (In the sense that I have not changed my OS yet, and yet think I know what's right from the same type of being.)
Replies from: Lumifer
↑ comment by Lumifer · 2017-04-18T20:43:45.611Z · LW(p) · GW(p)
The connection between our current 'dystopian' present and our default mode of being, if even that, and the possible solution for both now and the future.
That's still bog-standard Eastern enlightenment: until you abandon self, you are caught in the wheel of karma where you will suffer; you need to change not the external world, but yourself. Searching for the bull and all that.
All that has been hashed out in the Hinduist/Buddhist tradition for centuries and has been mulled over in the West for a hundred years or so by now. So..?
Replies from: ebook
↑ comment by ebook · 2017-04-18T20:57:31.583Z · LW(p) · GW(p)
The 'dystopian' present is about more than one's own suffering; this article is also about our species' survival. The external world is always changed through oneself (we are all different nodes in this network of individuals), so by changing oneself, one changes the world. By that I mean one has direct access only to one's own brain, however that works, and the rest is indirect.
So yes, change yourself for your own suffering, but by changing yourself you change the world. (How else would it be? Change others? That's still your own doing, via a change you made in yourself.)
Rarely does anyone know this or take it seriously; not even mindfulness qualifies for a). So we are in a 'dystopian' present, or rather in the dystopian future of our species, prior to evolving out of this default mode.
Replies from: Lumifer
↑ comment by Lumifer · 2017-04-18T21:05:01.229Z · LW(p) · GW(p)
Right, once you achieve enlightenment you are expected to be guided by compassion for all beings and so can choose to re-enter the world as a bodhisattva and help/change the world.
That all is intro to Buddhism, basically (by the way, the programming metaphors don't help). What's special about this particular message? We've been living in Kali Yuga for a while.
Replies from: ebook
↑ comment by ebook · 2017-04-18T21:18:52.262Z · LW(p) · GW(p)
Well, using your terms, we all need to achieve enlightenment if our species is to survive, on Earth, Mars, or wherever. We're running out of time, with super-intelligence a few decades away. It starts with myself (or yourself, from your perspective); as we are interconnected, it's the only way to do this. Otherwise everyone will demand that others achieve it and no one will actually do it.
I'm not all that familiar with Buddhism, but probably a lot of it is distraction and unnecessary; as with all religions, dogma comes alongside it. We need to do this secularly, as efficiently as possible, and with the help of modern science. By the way, enlightenment (to use your term; nondual awakening is the actual one) was only step one. Overcoming bias etc. is undoubtedly necessary as well :/
Replies from: Lumifer
↑ comment by Lumifer · 2017-04-19T00:28:20.578Z · LW(p) · GW(p)
we all need to achieve enlightenment if our species is to survive
That doesn't seem obvious to me.
super-intelligence a few decades away
That doesn't seem obvious to me either.
We need to do this secular, as efficiently as possible and with the help of modern science
Well, let me repeat myself. That stuff you posted is pretty standard garden-variety Eastern (that is, Hinduist/Buddhist) enlightenment. If you think it's radically different (science, etc.), pray tell in which ways it's different.
Replies from: ebook
↑ comment by ebook · 2017-04-19T09:19:24.020Z · LW(p) · GW(p)
Since super-intelligence isn't the only existential threat I am aware of, I wonder why you don't think 'enlightenment' is necessary for our species' survival? From the point of view of my brain, under the spell of this default mode, it seems obvious that it is at least not beneficial for outcomes that are very complex. For example, for simplicity: if whatever I am researches AGI and there is self-referential thought with attachment to that thought, that's distracting.
I'm not sure to what extent our default mode is cause or mere correlate of, for example, sustainability/excess consumption, or of being unable to change one's mind. I wonder about the following steps in the article, like overcoming bias, and how efficient they are without the necessary change of brain mode. I say 'necessary' from an anecdotal, first-person account of the uselessness of this spell and the amount of suffering it entails. But research into the brain, and correlations between the spell, behavior, suffering, etc., paints a picture which at least makes it seem as if it's probably a good idea to change brain mode.
Well, let me repeat myself. That stuff you posted is pretty standard garden-variety Eastern (that is, Hinduist/Buddhist) enlightenment. If you think it's radically different (science, etc.), pray tell in which ways it's different.
I agree, I went off-topic, so I will continue off-topic: I'm not sure to what extent, for example, self-inquiry is scientifically validated, or the removal of attachments; in the worst case only by psychometrics, a not-so-objective measurement. But I think brain scans do show something which can be correlated with practices otherwise measured with psychometrics only.
Replies from: Lumifer
↑ comment by Lumifer · 2017-04-19T14:48:52.324Z · LW(p) · GW(p)
I wonder why you don't think 'enlightenment' is necessary for our species' survival?
Why would it be necessary?
it seems obvious that it is at least not beneficial for outcomes that are very complex
Doesn't seem obvious to me at all. Could you demonstrate? Using real-life evidence? I suspect there is a fair amount of the, ahem, nirvana fallacy happening here.
sustainability/excess consumption
Is it an existential problem? Sure doesn't look like that to me.
self-inquiry is scientifically validated
That sentence makes no sense.
brain scans do show something which can be correlated with practices
Sure, but so what? Lots of things show up on brain scans. The question is whether what you are talking about is meaningfully different from what Hinduism calls moksha and Buddhism calls nirvana.
Replies from: ebook
↑ comment by ebook · 2017-04-19T16:27:26.551Z · LW(p) · GW(p)
Doesn't seem obvious to me at all. Could you demonstrate? Using real-life evidence? I suspect there is a fair amount of the, ahem, nirvana fallacy happening here.
For example, if I am an AGI researcher and I have a self-referential narrative, that's distracting me from the process of writing a paper; though in the majority of cases it's not there while writing, but during downtime.
The nirvana fallacy "The nirvana fallacy is a name given to the informal fallacy of comparing actual things with unrealistic, idealized alternatives. It can also refer to the tendency to assume that there is a perfect solution to a particular problem. A closely related concept is the perfect solution fallacy."
What constitutes a thing? A paper on AI risk? I didn't make the argument that the paper is equal to a brain state; the paper still has to be written. But how good is the paper written in the default consciousness versus the nondual awakened consciousness (the corresponding network is called the task-positive network)? The paper is only an example; imagine every individual doing more good for their role, and more.
Perfect solution fallacy "The perfect solution fallacy is a related informal fallacy that occurs when an argument assumes that a perfect solution exists or that a solution should be rejected because some part of the problem would still exist after it were implemented."
It has, in my opinion, perfect usefulness for existential risk, among other things. It's so useful it might as well be called the solution in the first place.
Because it's the first link in a chain leading to the solution. If eating vegetables were a link in a chain leading to a solution to AGI risk, it wouldn't be because it's the solution, but because it's useful.
Why would it be necessary?
It's the most useful intervention, ever. The expected value is probably not higher for anything else.
Is it an existential problem? Sure doesn't look like that to me.
No, you're right, but it's the first thing I could think of that might be correlated with whatever we should call this. However, being unable to change one's mind probably is. The excess consumption could instead become donations to appropriate scientists, if people so chose, because what is left, in bliss, but to help others?
That sentence makes no sense.
http://www.sriramanamaharshi.org/wp-content/uploads/2012/12/who_am_I.pdf
What I wonder is whether it is validated by science to work to the extent that it is causal to the state we strive for, or whether it's only a correlation with psychometrics, like Hood's Mysticism Scale, and with neural correlates after an anecdotal report of completion of the state.
Sure, but so what? Lots of things show up on brain scans. The question is whether what you are talking about is meaningfully different from what Hinduism calls moksha and Buddhism calls nirvana.
Well, science is useful and it can probably help to a large degree. I don't know exactly, but why would it matter? It's more than just reading; it's the direct experience that matters. That's what all of these writers have probably had.
Replies from: Lumifer
↑ comment by Lumifer · 2017-04-19T17:15:03.975Z · LW(p) · GW(p)
If I am an AGI researcher and I have a self-referential narrative, that's distracting me from the process of writing a paper
That's still not obvious to me. Here you are talking about the ability to focus, basically. This ability doesn't seem to require enlightenment and can be e.g. chemically enhanced.
What constitutes a thing?
Brain state. You're comparing the imperfect, deficient brain states of actual live humans with what you imagine could be possible if only proper enlightenment was achieved. You're comparing something real with something you imagine.
It is the perfect usefulness ... It's the most useful intervention, ever.
You haven't shown that, just asserted. Again, see the nirvana fallacy.
Replies from: ebook
↑ comment by ebook · 2017-04-19T18:17:04.824Z · LW(p) · GW(p)
That's still not obvious to me. Here you are talking about the ability to focus, basically. This ability doesn't seem to require enlightenment and can be e.g. chemically enhanced. Brain state. You're comparing the imperfect, deficient brain states of actual live humans with what you imagine could be possible if only proper enlightenment was achieved. You're comparing something real with something you imagine.
Nootropics and stimulants have to be re-dosed and used constantly; many have side effects of that kind, and are subject to dependence and tolerance. If you are able to enter a flow state or a focused one without them, why bother? (What happens if you combine both anyway? I don't know.)
Besides, the effects of certain stimulants, like nicotine, deactivate the default mode and activate the nondual awakened network. https://link.springer.com/article/10.1007/s00213-011-2221-8 Although it is unknown whether it also does so during rest.
The median score on the Hood Mysticism scale (a psychometric) is 154.50 https://drive.google.com/file/d/0B9IyLjPYAVCYYWViNjc0OGItNzBjMi00OTEyLTg4ZjctZDM2Nzk4YzY3NjJl/view page 65 (78 in google docs)
comment by Thomas · 2017-04-18T06:31:23.015Z · LW(p) · GW(p)
A problem to ponder ..
https://protokol2020.wordpress.com/2017/04/18/logic-problem/
Replies from: Gurkenglas, arundelo, Lumifer
↑ comment by Gurkenglas · 2017-04-19T11:51:44.788Z · LW(p) · GW(p)
There are uncountably many sentences of countably infinite length. The sentence at hand is countably long and thus cannot contain them all. It cannot even contain two countably infinite strings that don't have a common infinite suffix.
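Gurkenglas's first claim is Cantor's diagonal argument: given any countable enumeration of infinite binary strings, one can build a string that differs from the n-th string at position n, so no countable list contains them all. A minimal sketch (illustrative only, not from the thread; an infinite string is modeled as a function from positions to digits):

```python
def diagonal(enumeration, n):
    """Position n of a binary string guaranteed to differ from the
    n-th enumerated string at position n (Cantor's diagonal)."""
    return 1 - enumeration(n)(n)

# A sample enumeration: string k has a 1 only at position k.
enum = lambda k: (lambda n: 1 if n == k else 0)

# The diagonal string disagrees with every enumerated string somewhere:
assert all(diagonal(enum, k) != enum(k)(k) for k in range(100))
```

Since the diagonal string escapes any fixed countable list, a single countably long sentence cannot enumerate all infinite sentences.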
Replies from: Thomas, Thomas
↑ comment by Thomas · 2017-04-19T12:04:07.590Z · LW(p) · GW(p)
There are uncountably many infinite strings if you permit any content whatsoever. But when you have some restrictions, that may or may not be the case.
Take for example rational numbers a/b, where 0<=a<b and a and b are naturals. They represent a countably infinite set of infinite sequences (their decimal expansions).
These restrictions here may do the same.
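Thomas's example can be made concrete: the decimal expansion of a rational a/b is an infinite digit string, yet it is eventually periodic, so each one is determined by a finite amount of data and the whole family is countable. A sketch via long division (my own illustration, not part of the puzzle):

```python
def decimal_expansion(a, b, digits=20):
    """First `digits` digits of a/b after the decimal point, plus the
    repeating cycle, found by tracking remainders during long division.
    A remainder repeating forces the digits to repeat, so the infinite
    string is captured by a finite prefix and a finite cycle."""
    seen = {}   # remainder -> index in `out` where it first appeared
    out = []
    r = a % b
    while r and r not in seen:
        seen[r] = len(out)
        r *= 10
        out.append(r // b)
        r %= b
    cycle = out[seen[r]:] if r else []  # empty cycle = terminating decimal
    return out[:digits], cycle

# 1/7 = 0.142857142857...: one infinite string, fully determined
# by the finite repeating block [1, 4, 2, 8, 5, 7].
print(decimal_expansion(1, 7))
```

Since only b distinct remainders exist, the cycle has length at most b, which is why only countably many infinite strings arise this way.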
↑ comment by arundelo · 2017-04-18T14:57:55.437Z · LW(p) · GW(p)
This statement has the letter “T” at the beginning; the next two letters are “h” and “i”; which are followed by “s s”; … ; the first letter is then repeated inside double quotes; …
What do the ellipses ("...") mean?
Replies from: Thomas
↑ comment by Lumifer · 2017-04-18T14:41:25.170Z · LW(p) · GW(p)
I think the question needs to be cleaned up a bit. In particular, the "uniquely and completely describes every English statement possible" part is iffy. First, this statement is infinite and English statements are finite. Second, this statement specifies that there is the letter "T" at the beginning which is obviously not true for all statements.
Do you want to say that every English statement can be converted to this form?
Replies from: Thomas
↑ comment by Thomas · 2017-04-18T15:18:56.103Z · LW(p) · GW(p)
Somewhere in this text, or in this statement, a pause in the self-describing is declared and taken, and the next statement is described. Maybe only a part of it; then the self-describing is continued.
Do you want to say that every English statement can be converted to this form?
No. I am saying that somewhere inside this mammoth statement, every finite English statement is described. Which is maybe possible.
But also, what about those which are not finite, like this one?
Replies from: Lumifer, username2
↑ comment by Lumifer · 2017-04-18T17:23:07.878Z · LW(p) · GW(p)
So it's a nested structure?
and the next statement is described
Where is this "next statement" coming from?
If this statement is generating them itself you need to get more specific about the rules according to which it operates.
Replies from: Thomas
↑ comment by Thomas · 2017-04-18T17:50:39.417Z · LW(p) · GW(p)
I will try.
The English statement we are referring to is like a program, an algorithm which writes itself. The first thing mentioned is its first letter. Then another and another, and entire substrings inside quotation marks. Blanks, commas, and periods are also described when needed.
This way, the statement is auto-described.
Beside this, it describes some other English statements too. Like this one, here.
Okay, so far?
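The letter-by-letter description Thomas has in mind can be sketched as a function mapping any string to an English description of it; a genuinely self-describing statement would then be a fixed point of such a function, describe(s) == s, in the spirit of a quine. A hypothetical illustration (the function name and phrasing are my own, not the puzzle's actual construction):

```python
def describe(s):
    """Produce a letter-by-letter English description of a string,
    in the style of the puzzle's self-describing statement."""
    parts = [f'This statement has the letter "{s[0]}" at the beginning']
    for ch in s[1:]:
        parts.append(f'the next letter is "{ch}"')
    return "; ".join(parts) + "."

print(describe("This"))
```

Note that describe(s) is always much longer than s, which is why a fixed point, if one exists, cannot be finite: the description must keep chasing its own growing tail.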
Replies from: Lumifer
↑ comment by Lumifer · 2017-04-18T17:56:40.504Z · LW(p) · GW(p)
is like a program, an algorithm which writes itself ... this statement is auto-described
Sure.
Beside this, it describes some other English statements too. Like this one, here.
Why?
If it's an algorithm that generates text describing its own preceding text, I see no reason for it to generate e.g. "Beside this, it describes some other English statements too."
Or let's make it simpler. Take a word, say "xyzzy". Why would this algorithm ever generate this word?
Replies from: Thomas, Thomas
↑ comment by Thomas · 2017-04-18T19:53:04.448Z · LW(p) · GW(p)
If it's an algorithm that generates text describing its own preceding text
It does that, but not exclusively.
Replies from: Lumifer↑ comment by Lumifer · 2017-04-18T19:54:22.905Z · LW(p) · GW(p)
So define your algorithm, then.
Replies from: Thomas↑ comment by Thomas · 2017-04-18T19:59:07.012Z · LW(p) · GW(p)
It just talks and talks about everything you can imagine. Every now and then it describes its more and more distant beginning.
Replies from: Lumifer↑ comment by Lumifer · 2017-04-18T20:39:12.275Z · LW(p) · GW(p)
It just talks and talks about everything you can imagine
That's not an algorithm about which you can ask questions and receive well-defined answers.
Replies from: Thomas↑ comment by Thomas · 2017-04-18T21:43:52.504Z · LW(p) · GW(p)
We all know what a natural language like English is, and what a text in such a language looks like.
And how it can describe itself, letter by letter, for example.
Replies from: Lumifer↑ comment by Lumifer · 2017-04-19T00:31:42.333Z · LW(p) · GW(p)
We all know what a natural language like English is
Oh, do we now? Let's try.
Thi sun shines a teeny bit strongir evry day, & tho itil b a long time b4 nybody can c it wif thi naykid I, thi straz 1/2 moovd.
Is this a natural language, English, maybe?
Replies from: gjm, Thomas↑ comment by Thomas · 2017-04-18T18:52:57.567Z · LW(p) · GW(p)
Take a word, say "xyzzy"
It will not. It has no meaning in English.
I see no reason for it to generate e.g. "Beside this, it describes some other English statements too."
Why not? It's defined so that it can explicitly describe any meaningful sentence in English. The question is only: can it describe all of them?
Replies from: gjm, Lumifer↑ comment by gjm · 2017-04-18T19:06:39.665Z · LW(p) · GW(p)
It has no meaning in English.
Of course it has. It means "Please transport me from the small building to the debris room."
Replies from: entirelyuseless, Thomas↑ comment by entirelyuseless · 2017-04-19T01:33:09.007Z · LW(p) · GW(p)
In the Silicon Dreams text adventure series (a sci-fi series) if you say "xyzzy" a robot will come up to you and assume that you are insane (since you are trying to cast a magical spell) and transport you to the appropriate place for insane people.
↑ comment by Thomas · 2017-04-18T19:50:58.921Z · LW(p) · GW(p)
Good, "xyzzy" has meaning.
Then, like any other word, it can be invoked as:
...;now, the self description is paused, to discuss some words with "x", "y" and "z" inside them ... ;...
Replies from: gjm↑ comment by gjm · 2017-04-20T02:06:19.167Z · LW(p) · GW(p)
Jolly good. But I think your problem is not stated clearly enough to be soluble.
Replies from: Thomas↑ comment by Thomas · 2017-04-20T05:44:13.376Z · LW(p) · GW(p)
Gurkenglas solved it anyway.
Replies from: gjm↑ comment by gjm · 2017-04-20T15:12:36.083Z · LW(p) · GW(p)
Congratulations to Gurkenglas on guessing what you meant, then. Gg's understanding differs from what I assumed, in so far as I was able to figure out your meaning, but the actual content of the puzzle was simple enough that I guess the obfuscation was most of the point.
Replies from: Han↑ comment by Han · 2017-04-20T15:49:21.323Z · LW(p) · GW(p)
I thought Gurkenglas' solution was a lovely discrete math sledgehammer approach. There's a lot of subtly different problems that Thomas could have meant and I think Gurkenglas' approach would probably be enough to tackle most of them.
(Attempting to summarize his proof: Some English sentences, like the one this problem is asking you to dig around in, are countably infinite in length. If some English sentences are countably infinite in length, and any two of them have different infinite suffixes, then there's no way the text of this sentence contains both of them.)
Replies from: gjm, Lumifer↑ comment by gjm · 2017-04-20T20:30:20.881Z · LW(p) · GW(p)
I think your summary is wrong, I'm afraid. The sentence could contain two different infinite suffixes -- e.g., by interleaving them. (Well, maybe; it still isn't clear to me what sort of descriptions Thomas is intending to allow.) The problem isn't that having multiple infinite suffixes is a problem, it's that (at least for certain rather non-standard notions of what a "sentence" is) there are uncountably many different sentences and if they all have to be described separately you're dead.
If you allow one description to cover multiple sentences, though, you can cover all those uncountably many with something countably long. Suppose the words of the language are W1, W2, ..., Wn and suppose a "sentence" is any string of finitely or countably many of them. (That's not true for any actual natural language, of course, but this language does have uncountably many "sentences".) Then you could say: "A sentence consists of the empty string, or: of one of W1,...,Wn followed either by the empty string or by: one of W1,...,Wn followed either by the empty string or by: ......".
You could also, though this is a further step away from the sort of description I think Thomas wants to allow, do that in finite space. In fact, I already did, earlier in the paragraph above.
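As a toy illustration of the cardinality point, here is a sketch of Cantor's diagonal argument in Python. It assumes, as a simplification, that an "infinite sentence" over a two-word vocabulary is just a function from positions to words; the names are made up for illustration:

```python
# Toy model: an "infinite sentence" over a two-word vocabulary is a
# function from position (0, 1, 2, ...) to a word. Given ANY countable
# enumeration of such sentences, the diagonal construction produces a
# sentence the enumeration misses -- which is why there are uncountably
# many, and why they cannot all be called out one by one.

WORDS = ("W1", "W2")

def flip(word):
    # Swap the two words of the vocabulary.
    return WORDS[1] if word == WORDS[0] else WORDS[0]

def diagonal(listing):
    """listing: k -> (n -> word), a countable enumeration of infinite
    sentences. Returns a sentence differing from listing(k) at position k."""
    return lambda n: flip(listing(n)(n))

# Example enumeration: sentence k says W2 at position k and W1 elsewhere.
listing = lambda k: (lambda n: WORDS[1] if n == k else WORDS[0])
d = diagonal(listing)

# The diagonal sentence disagrees with every listed sentence somewhere:
assert all(d(k) != listing(k)(k) for k in range(1000))
```

Interleaving can let one description cover several infinite suffixes at once; the diagonal shows no countable listing can cover all infinite sentences individually.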
Replies from: Han↑ comment by Han · 2017-04-21T00:33:04.736Z · LW(p) · GW(p)
I think you're right. I'm badly overlooking a subtlety because I'm narrowing "describe" down to "is a suffix of." But you're right that "describe" can be extended to include a lot of other relationships between parts of the big sentence and little sentences, and you're also right that this argument doesn't necessarily apply if you unconstrain "describe" that way. (I haven't formalized exactly what you can constrain "describe" to mean -- only that there are definitions that obviously make our sledgehammer argument break.)
I think "a sentence can be countably infinite" is implicit from the problem description, because the problem implies that our "giant block of descriptions" sentence probably has countably infinite size (it can't exactly be uncountably infinite).
↑ comment by Lumifer · 2017-04-20T16:06:53.856Z · LW(p) · GW(p)
One of the problems was that Thomas spoke about a "natural language" that we all know what it looks like. And a natural language does not have infinite-length sentences.
Replies from: gjm↑ comment by gjm · 2017-04-20T20:15:36.615Z · LW(p) · GW(p)
Another is that Thomas's question didn't make it clear that each sentence covered had to be called out individually, as opposed to constructing some description that covers exactly the right sentences. Another is that even if we suppose our natural language augmented by allowing infinite sentences, it's not clear that it should allow non-computable infinite sentences.
Replies from: Thomas↑ comment by Thomas · 2017-04-21T07:53:04.984Z · LW(p) · GW(p)
The whole discussion, plus the posted problem, plus the (re)defining of the problem itself -- is more interesting than the problem alone.
As I see it, these are my assertions:
- an infinite sentence is possible whenever infinity is permitted
- with such an infinite sentence you can uniquely describe all the finite sentences
- even if the infinite sentence is self-describing
- with such an infinite sentence you can uniquely describe a countably infinite number of infinite sentences, too
- there are uncountably many such infinite sentences
- some of them can be described by some finite sentence
- every finite or infinite sentence can be uniquely described by some infinite sentence, and also by uncountably many of them
- there may be some finite self-describing sentences in English
By self-describing I mean the Quine type of sentence: a self-reproducing sentence.
Funny, but the whole set of infinite sentences can be described by just one finite sentence.
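Since "Quine type" sentences came up: a minimal self-reproducing program makes the idea concrete. Here is a sketch in Python (any language would do); a quine is the programming analogue of a finite self-describing sentence.

```python
# A minimal Python quine: running this program prints its own source
# code exactly. The string s is a template containing a placeholder (%r)
# for itself; substituting s into s reproduces the whole program.
s = 's = %r\nprint(s %% s)'
print(s % s)
```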
↑ comment by Lumifer · 2017-04-18T19:32:24.261Z · LW(p) · GW(p)
It has no meaning in English.
It's defined so that it can explicitly describe any meaningful sentence in English.
So what is this definition?
As far as I can see, your algorithm starts with a specific statement and then basically recurses forever creating the "mammoth statement". But if you start with a different statement, you'll get a different "mammoth statement". And I still don't see how one single "mammoth statement" will describe any meaningful sentence (and why the constraint on meaningful? meaningful to whom?)
comment by morganism · 2017-04-23T23:27:34.338Z · LW(p) · GW(p)
A new birth control method for men
"Guha’s technique for impairing male fertility relies on a polymer gel that’s injected into the sperm-carrying tubes in the scrotum."
Replies from: MrMind↑ comment by MrMind · 2017-04-26T13:02:54.228Z · LW(p) · GW(p)
I truly don't understand the male birth control industry vs. the condom, which at once protects from STDs and unwanted pregnancy. They all seem suboptimal to me.
Replies from: ChristianKl↑ comment by ChristianKl · 2017-05-01T10:58:11.791Z · LW(p) · GW(p)
Condoms aren't very effective at preventing pregnancy:
18 out of 100 people who use condoms as their only birth control method will get pregnant each year.
Effectiveness aside, many people find it more pleasant to have sex without a condom: they prefer the feeling of skin touching, and they don't want to interrupt the sex act to put one on.
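To put that 18%-per-year figure in perspective, a quick back-of-the-envelope compounding calculation (assuming, as a simplification, that each year's risk is independent):

```python
# Chance of at least one unintended pregnancy over n years of typical
# condom use, from the quoted 18%-per-year failure rate. Independence
# across years is an assumed simplification, not a claim from the source.
rate = 0.18
for years in (1, 5, 10):
    p = 1 - (1 - rate) ** years
    print(f"after {years} year(s): {p:.0%}")
# prints roughly 18%, 63%, and 86%
```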
comment by morganism · 2017-04-21T22:26:18.017Z · LW(p) · GW(p)
This looks like a great database conversion tool.
https://flowheater.net/en/about
"The aim of FlowHeater is to offer a simple and uniform way to transfer data from one place to another, providing a simple graphical user interface to define the modifications specific to each data target. No programming knowledge is required to use FlowHeater."
The Fitter automatically undertakes many necessary modifications according to changes of data environment, so there is no need to worry about such conversions; this is especially useful when the data source and target are for different locales (e.g. German date formats are converted to American date formats).
Of course, FlowHeater can also cope with the most diverse conversions of character encoding, which can also be combined in any way desired: e.g. the TextFileAdapter reads codepage 10000 (Macintosh, western European) and everything must be converted to codepage 20773 (IBM mainframe EBCDIC). Admittedly such a requirement might only arise on rare occasions, but FlowHeater makes this really simple to achieve. Naturally FlowHeater supports all the more commonly encountered codepage groups, including those for MS-DOS, UNIX, Unicode (UTF-7, UTF-8, UTF-16, UTF-32), and so on.
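For comparison, here is what such a codepage round trip looks like with Python's standard codecs. This is a sketch, not FlowHeater itself: Python names the Macintosh western-European codepage "mac_roman", and since it ships no codec for IBM codepage 20773, the EBCDIC codepage "cp500" stands in here purely for illustration.

```python
# Round-trip a western-European string: Mac codepage -> Unicode -> EBCDIC.
# "cp500" is an assumed stand-in for the IBM EBCDIC codepage in the text.
original = "Grüße aus München"
mac_bytes = original.encode("mac_roman")    # bytes as a Macintosh text file
text = mac_bytes.decode("mac_roman")        # back to Unicode
ebcdic_bytes = text.encode("cp500")         # re-encode for an IBM mainframe
assert ebcdic_bytes.decode("cp500") == original
```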
comment by MaryCh · 2017-04-21T16:03:23.107Z · LW(p) · GW(p)
Just a note for future reference. I am reading an anatomy textbook for students specializing in physical training (future coaches and high-school teachers) and loving it. It is simple and has great imagery without that many images (the section on the muscles that ordinarily tug the thigh inwards but can also help rotate it inwards or outwards paints such a vivid picture, and the one on athletes' diaphragms being more developed and better at keeping their abdominal organs from sliding and putting a load onto the chest cavity when the body is upside down just makes sense).
And I wouldn't have expected the book to be so 'serious', and now I wonder, again, if I am missing out on cheap and solid sources by not looking into applied studies...
Replies from: ChristianKl↑ comment by ChristianKl · 2017-04-21T16:47:05.028Z · LW(p) · GW(p)
PT
I think most of the LW audience doesn't know the abbreviation. I would guess "physical therapy" but it took some thinking.
As I see the subject, anatomy research is extremely underfunded. Universities want to fund research that could produce results that they can resell to big pharma and big pharma has mostly no use for anatomy.
Physical therapy actually has a use for anatomy and therefore their textbooks cover it.
Replies from: MaryCh
comment by MrMind · 2017-04-21T10:04:44.164Z · LW(p) · GW(p)
Do you know the exercise "If you could send only one sentence to your former self in the past, what would it be?"
I think I've finally found mine: "there are only two superpowers in real life: courage and hard work."
That is because, upon reflection, I've come to the conclusion that I spent the majority of my teen years feeling inadequate and day-dreaming about superpowers, or becoming a secret agent, etc., only to discover many years later that almost everybody feels inadequate, that I was quite adequate if only I would try, and that if I had acted instead of day-dreaming, I would now be in a much happier position.
comment by Pimgd · 2017-04-20T10:50:16.630Z · LW(p) · GW(p)
What's Chesterton's Fence for "Don't play with your food"?
I did some thinking and googling and found that...
- The food might get cold
- The food might go places it shouldn't go, making things dirty (or you might get dirty hands by playing with your food and then things get dirty that way)
- It's disrespectful to the chef (table manners)
- It's annoying to the other people who are eating so please just stop
- Touching the food might not be very hygienic
What reasons am I missing? If you're eating on your own, and the food doesn't go cold, is playing with it bad?
Replies from: gjm, Lumifer, Dagon, RolfAndreassen, Viliam↑ comment by gjm · 2017-04-20T15:18:18.582Z · LW(p) · GW(p)
When I say anything like that to my daughter it's usually either (1) because she's playing with her food instead of eating it and we would prefer the meal to be of finite time and actually result in her getting the nutrition she needs, or (2) because what she's doing is annoying to other people at the table, or (3) because doing the same in other situations is likely to (a) annoy people and/or (b) make them think worse of her, which we would prefer to avoid. Note that 3b is (at least partly) a self-fulfilling-prophecy thing: "playing with your food" is socially unacceptable, so people try to stop their children doing it, so it continues to be seen as socially unacceptable. Which is kinda silly, but the fact that it's silly doesn't make it go away. Oh, also (4) because it may end up with the food going on the floor or the table or her clothes, all of which are suboptimal for one reason or another.
I can't think of any particular reason why it should be bad to play with your food if you're on your own, you don't care how long you take, and you're confident of not making a mess.
↑ comment by Lumifer · 2017-04-20T14:52:05.576Z · LW(p) · GW(p)
Not sure there is one.
I see "Don't play with your food" as being in the same category as "Sit still! Don't fidget! Be quiet! Don't touch this!" which are all basically "Don't do anything which might end up with me expending more energy on you" mixed with a dose of "I am your boss so you do as I say".
And yes, I agree with Dagon that parents are often judged by how well-behaved their kids are, so there is pressure to train them to behave as small Victorian-era adults.
Replies from: Pimgd↑ comment by RolfAndreassen · 2017-04-24T01:34:02.366Z · LW(p) · GW(p)
It's disrespectful to people who don't have any food to eat, much less play with. Food is important, and this fact is easily forgotten.
Replies from: Lumifer↑ comment by Lumifer · 2017-04-24T14:54:22.745Z · LW(p) · GW(p)
It's disrespectful to people who don't have any food to eat, much less play with.
Pretty much everything you do in the first world is disrespectful from that point of view. You pick clothes on the basis how fashionable they are? You play games on a computer? You have a pet, YOU GIVE FOOD TO AN ANIMAL?!!
↑ comment by Viliam · 2017-04-20T14:27:08.005Z · LW(p) · GW(p)
It reduces people's motivation to become chefs (the people who are socially permitted to "play" with food). Society needs happy chefs. (Unhappy ones will spit in your food.)
Also, even if you find a way to play with your food safely, there is the meta concern: Think of others, who are less skilled than you (so they cannot play with their food safely), and who will now try to copy your behavior. There may even emerge a social pressure to copy your behavior, if playing with food becomes a socially accepted costly signal of high dexterity or something.
Replies from: Pimgd↑ comment by Pimgd · 2017-04-20T14:52:55.567Z · LW(p) · GW(p)
I am not sure I see or understand the issue that playing with your food is dangerous or anything. Maybe if you start catapulting it or juggling it, but sorting or stacking or making shapes doesn't seem dangerous to me.
I'm also not convinced that people will spit in my food if I play with it -
Hang on: if I write it down like that it just doesn't make any sense at all. First I receive my food and then I play with it, so how are they going to spit in it? Do they watch me and then spit in my dessert? Or do they just start spitting in everyone's food pre-emptively (why?! It's not payback if you do it to everyone)?
I can see another version of your first point: Playing with food is for people who are preparing food only, so if you want to play with your food, come help with preparation next time.
Except if I started making shapes and sorting the alphabet-soup spaghetti, I'd be ladled out of the kitchen for sure.
comment by MrMind · 2017-04-19T07:00:17.528Z · LW(p) · GW(p)
Have you recently changed your estimate about the nearest x-risk?
I've come to believe that now nuclear war > runaway biotech > UFAI, where > means nearer / more probable than.
Possibly a global nuclear war would not be existential to the point of obliterating humanity, but setting it back a couple of millennia seems negative enough to be classified as existential.
↑ comment by ChristianKl · 2017-04-19T13:12:53.763Z · LW(p) · GW(p)
Given rising biotech/chem/drone capability I don't think that nuclear war is the biggest war related x-risk.
The situation in North Korea might be very bad for South Korea but I don't see it as threatening the global stability.
Trump seems to have been very open for listening to the Chinese president. The fact that he changed his opinion about North Korea within 10 minutes of talking to the Chinese president suggests to me that the Chinese president is quite capable of communicating well with Trump. Trump doesn't care about any sacred values, so I imagine he's willing to make deals when China wants to have some islands.
I don't think the recent events in Syria suggest that war with Russia is likely.
Replies from: MrMind, gilch↑ comment by MrMind · 2017-04-21T09:52:18.346Z · LW(p) · GW(p)
The situation in North Korea might be very bad for South Korea but I don't see it as threatening the global stability.
You mean that you think nuclear escalation unlikely, or that even in the case of a nuclear conflict it would stay local? On the whole situation I'm using the outside view, since I have no specific knowledge about that side of the globe. But I would gladly read what you have to say.
Replies from: ChristianKl↑ comment by ChristianKl · 2017-04-21T12:10:28.803Z · LW(p) · GW(p)
North Korea can destroy South Korea with nukes or other weapons. It doesn't have the capability to do more global damage.
China itself has no interest in WWIII. North Korea isn't important to them. China has more trade with South Korea than with North Korea. China doesn't want Western troops on the territory of North Korea but I don't think a Trump administration would want to occupy North Korea anyway. If there are Chinese troops in North Korea I don't think there would be any objection from the US.
China might want to have something in exchange but the kind of things that China wants like sovereignty over islands, are negotiable for Trump.
↑ comment by gilch · 2017-04-20T00:10:59.226Z · LW(p) · GW(p)
As much as the media likes to call North Korea crazy, they're not suicidal. But given their self-inflicted weak position, they're more willing to take extreme risks to survive. A miscalculation could escalate to war. Trump isn't exactly stable either. We cannot afford to let them develop second strike capability, or we risk nuclear blackmail. If crushing sanctions don't work, or don't work fast enough, our only option is a preemptive strike. But their defenses are so entrenched that Trump might be tempted to make it a nuclear preemptive strike. At that point, can we trust Russia and China to stay out of it?
We can't just contain North Korea instead. They've proliferated every weapon system they've ever developed for cash. What's to stop them from selling a few to Islamic terrorists, who as non-state actors with no territory are immune to the MAD doctrine? This is an apocalyptic culture willing to fly airplanes into buildings. Don't think they wouldn't use atomic suicide bombers if they had them. Civilization will have absolutely no defense against this until Musk's self-sustaining Mars colony. That such a regime has nukes at all is already intolerable.
And the worst part is, they don't even need missiles for second strike capability. What's to stop them from loading one into a cargo container, sailing the cargo ship into New York under a foreign flag of convenience and setting it off before they even unload it? Or load one into a crate labeled "farm equipment" on the next cargo plane bound for D.C.? They could plant one in every major city, and then set them off by remote control in the event of a U.S. invasion.
Q. How hard is it to smuggle a nuke into the country?
A. Easy, you hide it in the next bale of marijuana.
Why do we think they haven't already smuggled some in? Well, it would be an extreme risk. They might get caught. And then they'd really be in trouble. Any other nuclear power wouldn't risk it. But North Korea has lasted this long by being willing to take extreme risks. Why else? It might not be a credible deterrent until after they set one off on foreign soil to prove they can. But that also risks a war. But then so did shelling a South Korean fishing village. A bioweapon would be even easier to smuggle in. But it's also not a deterrent until they prove it works.
How does this not threaten global stability?
Replies from: ChristianKl↑ comment by ChristianKl · 2017-04-20T12:03:15.850Z · LW(p) · GW(p)
As much as the media likes to call North Korea crazy, they're not suicidal. But given their self-inflicted weak position, they're more willing to take extreme risks to survive.
To make a good analysis of what North Korea is likely to do it's helpful to think about it being made of a leadership of humans instead of being an abstract country that makes decisions.
North Korea is very much driven by different North Korean actors having to signal to each other how strongly patriotic they are.
At that point, can we trust Russia and China to stay out of it?
Why would we want China to stay out of North Korea in the case of a war? If they put their troops into North Korea to take control of it, that would be a nice outcome. Neither Russia nor China would be interested in having WWIII.
If crushing sanctions don't work, or don't work fast enough, our only option is a preemptive strike.
What exactly are you trying to argue here? That you don't know what an effective North Korea policy that uses tools other than military strikes or sanctions looks like?
Q. How hard is it to smuggle a nuke into the country? A. Easy, you hide it in the next bale of marijuana.
Ports do scan for radioactivity. Uranium is also very heavy, which provides further ways to detect the transport of a nuclear weapon.
I'm also not sure about whether North Korea has the capability of making a decision to deploy a nuclear weapon without US and Chinese intelligence agencies getting to know about it.
It's also not the kind of action that's good for signaling purposes. If agency A in North Korea smuggled the nuke and they want to get admiration from agency B, they have to share information about it. There's also the risk that information about the location of the nuke leaks and the person responsible for its placement gets into trouble as a result.
Apart from that, the fact that North Korea has nuclear weapons is no recent event that warrants any change.
Replies from: gilch↑ comment by gilch · 2017-04-21T01:14:11.018Z · LW(p) · GW(p)
To make a good analysis of what North Korea is likely to do it's helpful to think about it being made of a leadership of humans instead of being an abstract country that makes decisions.
Indeed, when I say "North Korea" I mean the Kim Family Regime. That's the self-inflicted weak position I mentioned. They have to terrorize and indoctrinate the population to stay in power. Any meaningful reforms are poison to the regime, since they prove its illegitimacy. They've painted themselves in a corner. They have to be evil. That's why we can't just have a peace treaty and end the war.
Why would we want China to stay out of North Korea in the case of a war? If they put their troops into North Korea to take control of it, that would be a nice outcome. Neither Russia nor China would be interested in having WWIII.
The ideal outcome is that South Korea takes over, but yes, if China takes over that's still better than the status quo. I meant "stay out of the nuclear confrontation.". If the U.S. unilaterally uses nukes first, what repercussions does that have for the rest of the world? Would that weaken or strengthen the NPT? The MAD doctrine? Would China use the opportunity to take Taiwan? Would China retaliate (even accidentally) against a U.S. ally (like South Korea) for using nukes so close to its territory? Would that escalate?
What exactly are you trying to argue here? That you don't know what an effective North Korea policy that uses tools other than military strikes or sanctions looks like?
And it looks like no-one else does either. We don't have any good options. Containment and "strategic patience" isn't a good option either because the problem is steadily getting worse. North Korea continues to build more weapons. How bad does it have to get? What's the tipping point? That is, at what point will we wish we'd ended the Korean War even at the cost of half of Seoul? The intervention should come before that. But another problem is, we can't get good intelligence. It's an isolated totalitarian state with extensive underground facilities. We can't rely on spies on the ground. We just occasionally learn things from low-ranking defectors. We have spy satellites, but can't see underground from orbit. If our intelligence is that unreliable, then we must intervene at a point long enough before the tipping point to account for our margin of error. What does the end of this story look like?
Ports do scan for radioactivity.
That's a very important point I had not considered, and a possible defense against smuggled nukes. I'm not confident in the technical details though. The alpha and beta radiation is too easily shielded, but at what distance can we distinguish the gamma from background? If it's only a few meters, that's not really helpful. If it's several kilometers, then we could perhaps interdict or sink a cargo ship before it threatens the coast.
But this doesn't apply to a smuggled bioweapon.
Replies from: ChristianKl↑ comment by ChristianKl · 2017-04-21T12:13:47.041Z · LW(p) · GW(p)
Indeed, when I say "North Korea" I mean the Kim Family Regime.
The important thing about the family is that they kill each other. Jang Song-thaek, who was rumored to be the de facto leader of North Korea in 2009, died in 2013. Finding yourself on the wrong side of a struggle inside the regime means death.
Kim Kyong-hui is a member of the Kim Family clan but I doubt her first priority is to worry about the US.
And it looks like no-one else does either. We don't have any good options.
There's the option to trade and thus push for information flow between North Korea and the outside world.
But another problem is, we can't get good intelligence.
Without access to classified intelligence this is really hard to tell. There are drones flying around and there's SIGINT intelligence. Information that's communicated electronically inside of North Korea is subject to interception.
If the U.S. unilaterally uses nukes first, what repercussions does that have for the rest of the world?
That's why the U.S. is very unlikely to nuke first. Even under Trump that's unlikely to happen.
↑ comment by fortyeridania · 2017-04-19T08:19:57.648Z · LW(p) · GW(p)
Yes, I have. Nuclear war lost its top spot to antimicrobial resistance.
Given recent events on the Korean peninsula it may seem strange to downgrade the risk of nuclear war. Explanation:
While the probability of conflict is at a local high, the potential severity of the conflict is lower than I'd thought. This is because I've downgraded my estimate of how many nukes DPRK is likely to successfully deploy. (Any shooting war would still be a terrible event, especially for Seoul, which is only about 60 km from the border--firmly within conventional artillery range.)
An actual conflict with DPRK may deter other aspiring nuclear states, while a perpetual lack of conflict may have the opposite effect. As the number of nuclear states rises, both the probability and severity of a nuclear war rise, so the expected damage rises as the square. The chance of accident or terrorist use of nukes rises too.
Rising tensions with DPRK, even without a war, can result in a larger global push for stronger anti-proliferation measures.
Perhaps paradoxically, because (a) DPRK's capabilities are improving over time and (b) a conflict now ends the potential for a future conflict, a higher chance of a sooner (and smaller) conflict means a lower chance of a later (and larger) conflict.
You say:
I've come to believe that now nuclear war > runaway biotech > UFAI
What was your ranking before, and on what information did you update?
Replies from: gilch, MrMind↑ comment by gilch · 2017-04-19T23:14:06.080Z · LW(p) · GW(p)
Why does antimicrobial resistance rank so high in your estimation? It seems like a catastrophic risk at worst, not an existential one. New antibiotics are developed rather infrequently because they're currently not that profitable. Incentives would change if the resistance problem got worse. I don't think we've anywhere near exhausted antibiotic candidates found in nature, and even if we had, there are alternatives like phage therapy and monoclonal antibodies that we could potentially use instead.
Replies from: fortyeridania↑ comment by fortyeridania · 2017-04-20T04:13:49.812Z · LW(p) · GW(p)
It's true that the probability of an existential-level AMR event is very low. But the probability of any existential-level threat event is very low; it's the extreme severity, not the high probability, that makes such risks worth considering.
What, in your view, gets the top spot?
Replies from: Lumifer, gilch↑ comment by Lumifer · 2017-04-20T04:16:20.063Z · LW(p) · GW(p)
existential-level AMR event
What would that look like? Humanity existed for the great majority of its history without antibiotics.
Replies from: fortyeridania↑ comment by fortyeridania · 2017-04-20T04:31:20.283Z · LW(p) · GW(p)
What would that look like?
Concretely? I'm not sure. One way is for a pathogen to jump from animals (or a lab) to humans, and then manage to infect and kill billions of people.
Humanity existed for the great majority of its history without antibiotics.
True. But it's much easier for a disease to spread long distances and among populations than in the past.
Note: I just realized there might be some terminological confusion, so I checked Bostrom's terminology. My "billions of deaths" scenario would not be "existential," in Bostrom's sense, because it isn't terminal: Many people would survive, and civilization would eventually recover. But if a pandemic reduced today's civilization to the state in which humanity existed for the majority of its history, that would be much worse than most nuclear scenarios, right?
Replies from: Lumifer↑ comment by Lumifer · 2017-04-20T14:38:11.778Z · LW(p) · GW(p)
if a pandemic reduced today's civilization to the state in which humanity existed for the majority of its history
Why would it? A pandemic wouldn't destroy knowledge or technology.
Consider the Black Death -- it reduced the population of Europe by something like a third, I think. Was it a big deal? Sure it was. Did it send Europe back to the time when it was populated by some hunter-gatherer bands? Nope, not even close.
Replies from: gjm↑ comment by gjm · 2017-04-20T15:20:20.434Z · LW(p) · GW(p)
We have a lot of systems that depend on one another; perhaps a severe enough pandemic would cause a sort of cascade of collapse. I'd think it would have to be really bad, though, certainly worse than killing 1/3 of the population.
Replies from: Lumifer↑ comment by Lumifer · 2017-04-20T15:46:00.963Z · LW(p) · GW(p)
I am sure there would be some collapse; the question is how long it would take to rebuild. I would imagine that the survivors would just abandon large swathes of land and concentrate themselves. Having low population density overall is not a problem -- look at e.g. Australia or Canada.
But we are now really in movie-plots land. Are you prepared for the zombie apocalypse?
↑ comment by gilch · 2017-04-21T02:01:32.595Z · LW(p) · GW(p)
What, in your view, gets the top spot?
I'm not sure how to rank these if the ordering relation is "nearer / more probable than". Nuclear war seems like the most imminent threat, and UFAI the most inevitable.
We all know the arguments regarding UFAI. The only things that could stop the development of general AI at this point are themselves existential threats. Hence the inevitability. I think we already agree that FAI is a more difficult problem than superintelligence. But we might underestimate how much more difficult. The naive approach is to solve ethics in advance. Right. That's not going to happen in time. Our best known alternative is to somehow bootstrap machine learning into solving ethics for us without it killing us in the meantime. This still seems really damn difficult.
We've already had several close calls with nukes during the Cold War. The USA has been able to reduce her stockpile since the collapse of the Soviet Union, but nukes have since proliferated to other countries. (And Russia, of course, still has leftover Soviet nukes.) If the NPT system fails due to the influence of rogue states like Iran and North Korea, there could be a domino effect as the majority of nations that can afford it race to develop arms to counter their neighbors. This has arguably already happened in the case of Pakistan countering India, which didn't join the NPT. Now notice that Iran borders Pakistan. How long can we hold the line there?
I should also point out that there are risks even worse than existential ones, which Bostrom called "hellish", meaning that a human extinction event would be a better outcome than a hellish one. A perverse kind of near miss with AI is the most likely to produce such an outcome: the AI would have to be friendly enough not to kill us all for spare atoms, and yet not friendly enough to produce an outcome we would consider desirable.
There are many other known existential risks, and probably some that are unknown. I've pointed out that AMR seems like a low risk, but I also think bioweapons are the next most imminent threat after nukes. Nukes are expensive. We can kind of see them coming and apply sanctions. We've developed game-theoretic strategies to make use of the existing weapons unlikely. But bioweapons will be comparatively cheap and stealthy.
Even so, I expect any such catastrophe to be self-limiting. The more deadly an infection, the less it spreads. Zombies are not realistic. There would have to be a long incubation period or an animal reservoir, which would give us time to detect and treat it. One would have to engineer a pathogen very carefully to overcome these many limitations and reach the level of an existential threat, but most actors motivated to produce bioweapons would consider the self-limiting nature a benefit, to avoid blowback. These limitations are also what make me think that AMR events are a lesser risk than bioweapons.
↑ comment by MrMind · 2017-04-21T09:55:12.522Z · LW(p) · GW(p)
What was your ranking before, and on what information did you update?
Well, before it was: runaway bioweapon > UFAI > nuclear extinction, but the recent news about the international situation made me update. As I said elsewhere, I'm adopting the outside view on all these subjects, so I will gladly stand corrected.