I didn't say I had an answer. I only said it can be an interesting dilemma.
That's true, but the change a strong AI would make would probably be completely irreversible and unmodifiable.
This brings up an interesting ethical dilemma. If strong AI is ever possible, it will probably be designed with the values of what you described as a small minority. Does this small minority have the ethical right to enforce upon the majority a new world which goes against their values?
I usually look out for the surveys, but until I opened this article I never even knew there was a survey for this year... so yeah, poor advertising.
"services that go visit the customer outcompete ones that the customer has to go visit" - and what does this have to do with self-driving cars? Whether the doctor has to actively drive the car to travel to the patient, or can just sit there in the car while the car drives all the way, the same time is still lost due to the travel, and the same fuel is still used up. A doctor or a hairdresser would be able to spend significantly less time with customers, if most of the working day would be taken up by traveling. And what about all the tools which have to be carried inside the customer's house?
And self-driving hotel rooms? What, are we in the Harry Potter world, where things can be larger on the inside than on the outside?
I know the first one has been mentioned on this site; I've read about it plenty of times, but it was not named as such. Therefore, if you use a rare term (especially one you made up yourself), it's advisable to also explain what it means.
Could you please add some links for "Hacker's joke" and "Indexical blackmail"? Both use words common enough that a Google search doesn't yield obvious results.
Another Christian here, raised as a Calvinist, though I consider myself more of a non-denominational, ecumenical one, with some very slight deist tendencies.
I don't want to sound rude, but I don't know how to put it better: if you think you have to choose between Christianity and science, you have very incomplete information about what Christianity is, and also incomplete knowledge of the history of science itself. I wonder how many who call themselves Bayesians know that Bayes was a very devout Christian, like many other founders of modern science who were also philosophers and theologians.
This "Christianity is the enemy of rational thought" idea seems to be relatively recent, and is probably caused or at least magnified by the handful young earth creationists being very loud.
Why there are so few committed Christians on this site can be attributed, among other factors, to how this community started. Reading the earliest posts, it seems that almost every single one of them was a rant against Christianity. No wonder this community mostly attracted atheists, at least in the beginning.
Christianity doesn't mean, and shouldn't mean, endless attempts to find a mathematical proof of God's existence, or a vicious fight against those who claim to have found mathematical proofs of God's non-existence.
I want to converse and debate with rationalists who despite their Bayesian enlightenment choose to remain in the flock.
I would love to speak with them, to know exactly why they still believe and how
I'll try an example to give back at least some part of the feeling. Let's say you enjoy listening to the songs of birds at dawn. (If you actually don't, then imagine something else you enjoy which is not based on rationality: the smell of fresh flowers, your favorite musical instrument, or looking at a great painting.)
Would you stop enjoying listening to the singing birds, would you stop finding it beautiful, if someone explained to you that, scientifically, they are just waves formed by ordinary molecules bumping into each other, just mechanical vibrations, and you shouldn't find anything more in them? Or would you stop enjoying it if someone pointed out that there were some horrible criminals hundreds of years ago on the other side of the planet who also claimed to enjoy listening to the songs of birds? Would you stop enjoying it if someone pointed out that there is no rational explanation for why you would find this vibration of the air more beautiful than any other vibration of the air? And, more importantly, would you find the singing of birds suddenly horrible and disgusting, just because you developed a greater understanding of a scientific topic? (I'm not claiming Christianity is merely a set of thoughts to find pleasure or refuge in; this was only an example of how something which is not based on rationality can be compatible with rationality.)
If you make 100 loaves and sell them for 99 cents each, you've provided 1 dollar of value to society, but made 100 dollars for yourself.
Not 99 dollars?
Anyone who is reading this should take this survey, even if you don't identify as an "effective altruist".
Why? The questions are centered too much not only on effective altruists, but also on left- or far-left-leaning ideologies. I stopped filling it out when it assumed that only movements from that single end of the political spectrum count as social movements.
Even with a limited AGI with very specific goals (build 1000 cars), the problem is not automatically solved.
The AI might deduce that if humans still exist, there is a higher than zero probability that a human will prevent it from finishing the task, so to be completely safe, all humans must be killed.
Those "very real, very powerful security regimes around the world" are surprisingly inept at handling a few million people trying to migrate to other countries, and similarly inept at handling the crime waves and the political fallout generated by it.
And if you underestimate how much of a threat a mere "computer" could be, read the "Friendship is Optimal" stories.
This is a well-presented article, and even though most (or maybe all) of the information is easily available elsewhere, it is a well-written summary. It also covers aspects which are not talked about much, or which are often misunderstood. Especially the following one:
Debating the beliefs is a red herring. There could be two groups worshiping the same sacred scripture, and yet one of them would exhibit the dramatic changes in its members, while the other would be just another mainstream faith with boring compartmentalizing believers; so the difference is clearly not the scripture itself.
Indeed, the beliefs are not even close to being among the most important aspects of a cult. A cult is not merely a group which believes in something you personally find ridiculous. A cult can even have a stated core belief which is objectively true, or a universally accepted good thing, like protecting the environment or world peace.
This comment was very insightful, and it made me think that the young-earth creationist I talked about had a similar motivation. Despite this outrageous argument, she is a (relatively speaking) smart and educated person: not academic level, but not grew-up-on-the-streets level either.
I always thought the talking snakes argument was very weak, but being confronted with a very weird argument from a young-earth creationist provided a great example of why:
If you believe in evolution, why don't you grow wings and fly away?
The point here is not about the appeal to ridicule (although it contains a hefty dose of that too). It's about a gross misrepresentation of a viewpoint. Compare the following flows of reasoning:
- Christianity means that snakes can talk.
- We can experimentally verify that snakes cannot talk.
- Therefore, Christianity is false.
and
- Evolution means people can spontaneously grow wings.
- We can experimentally verify that people cannot spontaneously grow wings.
- Therefore, evolution is false.
The big danger in this reasoning is that one can convince oneself of having used the experimental method, or of having been a rationalist (because hey, we can scientifically verify the claim!), without realizing that the verified claim is very different from the claims the discussed viewpoint actually holds.
I've even seen many self-proclaimed "rationalists" fall into this trap. Just as many religious people are reinforced by a "pat on the back" from their peers if they say something which is liked by the community they are in, so can people feel motivated to claim they are rationalists if that causes a pat on the back from people they interact with the most.
Isn't this very closely related to the Dunning-Kruger effect?
I'm not surprised Dawkins makes a cameo in it. The theist in the discussion is a very blunt strawman, much like how Dawkins usually likes to invite the dumbest theists he can find, who say the stupidest things about evolution or global warming, thereby allegedly proving all theists wrong.

I'm sorry if I have offended his fans; I know many readers here admire him. However, I have to state that although I have no doubts about the value of his scientific work and his competence in his field, he does make a clown of himself with all those strawman attacks against theism.
For many people, religion helps a lot in replenishing willpower. Although, from what I've observed, it's less about stopping procrastination and more about not despairing in a difficult or depressing situation. I might even safely guess that for a lot of believers this is among the primary causes of their beliefs.

I know that religious belief on this site is significantly below the offline average. I didn't want to convince anyone of anything; I just pointed out that for many people it helps. Maybe by acknowledging this fact we might understand why.
we'd only really need the 5 big crops + plants for photosynthesis, insects and pollinators in order to survive and thrive
Time and time again it turned out that we had underestimated the complexity of the biosphere. And time and time again our meddling backfired horribly.

Even if we were utterly selfish and had no moral objections, wiping out all but a handful of "useful" species would almost certainly lead to unforeseen consequences ending in the total destruction of the planet's biosphere. We have not yet managed to fully map the role each species plays in the natural balance, but it seems to be very deeply entangled, with everything depending on lots of other species. You cannot just single out a handful of them and expect those to thrive on their own.
True, the scenario is not implausible: a non-hostile alien civilization could arrive which is more efficient than us, and in the long term it would out-compete and out-breed us.
Such non-hostile assimilation is not unheard of in real life. It is happening now (or at least claimed by many to be happening) in Europe, both in the form of the migrant crisis and also in the form of smaller countries fearing that their cultural identities and values are being eroded by the larger, richer countries of the union.
I'm surprised to find such rhetoric on this site. There is an image, now popularized by certain political activists and ideologically-driven cartoons, which depicts the colonization of the Americas as a mockery of the D-Day landing, with peaceful Natives standing on the shore and smiling, while gun-toting Europeans jump out of the ships and start shooting at them. That image is even more false than the racist depictions of the late 19th century glorifying the westward expansion of the USA while vilifying the natives.
The truth is much more complicated than that.
If you look at the big picture, there was no conquest in America like the Mongol invasion. There wasn't even a concentrated "every newcomer versus every native" warfare. The diverse European nations fought among themselves a lot, and the Natives also fought among themselves a lot, both before and after the arrival of the Europeans. Europeans allied themselves with Natives at least as often as they fought against them. Even the history of unquestionably ruthless conquistadors like Cortez didn't feature an army of Europeans set out to exterminate a specific ethnicity: he had only a few hundred Europeans with him, and tens of thousands of Native allies.

If you look at the whole history from the beginning, there was no concentrated military invasion with the intent to conquer a continent. Everything happened over a relatively long period of time. The settlements coexisted peacefully with the natives on many occasions and traded with each other, and when conflict developed between them, it was no different from conflict at any other place on the planet. Conflict develops sooner or later, in the new world just as in the old world. Although there certainly were acts of injustice, the bigger picture is that there was no central "us vs. them", not in any stronger form than how the European powers fought wars among themselves.

The Natives had the disadvantage of the diseases, as other commenters have already stated, but also of smaller numbers, of less advanced societal structures (the civilizations of the Old World needed a lot of time between living in tribes and developing forms of government capable of leading nations of millions), and of inferior technology. The term "out-competed" is much more fitting than "exterminated", which is a very biased and politically loaded word.
You cannot compare the colonization of the Americas to the scenario when a starfleet arrives to the planet and proceeds with a controlled extermination of the population.
If we developed practical interstellar travel, and went to a star system with an intelligent species somewhat below our technological level, our first choice would probably not be annihilating them. Why? Because considering their extermination as the primary choice would not fit our values. And how did we develop our values like this? I guess at least in part it's because we evolved and built our civilizations among plenty of species of animals, some of which we hunted for food (and not all of them to extinction, and even for those which did go extinct, wiping them out was not our goal), some of which we domesticated, and plenty of which we left alone. We also learned that other species besides us have a role in the natural cycle, and it was never in our interest to wipe out other species (except in rare circumstances, when they were a pest or a dangerous disease vector).
Unless the extraterrestrial species are the only macroscopic life-form on their planet, it's likely they evolved among other species and did not exterminate them all. This might lead to them having cultural values about preserving biodiversity and not exterminating species unless really necessary.
First of all, IQ tests aren't designed for high IQ, so there's a lot of noise there and this is probably mainly noise.
Indeed. If an IQ test claims to provide accurate scores outside of the 70 to 130 range, you should be suspicious.
There are so many misunderstandings about IQ in the general population, ranging from claims like "the average IQ is now x" (where x is different from 100), to claims of a famous scientist having had an IQ score over 200, and claims of "some scientists estimating" the IQ of a computer, an animal, or a fictional alien species. Or things as simple as claiming to calculate an IQ score based on a low number (usually less than 10) of trivia questions about basic geography and names of celebrities.
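For context, here is a minimal sketch (assuming the standard norming of IQ to a normal distribution with mean 100 and standard deviation 15) of how rare extreme scores would have to be, which is why an "IQ over 200" cannot come from any real test:

```python
from math import erf, sqrt

def fraction_at_or_above(score, mean=100.0, sd=15.0):
    """Fraction of the population at or above `score`, assuming IQ is
    normed to a normal distribution with the given mean and SD."""
    z = (score - mean) / sd
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))  # normal survival function

for s in (130, 145, 160, 200):
    print(f"IQ >= {s}: about 1 in {round(1 / fraction_at_or_above(s)):,}")

# A score of 200 comes out rarer than 1 in tens of billions, i.e. rarer
# than one person among everyone alive today.
```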
Also, many people on this site seem to have come from a liberal / libertarian background, where believing in it is a very popular trend. The survey supports this by presenting the support for BI within each political group.
Isn't the "Do I live in a simulation?" question practically indistinguishable from the question "does God exist?", for a sufficiently flexible definition for "God"?
For the latter, there are plenty of ethical frameworks, as well as incentives for altruism, developed during the history of mankind.
And it seems the community is not interested enough to counter the ten or so accounts which do this... :(
There is something I don't understand. Are people now voting on the person instead of the article? I see that all of Elo's recent activity is massively down-voted, and some of the posts might have deserved it, but certainly not all. I'm just curious: if this post had been written by someone else, would it have been similarly down-voted?
It might not be among the core principles of this site, but it's certainly not an uninteresting topic.
In this case, we should really define "coercion". Could you please elaborate on what you mean by that word?
One could argue that if someone holds a gun to your head and demands your money, it's not coercion, just a game where the expected payoff of not giving the money is smaller than the expected payoff of handing it over.
(Of course, I completely agree with your explanation about taxes. It's just the usage of "coercion" in the rest of your comment which seems a little odd)
Parenting might be even worse, with plenty of contradictions between self-proclaimed experts, one claiming something is very important to do, the other claiming you must never do it under any circumstances.
Has anyone heard of the book "The Egg-Laying Dog" by Beck-Bornholdt? I don't know of an English translation; I freely translated the title from German. It is a book about fallacies in statistics and research, especially in medicine, written in a style comprehensible to the layman.
It discusses at great length the problems plaguing modern research (well, the research of the 1990s, when the book was written, but I doubt very much has changed). For example, the statistical significance required for publication is much more relaxed than it once was. Often a p-value of 5% is enough for publication, so even with perfectly unbiased researchers, without p-hacking or other unethical tricks, there is a huge number of accepted publications around which are utterly rubbish. This is all made much worse by the fact that everyone wants new results: few researchers can get funding by repeating and verifying already published results (unless the publication in question is in every headline), and few researchers are inclined (or supported by the system) to publish negative results.
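To make that concrete, here is a toy simulation (my own sketch, not from the book): thousands of perfectly honest studies of an effect that doesn't exist, each "publishable" whenever it clears p < 0.05.

```python
import random
from math import erf, sqrt

random.seed(0)

def fake_study(n=50):
    """One honest study of a nonexistent effect: compare two samples drawn
    from the SAME distribution, return a two-sided p-value (z-test approx.)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    se = sqrt(2 / n)               # standard error of the mean difference
    z = abs(diff) / se
    return 1 - erf(z / sqrt(2))    # = 2 * normal survival function

studies = 10_000
false_positives = sum(fake_study() < 0.05 for _ in range(studies))
print(f"{false_positives / studies:.1%} of null studies reached p < 0.05")
```

Roughly 5% of pure-noise studies clear the bar, and with negative results rarely published, those are disproportionately the ones that end up in print.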
Let's be conservative and say the ratio is 1 in a billion.
Why?
Why not 1 in 10? Or 1 in 3^^^^^^^^3?
Choosing an arbitrary probability has a good chance of leading us unknowingly into circular reasoning. I've seen too many cases of using, for example, Bayesian reasoning about something we have no information about, which went like "assuming the initial probability was x", getting some result after a lot of calculations, then defending the result as accurate because the Bayesian rule was applied, so it must be infallible.
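A toy illustration of the problem, with made-up numbers: run the same evidence through Bayes' rule with different "assumed" priors, and the arbitrary prior comes back out dressed up as a calculated result.

```python
def posterior(prior, likelihood_ratio):
    """Posterior probability via Bayes' rule on odds, where
    likelihood_ratio = P(evidence | H) / P(evidence | not H)."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

# The same evidence (likelihood ratio 100) under three arbitrary priors:
for prior in (1e-9, 0.1, 1 / 3):
    print(f"prior {prior:g} -> posterior {posterior(prior, 100):.6f}")
# prior 1e-09 -> posterior ~1e-07
# prior 0.1   -> posterior ~0.917
# prior 1/3   -> posterior ~0.980
```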
And why should we be utility-maximizing agents?
Assume the following situation: you are very rich. You meet a poor old lady in a dark alley who carries a purse with some money in it, which is a lot from her perspective. Maybe it's all her savings; maybe she just got lucky once and received it as a gift or as alms. If you mug her, nobody will ever find out, and you get to keep that money. Would you do it? As a utility-maximizing agent, based on what you just wrote, you should.
Would you?
There are some people who think punishment and reward work linearly.
If I remember correctly (please correct me if I'm wrong), even Eliezer himself believes that if we assign a pain value in the single digits to very slightly pinching someone so they barely feel anything, and a pain value in the millions to torturing someone with the worst possible torture, then you should choose torturing a thousand people over slightly pinching half of the planet's inhabitants, if your goal is to minimize suffering. With such logic, you could assign rewards and punishments to anything and calculate pretty strange things from them.
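With illustrative numbers of my own (single digits for the pinch, millions for the torture, as above), the linear bookkeeping behind that conclusion is trivial:

```python
# Toy numbers showing how linear pain aggregation reaches that verdict.
pinch_pain   = 5              # single-digit pain value per barely-felt pinch
torture_pain = 1_000_000      # pain value of the worst possible torture

pinched  = 3_500_000_000      # roughly half the planet's population
tortured = 1_000

total_pinching = pinch_pain * pinched      # 1.75e10
total_torture  = torture_pain * tortured   # 1.0e9

print(total_torture < total_pinching)      # True: torture "minimizes suffering"
```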
Another problem would be that unless this system suddenly and magically got applied to the whole world, it would not be competitive. It cannot grow from a small set of members, because the limits it imposes would hinder exactly those who would have contributed the most to the size and power of the economy. By shrinking your economy, you become less competitive against those who don't adopt the new system.
I fear some people will quickly learn how to game the system. No wonder our current society is so complicated: every time a group came up with a simple and brilliant way to create the perfect utopia, it always failed miserably.
(also, try selling your idea to the average voter, I would love to see their faces when you mention "logarithm of total social product")
Cars in the 1930s didn't have crumple zones like modern cars do. Also, in the city cars don't move as fast as on the freeway. Even a small difference might decide between life and death.
I would suggest giving the story the benefit of the doubt. It has to stay at least somewhat true to the style of the comics while exploring the world in a more serious and realistic tone, and it manages that balance quite well; it's worth reading.
Imagine that you are literally the first organism who by random mutation achieved a gene for "helping those who help you"
Not all information is encoded genetically. Many kinds of information have to be learned from the parents or from society.
One problem I can see at first glance is that the article doesn't read like a Wikipedia article, but like a textbook or part of a publication. The goal of a Wikipedia article should be for a wide audience to understand the basics of something, not a treatise only experts can comprehend.
What you wrote seems to be an impressive work, but it should be simplified (or at least its introduction should be), so that even non-experts have a chance to learn at least what it is about.
It's not only in the social sciences that this phenomenon is common. The most striking examples I've seen were in medicine. An article is published, for example "supplement xyz slightly reduces a few of the side effects encountered during radiotherapy used in cancer treatment", which is then spread in the media and on social networks as "What the medical industry doesn't want you to know: supplement xyz instantly cures all forms of cancer!". And often there is a link to the original publication, but people still believe the headline and forward it. What's even sadder, many people probably then buy that supplement and don't seek medical help, believing that it alone will cure them.
If this were enough to prove the effectiveness of rain-dancing, then we would develop 30 different styles of rain dance, test each of them, and with a very high chance get p<0.05 on at least one of them.
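The arithmetic behind that "very high chance", assuming the 30 tests are independent:

```python
styles = 30
alpha = 0.05
p_at_least_one = 1 - (1 - alpha) ** styles
print(f"{p_at_least_one:.1%}")  # about 78.5%: near-certain "proof" from noise alone
```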
Sadly, the medical industry is full of such publications, because publishing new ideas is rewarded more than reproducing already published experiments.
Since then I found a partially relevant, but very simple and effective "puzzle".
There are four cards in front of you on the desk. It is known that every card has a numerical digit on one side and a letter from the English alphabet on the other side.
You have to verify the theory that "if one side of the card has a vowel, the other side has an even number", and you are only allowed to flip two cards.
The cards in front of you are:
A T 7 2
Which cards will you flip?
(I wrote "partially relevant" because this is not an example of an unfalsifiable theory. The theory is falsifiable and the puzzle is solvable; the main point is that most people will pick the wrong answer, because they will not try to falsify the theory.)
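(Spoiler below, for anyone who wants to check their answer: a minimal sketch of the falsification logic, showing which flips can actually refute the rule.)

```python
VOWELS = set("AEIOU")

def can_falsify(visible):
    """Can flipping this card possibly reveal a counterexample to
    'if one side has a vowel, the other side has an even number'?"""
    if visible.isalpha():
        # Hidden side is a digit: only a vowel can be refuted
        # (by an odd digit on its back).
        return visible in VOWELS
    # Hidden side is a letter: only an odd digit can be refuted
    # (by a vowel on its back); an even digit is consistent with anything.
    return int(visible) % 2 == 1

for card in ("A", "T", "7", "2"):
    print(card, "->", "worth flipping" if can_falsify(card) else "uninformative")
# A -> worth flipping   (an odd number on the back falsifies the rule)
# T -> uninformative    (the rule says nothing about consonants)
# 7 -> worth flipping   (a vowel on the back falsifies the rule)
# 2 -> uninformative    (the rule never claimed even numbers imply vowels)
```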
I agree, but I see a connection to falsifiability in that most people don't even try to falsify their theories in this game, even when it would be possible.
A much better example than the 2-4-6 game would be one where the most obvious hypothesis was unfalsifiable.
This and Russell's teapot are just unverifiable claims, not attempts to understand how a system works that fail because of an innocent mistake.
Besides, they have strong ideological undertones, so all they would manage to do is cater to the egos of those who agree with their ideological implications and anger those who don't. They won't really convince anyone.
I have often encountered (when discussing politics, theology, or similar subjective topics) a fallacy which is similar to this one, or which can maybe be seen as its reverse.
- A: ice is hot, therefore 2+2=4
- B: No, ice is not hot, but even if it was, it still wouldn't be a good proof for 2+2=4
- A: So you don't believe in the obvious truth that 2+2=4?
Also, sometimes A might try to prove 2+2=5 with the same strategy.
Not necessarily. One might sincerely believe in the core values promoted by Christianity ("Do unto others as you would have them do unto you") without being a biblical literalist. Christianity includes a wide spectrum of views, not only how some people define it, which might even be just a parody of Christianity.
To summarize: I don't know her, so I cannot judge whether she's just lying for a social benefit or not, but I find it plausible that she might not be lying, or might not behave like this solely as a facade for social benefit.
You are right, I meant bihacking, my mistake.
My concern was based on the observation of how the word "phobia" (especially in the cases of homophobia and xenophobia) is increasingly applied to cases of mild dislike, or even to cases of failing to show open support.
I fear a time will come when people who don't want to try bihacking will be labeled as homophobic. And that will just further dilute the term.
Besides saying that I have taken the survey...
I would also like to mention that predicting the probabilities of unobservable concepts was the hardest part for me. Of course, there are some I believe in more than others, but still, any probability besides 0% or 100% seems really strange to me. For something like being in a simulation, if I believed it but had some doubts, saying 99%, or if I didn't believe it but was open to it, saying 1%, both seem so arbitrary and odd to me. 1% is really huge in the scope of very probable or very improbable concepts which cannot be tested yet (and some may never be).
... before losing my sanity trying to choose percentages I would still find plausible at least a few minutes later, I had to fill them in based on my current gut feelings instead of Fermi-estimation-like calculations.
Please explain what you mean by saying "it is easier to...".
Judging by the examples, for me the opposite seems to be much easier, if we define easiness as how easy it is to identify with a view, select a view, or represent a view among other people.
Or do you use the term to mean "it will be more useful for me"? For the average person, it is much easier to identify with a label, because it signifies loyalty to a well-defined group of people, which can lead to benefits within that group.
Saying "I'm a democrat" or "I'm a liberal" or "I'm a conservative" makes it much easier for other people who also identify with that group to give you recognition, while saying "I am a seeker of accurate world-models, whatever those turn out to be" will probably lead to confusion or even misunderstandings.
Even if we are not talking about expressing your views to others, but about formulating your views for yourself, for most people labels still seem much easier than coming up with their own definitions of beliefs. If we talk about easiness, it's much easier to choose from existing templates than to define a custom one.
However, it might happen that I just misunderstood you because of how we interpret the meaning of "easiness".
Insurance for small consumer products is not rational for the buyer, for the very reasons presented in the question. If you can afford the loss of the item, it's better not to buy insurance and simply buy the item again if it is lost or destroyed. The reason insurance companies still make money on extended warranties for consumer products is that they have good marketing and people are not perfectly rational. Gambling, lotteries, etc. exist for the same reasons, despite having a negative expected value.
However, if you cannot afford the loss, it is advantageous to buy insurance. There are things which people own but cannot replace on short notice, and whose loss they would suffer from greatly: houses, for example, or business-crucial items. You can afford to pay the insurance, but cannot afford losing the item in question, and taking out a loan to replace it might be much more expensive than the insurance.
There are situations when losing something might cost you much more than its monetary value. Losing your house might make you homeless. Losing your car, if you require it for your job, might cost you your job. If you make your living with an expensive machine, losing it might put you out of business. And not being able to afford an expensive operation might cost you your life, if you don't have the health insurance which would have paid for it.
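A toy expected-utility calculation (made-up numbers, with log utility as the usual stand-in for "cannot afford the loss") shows how insurance can be worth buying even when its expected monetary value is negative:

```python
from math import log

wealth  = 100_000   # total wealth, house included (hypothetical numbers)
loss    = 80_000    # value of the house
p_loss  = 0.01      # chance of losing it this year
premium = 1_000     # note: more than the expected loss of 0.01 * 80_000 = 800

def expected_log_utility(insured):
    if insured:
        # Certain outcome: pay the premium, any loss is covered.
        return log(wealth - premium)
    return (1 - p_loss) * log(wealth) + p_loss * log(wealth - loss)

print("insured:  ", expected_log_utility(True))    # ~11.5028
print("uninsured:", expected_log_utility(False))   # ~11.4968

# The insured case wins despite the premium exceeding the expected loss:
# a single loss of 80% of your wealth hurts far more than its "fair price"
# suggests, which is exactly the "cannot afford the loss" situation.
```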