Posts

Reaching out to people with the problems of friendly AI 2017-05-16T19:30:49.689Z
How to provide a simple example to the requirement of falsifiability in the scientific method to a novice audience? 2016-04-11T21:26:31.130Z
Are we failing the ideological Turing test in the case of ISIS? (a crazy ideas thread) 2016-01-09T16:42:59.906Z
In what language should we define the utility function of a friendly AI? 2015-04-05T22:14:58.268Z
The Galileo affair: who was on the side of rationality? 2015-02-15T20:52:32.129Z

Comments

Comment by Val on Popular religions suggest extrapolated volition is non-existence and wireheading · 2018-02-14T10:13:53.685Z · LW · GW

I didn't say I had an answer. I only said it can be an interesting dilemma.

Comment by Val on Popular religions suggest extrapolated volition is non-existence and wireheading · 2018-02-13T20:54:38.825Z · LW · GW

That's true, but the change a strong AI would make would probably be completely irreversible and unmodifiable.

Comment by Val on Popular religions suggest extrapolated volition is non-existence and wireheading · 2018-02-13T19:29:51.823Z · LW · GW

This brings up an interesting ethical dilemma. If strong AI ever becomes possible, it will probably be designed with the values of what you described as a small minority. Does this small minority have the ethical right to enforce a new world upon the majority, one which goes against their values?

Comment by Val on HOWTO: Screw Up The LessWrong Survey and Bring Great Shame To Your Family · 2017-10-09T21:10:07.905Z · LW · GW

I usually look out for the surveys, but until I opened this article I never even knew there was a survey for this year... so yeah, poor advertising.

Comment by Val on New business opportunities due to self-driving cars · 2017-09-10T18:24:18.418Z · LW · GW

"services that go visit the customer outcompete ones that the customer has to go visit" - and what does this have to do with self-driving cars? Whether the doctor has to actively drive the car to travel to the patient, or can just sit there in the car while the car drives all the way, the same time is still lost due to the travel, and the same fuel is still used up. A doctor or a hairdresser would be able to spend significantly less time with customers, if most of the working day would be taken up by traveling. And what about all the tools which have to be carried inside the customer's house?

And self-driving hotel rooms? What, are we in the Harry Potter world where things can be larger on the inside than on the outside?

Comment by Val on Mini map of s-risks · 2017-07-09T09:15:38.935Z · LW · GW

I know the first one has been mentioned on this site; I've read about it plenty of times, but it was not named as such. Therefore, if you use a rare term (especially one you made up yourself), it's advisable to also explain what it means.

Comment by Val on Mini map of s-risks · 2017-07-08T20:46:37.608Z · LW · GW

Could you please put some links to "Hacker's joke" and "Indexical blackmail"? Both use words common enough to not yield obvious results for a google search.

Comment by Val on Any Christians Here? · 2017-06-22T16:54:37.389Z · LW · GW

Another Christian here, raised as a Calvinist, but I consider myself more of a non-denominational, ecumenical one, with some very slight deist tendencies.

I don't want to sound rude, but I don't know how to formulate it in a better way: if you think you have to choose between Christianity and science, you have very incomplete information about what Christianity is about, and also incomplete knowledge of the history of science itself. I wonder how many who call themselves Bayesians know that Bayes was a very devout Christian, similar to many other founders of modern science who were also philosophers and theologians.

This "Christianity is the enemy of rational thought" idea seems to be relatively recent, and is probably caused or at least magnified by the handful young earth creationists being very loud.

The reason there are so few committed Christians here on this site can be attributed, among other factors, to how this community started. Reading the earliest posts, it seems that almost every single one of them was a rant against Christianity. No wonder this community mostly attracted atheists, at least in the beginning.

Christianity doesn't mean, and shouldn't mean, trial after trial to find a mathematical proof of God's existence and a vicious fight against those who claim to have found mathematical proofs of God's non-existence.

I want to converse and debate with rationalists who despite their Bayesian enlightenment choose to remain in the flock.

I would love to speak with them, to know exactly why they still believe and how

I'll try an example to give back at least some part of the feeling. Let's say you enjoy listening to the songs of birds at dawn. (If you actually don't, then imagine something else, something you enjoy which is not based around rationality, like the smell of fresh flowers, or your favorite musical instrument, or looking at a great painting.)

Would you stop enjoying listening to the singing birds, would you stop finding it beautiful, if someone explained to you that scientifically they are just waves formed by ordinary molecules bumping into each other, just mechanical vibrations, and you shouldn't find anything more in them? Or would you stop enjoying it if someone pointed out to you that there were some horrible criminals hundreds of years ago on the other side of the planet who also claimed to enjoy listening to the songs of birds? Would you stop enjoying it if someone pointed out to you that there is no rational explanation for why you would find this vibration of the air more beautiful than any other vibration of the air? And, more importantly, would you find the singing of birds suddenly horrible and disgusting, just because you developed a greater understanding of a scientific topic? (I'm not claiming Christianity is merely a form of thought to find pleasure or refuge in; this was only an example of how something which is not based on rationality can be compatible with rationality.)

Comment by Val on Open thread, May 15 - May 21, 2017 · 2017-05-19T09:29:10.108Z · LW · GW

If you make 100 loaves and sell them for 99 cents each, you've provided 1 dollar of value to society, but made 100 dollars for yourself.

Not 99 dollars?

Comment by Val on The 2017 Effective Altruism Survey - Please Take! · 2017-04-28T14:45:10.614Z · LW · GW

Anyone who is reading this should take this survey, even if you don't identify as an "effective altruist".

Why? The questions are too centered not only on effective altruists, but also on left- or far-left-leaning ideologies. I stopped filling it out when it assumed that only movements of that single political spectrum count as social movements.

Comment by Val on How AI/AGI/Consciousness works - my layman theory · 2017-03-09T19:59:48.725Z · LW · GW

Even with a limited AGI with very specific goals (build 1000 cars), the problem is not automatically solved.

The AI might deduce that if humans still exist, there is a higher than zero probability that a human will prevent it from finishing the task, so to be completely safe, all humans must be killed.

Comment by Val on Allegory On AI Risk, Game Theory, and Mithril · 2017-02-15T18:21:48.692Z · LW · GW

Those "very real, very powerful security regimes around the world" are surprisingly inept at handling a few million people trying to migrate to other countries, and similarly inept at handling the crime waves and the political fallout generated by it.

And if you underestimate how much of a threat a mere "computer" could be, read the "Friendship is Optimal" stories.

Comment by Val on How to talk rationally about cults · 2017-01-08T23:44:42.825Z · LW · GW

This is a well-presented article, and even though most (or maybe all) of the information is easily available elsewhere, this is a well-written summary. It also includes aspects which are not talked about much, or which are often misunderstood. Especially the following one:

Debating the beliefs is a red herring. There could be two groups worshiping the same sacred scripture, and yet one of them would exhibit the dramatic changes in its members, while the other would be just another mainstream faith with boring compartmentalizing believers; so the difference is clearly not the scripture itself.

Indeed, the beliefs are not even close to being among the most important aspects of a cult. A cult is not merely a group which believes in something you personally find ridiculous. A cult can even have a stated core belief which is objectively true, or a universally accepted good thing, like protecting the environment or world peace.

Comment by Val on Open thread, Jan. 02 - Jan. 08, 2017 · 2017-01-05T21:56:12.634Z · LW · GW

This comment was very insightful, and made me think that the young-earth creationist I talked about had a similar motivation. Despite this outrageous argument, she is a (relatively speaking) smart and educated person. Not academic level, but not grown-up-on-the-streets level either.

Comment by Val on Open thread, Jan. 02 - Jan. 08, 2017 · 2017-01-03T21:36:27.872Z · LW · GW

I always thought the talking snakes argument was very weak, and being confronted by a very weird argument from a young-earth creationist provided a great parallel to it:

If you believe in evolution, why don't you grow wings and fly away?

The point here is not about the appeal to ridicule (although it contains a hefty dose of that too). It's about a gross misrepresentation of a viewpoint. Compare the following flows of reasoning:

  • Christianity means that snakes can talk.
  • We can experimentally verify that snakes cannot talk.
  • Therefore, Christianity is false.

and

  • Evolution means people can spontaneously grow wings.
  • We can experimentally verify that people cannot spontaneously grow wings.
  • Therefore, evolution is false.

The big danger in this reasoning is that one can convince oneself of having used the experimental method, or of being a rationalist. Because hey, we can scientifically verify the claim! All without realizing that the verified claim is very different from the claims the discussed viewpoint actually holds.

I've even seen many self-proclaimed "rationalists" fall into this trap. Just as many religious people are reinforced by a "pat on the back" from their peers if they say something which is liked by the community they are in, so can people feel motivated to claim they are rationalists if that earns a pat on the back from the people they interact with the most.

Comment by Val on Open thread, Jan. 02 - Jan. 08, 2017 · 2017-01-03T20:58:11.003Z · LW · GW

Isn't this very closely related to the Dunning-Kruger effect?

Comment by Val on If Atheists Had Faith · 2016-11-29T15:37:03.298Z · LW · GW

I'm not surprised Dawkins makes a cameo in it. The theist in the discussion is a very blunt strawman, just as Dawkins usually likes to invite the dumbest theists he can find, who say the stupidest things about evolution or global warming, thereby allegedly proving all theists wrong.

I'm sorry if I might have offended Dawkins; I know many readers here are fans of his. However, I have to state that although I have no doubts about the value of his scientific work and his competence in his field, he does make a clown of himself with all those strawman attacks against theism.

Comment by Val on What do you actually do to replenish your willpower? · 2016-11-06T10:57:52.390Z · LW · GW

For many people, religion helps a lot in replenishing willpower. Although, from what I've observed, it's less about stopping procrastination and more about not despairing in a difficult or depressing situation. I might even safely guess that for a lot of believers this is among the primary causes of their beliefs.

I know that religious belief on this site is significantly below the offline average. I didn't want to convince anyone of anything; I just pointed out that it helps many people. Maybe by acknowledging this fact we might understand why.

Comment by Val on Open thread, Oct. 24 - Oct. 30, 2016 · 2016-10-25T18:48:25.659Z · LW · GW

we'd only really need the 5 big crops + plants for photosynthesis, insects and pollinators in order to survive and thrive

Time and time again it turned out that we underestimated the complexity of the biosphere. And time and time again our meddling backfired horribly.

Even if we were utterly selfish and had no moral objections, wiping out all but a handful of "useful" species would almost certainly lead to unforeseen consequences ending in the total destruction of the planet's biosphere. We have not yet managed to fully map the role each species plays in the natural balance, but it seems to be very deeply entangled, with everything depending on lots of other species. You cannot just keep a handful of them and expect that handful to thrive on its own.

Comment by Val on Open thread, Oct. 24 - Oct. 30, 2016 · 2016-10-25T18:17:51.161Z · LW · GW

True, it is not an implausible scenario that a non-hostile alien civilization arrives which is more efficient than us and, in the long term, out-competes and out-breeds us.

Such non-hostile assimilation is not unheard of in real life. It is happening now (or is at least claimed by many to be happening) in Europe, both in the form of the migrant crisis and in the form of smaller countries fearing that their cultural identities and values are being eroded by the larger, richer countries of the union.

Comment by Val on Open thread, Oct. 24 - Oct. 30, 2016 · 2016-10-25T17:26:30.247Z · LW · GW

I'm surprised to find such rhetoric on this site. There is an image, now popularized by certain political activists and ideologically driven cartoons, which depicts the colonization of the Americas as a mockery of the D-Day landing, with peaceful Natives standing on the shore and smiling while gun-toting Europeans jump out of the ships and start shooting at them. That image is even more false than the racist depictions in the late 19th century glorifying the westward expansion of the USA while vilifying the natives.

The truth is much more complicated than that.

If you look at the big picture, there was no conquest in America like the Mongol invasion. There wasn't even a concentrated "every newcomer versus every native" warfare. The diverse European nations fought among themselves a lot, and the Natives also fought among themselves a lot, both before and after the arrival of the Europeans. Europeans allied themselves with the Natives at least as often as they fought against them. Even the history of unquestionably ruthless conquistadors like Cortez didn't feature an army of Europeans setting out to exterminate a specific ethnicity: he had only a few hundred Europeans with him, and tens of thousands of Native allies.

If you look at the whole history from the beginning, there was no concentrated military invasion with the intent to conquer a continent. Everything happened over a relatively long period of time. The settlements coexisted peacefully with the natives on multiple occasions and traded with each other, and when conflict developed between them it was no different from conflicts at any other place on the planet. Conflict develops sooner or later, in the new world just as in the old world. Although there certainly were acts of injustice, the bigger picture is that there was no central "us vs them", not in any stronger form than how the European powers fought wars among themselves.

The Natives had the disadvantage of the diseases, as other commenters have already stated, but also of smaller numbers, of less advanced societal structures (the civilizations of the Old World needed a lot of time between living in tribes and developing forms of government sufficient to lead nations of millions) and of inferior technology. The term out-competed is much more fitting than exterminated, which is a very biased and politically loaded word.

You cannot compare the colonization of the Americas to a scenario in which a starfleet arrives at the planet and proceeds with a controlled extermination of the population.

Comment by Val on Open thread, Oct. 24 - Oct. 30, 2016 · 2016-10-25T14:10:22.158Z · LW · GW

If we developed practical interstellar travel and went to a star system with an intelligent species somewhat below our technological level, our first choice would probably not be annihilating them. Why? Because considering extermination as the primary choice would not fit our values. And how did we develop our values like this? I guess at least in part it's because we evolved and built our civilizations among plenty of species of animals, some of which we hunted for food (and not all of them to extinction, and even for those which did go extinct, wiping them out was not our goal), some of which we domesticated, and plenty of which we left alone. We also learned that other species besides us have a role in the natural cycle, and it was never in our interest to wipe out other species (except in rare circumstances, when they were a pest or a dangerous disease vector).

Unless the extraterrestrial species is the only macroscopic life-form on its planet, it likely evolved among other species and did not exterminate them all. This might lead to it having cultural values about preserving biodiversity and not exterminating species unless really necessary.

Comment by Val on Article on IQ: The Inappropriately Excluded · 2016-09-19T23:05:57.702Z · LW · GW

First of all, IQ tests aren't designed for high IQ, so there's a lot of noise there and this is probably mainly noise.

Indeed. If an IQ test claims to provide accurate scores outside of the 70 to 130 range, you should be suspicious.

There are so many misunderstandings about IQ in the general population, ranging from claims like "the average IQ is now x" (where x is different from 100), to claims of a famous scientist having had an IQ score over 200, and claims of "some scientists estimating" the IQ of a computer, an animal, or a fictional alien species. Or things as simple as claiming to calculate an IQ score based on a low number (usually less than 10) of trivia questions about basic geography and names of celebrities.

Comment by Val on The progressive case for replacing the welfare state with basic income · 2016-09-13T15:23:03.780Z · LW · GW

Also, many people on this site seem to have come from a liberal / libertarian upbringing, where belief in it is a very popular trend. The survey supports this by presenting the support for BI within each political group.

Comment by Val on [Link] How the Simulation Argument Dampens Future Fanaticism · 2016-09-09T21:23:13.237Z · LW · GW

Isn't the "Do I live in a simulation?" question practically indistinguishable from the question "does God exist?", for a sufficiently flexible definition for "God"?

For the latter, there are plenty of ethical frameworks, as well as incentives for altruism, developed during the history of mankind.

Comment by Val on The call of the void · 2016-08-31T18:57:02.468Z · LW · GW

And it seems the community is not interested enough to counter the ten or so accounts which do this... :(

Comment by Val on The call of the void · 2016-08-31T15:36:33.056Z · LW · GW

There is something I don't understand. Are people now voting on the person instead of the article? I see that all of Elo's recent activity is massively down-voted, and some of the posts might have deserved it, but certainly not all. I'm just curious whether, if this post had been written by someone else, it would have been similarly down-voted.

It might not be among the core principles of this site, but it's certainly not an uninteresting topic.

Comment by Val on Inefficient Games · 2016-08-24T14:39:16.538Z · LW · GW

In this case, we should really define "coercion". Could you please elaborate on what you meant by that word?

One could argue that if someone holds a gun to your head and demands your money, it's not coercion, just a game where the expected payoff of not giving the money is smaller than the expected payoff of handing it over.

(Of course, I completely agree with your explanation about taxes. It's just the usage of "coercion" in the rest of your comment which seems a little odd)

Comment by Val on "Is Science Broken?" is underspecified · 2016-08-12T22:32:09.610Z · LW · GW

Parenting might be even worse, with plenty of contradictions between self-proclaimed experts, one claiming something is very important to do, the other claiming you must never do it under any circumstances.

Comment by Val on "Is Science Broken?" is underspecified · 2016-08-12T21:00:54.609Z · LW · GW

Has anyone heard of the book "The Egg-Laying Dog" by Beck-Bornholdt? I don't know of an English translation; I freely translated the title from German. It is a book about fallacies in statistics and research, especially in medicine, written in a style comprehensible to the layman.

It discusses at great length the problems plaguing modern research (well, the research of the 1990s, when the book was written, but I doubt that very much has changed). For example, the required statistical significance for a publication is much more relaxed than it was a long time ago. Often a p-value of 5% is enough for a publication, so even with perfectly unbiased researchers, without p-hacking or other unethical tricks, there is a huge number of accepted publications around which are utterly rubbish. This is all made much worse by the fact that everyone wants new results, so few researchers can get funding by repeating and verifying already published results (unless the publication in question is in every headline), and also few researchers are inclined (or supported by the system) to publish negative results.
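To make the "utterly rubbish" point concrete, here is a rough back-of-the-envelope sketch. The base rate of true hypotheses and the statistical power are my own illustrative assumptions, not figures from the book; the point is only that a 5% threshold by itself guarantees a sizable share of false positives among published findings.

```python
# Illustrative sketch: share of "significant" results that are false positives
# when only a minority of tested hypotheses are actually true.
alpha = 0.05        # required p-value threshold for publication
power = 0.8         # assumed chance of detecting a real effect
base_rate = 0.1     # assume only 10% of tested hypotheses are actually true

true_positives = base_rate * power            # 0.08 of all studies
false_positives = (1 - base_rate) * alpha     # 0.045 of all studies

share_rubbish = false_positives / (true_positives + false_positives)
print(f"share of 'significant' findings that are false: {share_rubbish:.0%}")
# ~36%: roughly one in three positive results would be wrong even before
# any p-hacking or publication bias makes things worse.
```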

Comment by Val on New Pascal's Mugging idea for potential solution · 2016-08-05T21:56:06.461Z · LW · GW

Let's be conservative and say the ratio is 1 in a billion.

Why?

Why not 1 in 10? Or 1 in 3^^^^^^^^3?

Choosing an arbitrary probability has a good chance of leading us unknowingly into circular reasoning. I've seen too many cases of using, for example, Bayesian reasoning about something we have no information about, which went like "assuming the initial probability was x", getting some result after a lot of calculations, and then defending the result as accurate because Bayes' rule was applied, so it must be infallible.
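A minimal sketch of that worry, with a made-up likelihood ratio, shows how much of such a "result" is just the arbitrary prior echoed back:

```python
# When we genuinely have no information, the posterior largely echoes
# whatever initial probability we plugged in. The likelihood ratio here
# is an arbitrary illustrative assumption.
likelihood_ratio = 100  # suppose the evidence favors the hypothesis 100:1

for prior in (1e-1, 1e-3, 1e-9):
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    posterior = posterior_odds / (1 + posterior_odds)
    print(f"prior {prior:g} -> posterior {posterior:.3g}")
# The same evidence yields posteriors from ~0.9 down to ~1e-7,
# depending entirely on the arbitrarily chosen prior.
```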

Comment by Val on Rationality test: Vote for trump · 2016-06-23T19:07:19.924Z · LW · GW

And why should we be utility maximization agents?

Assume the following situation. You are very rich. You meet a poor old lady in a dark alley who carries a purse with her, with some money which is a lot from her perspective. Maybe it's all her savings, maybe she just got lucky once and received it as a gift or as alms. If you mug her, nobody will ever find it out and you get to keep that money. Would you do it? As a utility maximization agent, based on what you just wrote, you should.

Would you?

Comment by Val on Rationality test: Vote for trump · 2016-06-23T18:59:44.726Z · LW · GW

There are some people who think punishment and reward work linearly.

If I remember correctly (please correct me if I'm wrong), even Eliezer himself believes that if we assign a pain value in the single digits to very slightly pinching someone so they barely feel anything, and a pain value in the millions to torturing someone with the worst possible torture, then you should choose torturing a thousand people over slightly pinching half of the planet's inhabitants, if your goal is to minimize suffering. With such logic, you could assign rewards and punishments to anything and calculate pretty strange things out of that.
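For what it's worth, the arithmetic behind that conclusion looks like this (the specific pain values and the population figure are my own illustrative assumptions, not Eliezer's):

```python
# Linear aggregation of suffering, with illustrative numbers.
pinch_pain = 5                   # "single digits" per pinch
torture_pain = 1_000_000         # "in the millions" per person tortured

pinched_people = 3_500_000_000   # roughly half the planet
tortured_people = 1_000

total_pinch = pinch_pain * pinched_people       # 1.75e10
total_torture = torture_pain * tortured_people  # 1.0e9

print(total_torture < total_pinch)  # True: linear summing says torture "wins"
```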

Comment by Val on Crazy Ideas Thread · 2016-06-19T00:59:01.457Z · LW · GW

Another problem would be that unless this system suddenly and magically got applied to the whole world, it would not be competitive. It can't grow from a small set of members, because the limits it imposes would hinder those who would have contributed the most to the size and power of the economy. By shrinking your economy, you become less competitive against those who don't adopt the new system.

Comment by Val on Crazy Ideas Thread · 2016-06-18T15:07:47.442Z · LW · GW

I fear some people will quickly learn how to game the system. No wonder our current society is so complicated: every time a group came up with a simple and brilliant way to create the perfect utopia, it failed miserably.

(also, try selling your idea to the average voter, I would love to see their faces when you mention "logarithm of total social product")

Comment by Val on rationalfiction.io - publish, discover, and discuss rational fiction · 2016-06-01T21:19:00.201Z · LW · GW

Cars in the 1930s didn't have crumple zones like modern cars do. Also, in the city they don't move as fast as on the freeway. Even a small difference might decide between life and death.

I would suggest giving the story the benefit of the doubt. It must stay at least somewhat true to the style of the comics, but at the same time explore the world in a more serious and realistic tone. And it manages that quite well, it's worth reading.

Comment by Val on Open Thread May 30 - June 5, 2016 · 2016-05-30T15:19:59.564Z · LW · GW

Imagine that you are literally the first organism who by random mutation achieved a gene for "helping those who help you"

Not all information is encoded genetically. Many kinds of information have to be learned from the parents or from society.

Comment by Val on Open Thread May 23 - May 29, 2016 · 2016-05-25T19:24:26.594Z · LW · GW

One problem I can see at first glance is that the article doesn't read like a Wikipedia article, but like a textbook or part of a publication. The goal of a Wikipedia article should be for a wide audience to understand the basics of something, not a treatise only experts can comprehend.

What you wrote seems to be an impressive work, but it should be simplified (or at least the introduction of it), so that even non-experts can have a chance to at least learn what it is about.

Comment by Val on Hedge drift and advanced motte-and-bailey · 2016-05-02T07:31:17.678Z · LW · GW

It's not only in the social sciences that this phenomenon is common. The most striking examples I've seen were in medicine. An article is published, for example "supplement xyz slightly reduces a few of the side effects encountered during radiotherapy used in cancer treatment", which is then reported in the media and on social networks as "What the medical industry doesn't want you to know: supplement xyz instantly cures all forms of cancer!". And often there is a link to the original publication, but people still believe the headline and forward it. And what's even sadder, many people probably then buy that supplement and don't seek medical help, believing that it alone will help.

Comment by Val on How to provide a simple example to the requirement of falsifiability in the scientific method to a novice audience? · 2016-04-19T21:17:34.439Z · LW · GW

If this were enough to prove the effectiveness of rain-dancing, then we could develop 30 different styles of rain dance, test each of them, and with a very high chance get p<0.05 on at least one of them.
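A quick check of that claim (assuming each test is independent and rain-dancing in fact does nothing):

```python
# Chance that at least one of 30 tests of a true null comes out
# "significant" at the p < 0.05 level.
p_at_least_one = 1 - 0.95 ** 30
print(f"{p_at_least_one:.2f}")  # ~0.79
```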

Sadly, the medical industry is full of such publications, because publishing new ideas is rewarded more than reproducing already published experiments.

Comment by Val on How to provide a simple example to the requirement of falsifiability in the scientific method to a novice audience? · 2016-04-19T20:43:34.897Z · LW · GW

Since then I have found a partially relevant, but very simple and effective, "puzzle".

There are four cards in front of you on the desk. It is known that every card has a numerical digit on one side and a letter from the English alphabet on the other side.

You have to verify the theory that "if one side of the card has a vowel, the other side has an even number", and you are only allowed to flip two cards.

The cards in front of you are:

A T 7 2

Which cards will you flip?

(I wrote partially relevant because this is not an example of an unfalsifiable theory. The theory is falsifiable and the puzzle is solvable; the main point is that most people pick the wrong answer because they do not try to falsify the theory.)
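Spoiler: one brute-force way to see the intended answer is to ask, for each visible face, whether its hidden side could possibly falsify the rule. The sketch below is my own illustration, not part of the original puzzle.

```python
# Which cards could falsify "if one side is a vowel, the other side is even"?
VOWELS = set("AEIOU")

def can_falsify(visible):
    """True if some hidden face could pair with this visible face to
    produce a (vowel, odd number) counterexample."""
    if visible.isalpha():
        # A letter card falsifies only if it is a vowel and the hidden digit is odd.
        return visible in VOWELS
    # A digit card falsifies only if it is odd and the hidden letter is a vowel.
    return int(visible) % 2 == 1

cards = ["A", "T", "7", "2"]
print([c for c in cards if can_falsify(c)])  # -> ['A', '7']
```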

Comment by Val on How to provide a simple example to the requirement of falsifiability in the scientific method to a novice audience? · 2016-04-12T13:05:06.937Z · LW · GW

I agree, but I see a connection to falsifiability in that most people don't even try to falsify their theories in this game, even if it would be possible.

A much better example than the 2-4-6 game would be one where the most obvious hypothesis was unfalsifiable.

Comment by Val on How to provide a simple example to the requirement of falsifiability in the scientific method to a novice audience? · 2016-04-12T03:38:16.223Z · LW · GW

This and Russell's teapot are just unverifiable claims, not a study of how a system works that fails because we committed an innocent mistake.

Besides, they have strong ideological undertones, so all they would manage to do is cater to the egos of those who agree with their ideological implications and anger those who don't. They won't really convince anyone.

Comment by Val on The Sally-Anne fallacy · 2016-04-11T21:38:01.892Z · LW · GW

When discussing politics, theology, or similar subjective topics, I have often encountered a fallacy which is similar to this one, or maybe can be seen as the reverse of it.

  • A: ice is hot, therefore 2+2=4
  • B: No, ice is not hot, but even if it was, it still wouldn't be a good proof for 2+2=4
  • A: So you don't believe in the obvious truth that 2+2=4 ?

Also, sometimes A might try to prove 2+2=5 with the same strategy.

Comment by Val on Consider having sparse insides · 2016-04-10T09:14:02.670Z · LW · GW

Not necessarily. One might sincerely believe in the core values promoted by Christianity (do unto others as you would have them do unto you) without being a biblical literalist. Christianity includes a wide spectrum of views, not only how some people define it, which might even be just a parody of Christianity.

To summarize: I don't know her, so I cannot judge whether she's just lying for a social benefit or not, but I find it plausible that she might not be lying, or might not behave like this solely as a facade for social benefit.

Comment by Val on Open Thread April 4 - April 10, 2016 · 2016-04-06T19:58:21.191Z · LW · GW

You are right, I meant bihacking, my mistake.

My concern was based on the observation that the word "phobia" (especially in the cases of homophobia and xenophobia) is increasingly applied to cases of mild dislike, or even to cases of failing to show open support.

Comment by Val on Open Thread April 4 - April 10, 2016 · 2016-04-04T15:01:21.053Z · LW · GW

I fear a time will come when people who don't want to try polyhacking bihacking will be labeled as homophobic. And that will just further dilute the term.

Comment by Val on Lesswrong 2016 Survey · 2016-04-01T14:33:15.820Z · LW · GW

Besides saying that I have taken the survey...

I would also like to mention that predicting the probabilities of unobservable concepts was the hardest part for me. Of course, there are some I believe in more than others, but still, any probability besides 0% or 100% seems really strange to me. For something like being in a simulation, saying 99% if I believed it but had some doubts, or saying 1% if I didn't believe it but was open to it, seems so arbitrary and odd to me. 1% is really huge in the scope of very probable or very improbable concepts which cannot be tested yet (and some may never be).

... before losing my sanity trying to choose percentages I would still find plausible at least a few minutes later, I had to fill them in based on my current gut feelings instead of Fermi-estimation-like calculations.

Comment by Val on Consider having sparse insides · 2016-04-01T14:23:02.027Z · LW · GW

Please explain what you mean by saying "it is easier to...".

Judging by the examples, for me the opposite seems to be much easier, if we define easiness as how easy it is to identify with a view, select a view, or represent a view among other people.

Do you instead use the term to mean "it will be more useful for me"? For the average person, it is much easier to identify oneself with a label, because it signifies loyalty to a well-defined group of people, which can lead to benefits within that group.

Saying "I'm a democrat" or "I'm a liberal" or "I'm a conservative" makes it much easier for other people who also identify with that group to give you recognition, while saying "I am a seeker of accurate world-models, whatever those turn out to be" will probably lead to confusion or even misunderstandings.

Even if we are not talking about expressing your views to others, but about formulating your views for yourself, for most people labels still seem much easier than coming up with their own definitions of beliefs. If we talk about easiness, it's much easier to choose from existing templates than to define a custom one.

However, it might happen that I just misunderstood you because of how we interpret the meaning of "easiness".

Comment by Val on What makes buying insurance rational? · 2016-03-31T22:12:36.178Z · LW · GW

Insurance for small consumer products is not rational for the buyer, for the very reasons presented in the question. If you can afford the loss of the item, it's better not to buy insurance and just buy the item again if it is lost or destroyed. The reason insurance companies still make money from extended warranties for consumer products is that they have good marketing and people are not perfectly rational. Gambling, lotteries, etc. exist for the same reasons, despite having a negative expected value.

However, if you cannot afford the loss, it is advantageous to buy insurance. There are things which people own but cannot replace on short notice, and they would suffer greatly if they lost them: for example, houses, or business-crucial items. You can afford to pay the insurance, but you cannot afford to lose the item in question. Taking out a loan to replace it might be much more expensive than the insurance.

There are situations when losing something might cost you much more than its monetary value. Losing your house might make you homeless. Losing your car, if you require it for your job, might cost you your job. If you make your living with an expensive machine, losing it might put you out of business. Not having enough money to afford an expensive operation might cost you your life if you don't have the health insurance which would have paid for it.
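The same point can be made with expected utility rather than expected monetary value. Here is a minimal sketch with made-up numbers and a logarithmic (risk-averse) utility function; every figure in it is an illustrative assumption:

```python
# Why insurance can be rational despite a negative expected monetary value:
# with concave utility, a small certain premium can beat a small chance of
# a ruinous loss.
import math

wealth = 100_000      # hypothetical net worth
loss = 80_000         # e.g. the house burns down
p_loss = 0.01         # chance of the loss in a given year
premium = 1_000       # note: premium > p_loss * loss, so the insurer profits

def u(w):
    return math.log(w)  # concave utility: each extra dollar matters less

eu_uninsured = (1 - p_loss) * u(wealth) + p_loss * u(wealth - loss)
eu_insured = u(wealth - premium)

print(f"expected utility uninsured: {eu_uninsured:.4f}")
print(f"expected utility insured:   {eu_insured:.4f}")
# With these numbers the insured option wins, even though the premium
# exceeds the expected monetary loss (0.01 * 80,000 = 800 < 1,000).
```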