Posts

Comments

Comment by jeronimo196 on The Failures of Eld Science · 2020-07-25T13:54:17.691Z · LW · GW

There is more to productivity than not engaging in pleasurable hobbies. I am willing to extend EY the benefit of the doubt and believe he has done some cost/benefit analysis regarding his time management.

In any case, the point is moot - he is not publishing fiction anymore.

Comment by jeronimo196 on The Failures of Eld Science · 2020-07-15T15:01:24.921Z · LW · GW

If it is any consolation, I remember reading a post or an Author's Note from EY, saying he won't be publishing any new fiction for fear of reputational losses.

This is why we can't have nice things.

Comment by jeronimo196 on The Power of Intelligence · 2020-07-10T03:12:38.647Z · LW · GW

For every mental strength we confidently point to, there will be an excellent physical strength we could also point to as a proximate cause, and vice versa.

I agree with you. I just find the particulars oddly inspiring - even if we are not the fastest land hunters, we are genetically the most persistent. This is a lesson from biology that bears thinking about.

Also, we could point to our physical strengths, but people usually don't. We collectively have this body image of ourselves as being "squishy", big brains compensating for weak, frail bodies. I like disabusing that notion.

Comment by jeronimo196 on Reductive Reference · 2020-06-29T08:30:26.983Z · LW · GW

I see your point. But if water didn’t always boil at the same temperature, why would we bother inventing thermometers?

We have more need to measure the unpredictable than the predictable.

If there was nothing with constant temperature, thermometers would work very differently. My first instinct was to say they wouldn't work at all. But then I remembered the entire field of economics, so your point stands.

Not every one sees things that way. The more hardline claims require the physical map to exclude others.

Good luck with that. I couldn't calculate the behaviour of the quarks in a single hydrogen atom if my life depended on it.

Comment by jeronimo196 on Dark Side Epistemology · 2020-06-29T07:38:18.219Z · LW · GW

Thank you for this discussion.

I was wrong about grammar and the views of Chalmers, which is worse. Since I couldn't be bothered to read him myself, I shouldn't have parroted the interpretations of someone else.

I now have a better understanding of your position, which is, in fact, falsifiable.

We do agree on the importance of the question of consciousness. And even if we expect the solution to have a different shape, we both expect it to be embedded in physics (old or new).

I hope I've somewhat clarified my own views. But if not, I don't expect to do better in future comments, so I will bow out.

Again, thank you for the discussion.

Comment by jeronimo196 on Dark Side Epistemology · 2020-06-25T22:29:35.635Z · LW · GW

But note that Linux is a noun and "conscious" is an adjective—another type error—so your analogy doesn't communicate clearly.

Linux is also an adjective - linux game/shell/word processor.

Still, let me rephrase then - I don't need a wet cpu to simulate water. Why would I need a conscious cpu to simulate consciousness?

AFAIK, you are correct that we have no falsifiable predictions as of yet.

Do you expect this to change? Chalmers doesn't. In fact, expecting to have falsifiable predictions is itself a falsifiable prediction. So you should drop the "yet". Only then can you see your position for the null hypothesis it is.

The most obvious problem—that there is no "objective" subjective experience, qualia, or clear boundaries on consciousness in principle (you could invent a definition that identifies a "boundary" or "experience", but surely someone else could invent another definition with different boundaries in edge cases)—tends not to be perceived as a problem by illusionists, which is mysterious to me.

There is not a single concept that could not be redefined. If this is a problem, it is not unique to consciousness.

"A process currently running on human brains" -although far from being a complete definition, already gives us some boundaries.

I think you're saying the suffering has no specific location (in my hypothetical scenario), but that it still exists, and that this makes sense and you're fine with it; I'm saying I don't get it.

Suffering is a state of mind. The physical location is the brain.

By stimulating different parts of the brain, we can cause suffering (and even happiness).

Another way to think about it is this - where does visual recognition happen? How about arithmetic? Both required a biological brain for a long, long time.

And for the hypothetical scenario - let's say I am playing CS and I throw a grenade - where does it explode?

But perhaps illusionism's consequences are a problem? In particular, in a future world filled with AGIs, I don't see how morality can be defined in a satisfactory way without an objective way to identify suffering. How could you ever tell if an AGI is suffering "more" than a human, or than another AGI with different code? (I'm not asking for an answer, just asserting that a problem exists.)

That's only the central problem of all of ethics, is it not? Objective morality? How could you tell if a human is suffering more than another human?

I don't see how qualia helps you with that one. It would be pretty bold to exclude AGIs from your moral considerations, before excluding trees (and qualia has not helped you exclude trees!).

Edit: I now realize your position has little to do with Chalmers. Since you are postulating a qualia particle, which has causal effects, you are a substance dualist. But why rob your position of its falsifiable prediction? Namely - before the question of consciousness is solved, the qualia particle will be found.

Or am I misrepresenting you again?

Comment by jeronimo196 on Reductive Reference · 2020-06-25T12:42:46.817Z · LW · GW

How do you know that water always boils at the same temperature?

I remember reading it somewhere...

I see your point. But if water didn't always boil at the same temperature, why would we bother inventing thermometers?

The moral of the story is not so much that science always works, it's that it works in a way that's more coherentist than foundationalist.

Right. And since science does work, coherentism gets a big boost in probability, right until the sun stops rising every day.

And the downside of coherentism is that you can have more than one equally coherent worldviews...

But would they work equally well? We value science primarily for giving us results, not for being coherent.

If both views are equally coherent and give us equal results (or the results are as yet unclear), choosing one would be privileging the hypothesis.

Comment by jeronimo196 on Reductive Reference · 2020-06-25T09:02:13.098Z · LW · GW

Edit: Now I see Sister_Y addressed my point in the very next paragraph, so this entire comment is a reading comprehension fail more than anything.

Necroing:

poke - my friend likes to explain this to his undergrads by asking them how they would verify that a thermometer is accurate (check it against another thermometer, but how do you know that one is accurate . . . etc.) until they figure out that thermometers are only "accurate" according to custom or consensus. Then he asks them how they know their eyes work. And their memories.

Some of them cry.

Go to the beach, light a fire, boil some water. Put the thermometer in the boiling water - does it show 100 degrees Celsius? Still at sea level, put a cup of water in a fridge until it starts freezing. Put the thermometer in the cup - does it show 0 degrees Celsius?

If yes to both, you have a working thermometer. This way, you don't rely on the consensus of other thermometers. As for the custom of calling a working thermometer accurate - that's what it is for.
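
In code, the recipe above is just two-point calibration - anchoring the scale to physical fixed points instead of to other thermometers. A minimal sketch in Python, with made-up sensor readings and an assumed linear sensor:

```python
def calibrate(raw_at_freezing, raw_at_boiling):
    """Map raw sensor readings to Celsius, using the fixed points of water
    at sea level (0 and 100 degrees) instead of a reference thermometer."""
    scale = 100.0 / (raw_at_boiling - raw_at_freezing)
    return lambda raw: (raw - raw_at_freezing) * scale

# Hypothetical sensor: reads 3.0 in freezing water, 52.0 in boiling water.
to_celsius = calibrate(3.0, 52.0)
print(to_celsius(27.5))  # 50.0 - calibrated against physics, not consensus
```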

Eyes and memory can be similarly tested.

Of course, accepting the results of such tests requires accepting induction from the past. Maybe realizing you've faced "Last Thursdayism" for the first time at undergraduate level is something to cry about, but no one actually cries.

Lest I sound too smug, rest assured I am not convinced I would've done better before finding Less Wrong.

Comment by jeronimo196 on Is Humanism A Religion-Substitute? · 2020-06-19T09:09:17.998Z · LW · GW
A good "atheistic hymn" is simply a song about anything worth singing about that doesn't happen to be religious.

No, that's a good non-religious song. Without religion there would be no atheism, only the much broader scepticism. Atheism is a response to religion - to be considered "atheistic", a song could not avoid the topic. (Alternatively, we'd have to consider "Fear of the dark" a great a-Spiderman song).

The best atheistic song I've heard is "Dear God" by XTC - the last prayer of many a new atheist who has lost faith, but not yet the habit of praying:

Dear God, hope you get the letter and
I pray you can make it better down here
I don't mean a big reduction in the price of beer
But all the people that you made in your image
See them starving on their feet
'Cause they don't get enough to eat from God
I can't believe in you

Dear God, sorry to disturb you but
I feel that I should be heard loud and clear
We all need a big reduction in amount of tears
And all the people that you made in your image
See them fighting in the street
'Cause they can't make opinions meet about God
I can't believe in you

Did you make disease and the diamond blue?
Did you make mankind after we made you?
And the Devil too!

Dear God don't know if you noticed but
Your name is on a lot of quotes in this book
And us crazy humans wrote it, you should take a look
And all the people that you made in your image
Still believing that junk is true
Well I know it ain't, and so do you
I can't believe in
I don't believe

I won't believe in heaven or hell
No saints, no sinners, no devil as well
No pearly gates, no thorny crown
You're always letting us humans down
The wars you bring, the babes you drown
Those lost at sea and never found
And it's the same the whole world 'round
The hurt I see helps to compound
The Father, Son and Holy Ghost
Is just somebody's unholy hoax
And if you're up there you'll perceive
That my heart's here upon my sleeve
If there's one thing I don't believe in

It's you

And yes, there is a very clear god-shaped void, the disappointment of a promise unfulfilled.

There is also an "epic vocals" cover by Lawless (feat. Sydney Wayser) that is more hymn-like - prettier, but less angry. Both are worth listening to.


[Edited: formatting]

Comment by jeronimo196 on Explaining vs. Explaining Away · 2020-06-17T12:00:29.198Z · LW · GW

Because I believe things are what they are. Therefore if I introspect and see choice, then it really truly is choice. The other article might explain it, but an explanation can not change what a thing is, it can only say why it is.

An example of the mind projection fallacy so pure that even I could recognise it. Ian believes "he believes things are what they are". If Ian actually believed things are what they are, he would possess an unobtainable level of rationality and we would do well to use him as an oracle. In reality, Ian believes things are what they seem to be (to him), which is understandable, but far less impressive.

Comment by jeronimo196 on Dark Side Epistemology · 2020-06-16T16:09:43.534Z · LW · GW

I think of consciousness as a process (software) run on our brains (wetware), with the theoretical potential to be run on other hardware. I thought you understood my position. Asking me to pinpoint the hardware component which would contain suffering tells me you don't.

To me, saying the cpu (or the gpu) is conscious sounds like saying the cpu is linux - this is a type error. A pc can be running linux. A pc cannot actually be linux, even if "running" is often omitted.

But if one doesn't know "running" is omitted, one could ask where the linux-ness comes from, if neither the cpu nor the ram are themselves linux.

If a particle (or indivisible entity) does something computationally impossible (or even just highly intelligent), I call it magic.

But it does know to interact with mammals and not with trees and diamonds? ... Argh! You know what, screw it. This is like arguing how many angels can sit on top of a needle. Occam's razor says not to.

Does it pay rent in anticipation?

It pays rent in sensation.

Without falsifiable predictions, we have no way to differentiate a true ad-hoc explanation from a false one. Also, a model with no predictive powers is useless. Its only "benefit" would be to provide peace of mind as a curiosity stopper. (See https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences.)

I have a first-person subjective experience and I am unable to believe that it is only an abstraction.

I honestly don't see the disconnect. I don't think the existence of a conscious AGI would invalidate my subjective experiences in the slightest. The explanation is always mundane ("only an abstraction"?), but that doesn't detract from the beauty of the phenomenon. (See https://www.lesswrong.com/posts/x4dG4GhpZH2hgz59x/joy-in-the-merely-real).

(Otherwise I probably would have turned atheist much sooner.)

I believe you are right. Many people cite subjective personal experiences as their reason for being religious. This does make me doubt our ability to draw correct conclusions based on such.

Comment by jeronimo196 on Dark Side Epistemology · 2020-06-15T08:36:14.521Z · LW · GW

Using the word "yes" to disagree with me is off-putting.

Noted. Thank you for pointing this out.

I wasn't talking about the GPU.

Good to have that clarified.

... but I would be much more concerned about animal suffering than about my AMD Ryzen 5 3600X suffering.

Huh? I am now confused.

By the way, where will the suffering be located? Is it in the decode unit?...

Pain signals are processed by the brain and suffering happens in the mind. So, theoretically, the suffering would be happening in the mind running on top of the simulated cortex, inside the matrix. All the hardware would be necessary to run the simulation. The hardware would not be experiencing the simulation. Just as individual electrons are not seeing red.

I never said I rejected reductionism.

I misunderstood then - you do seem unhappy with the standard reductionist model's position on emotions and experiences as states of mind.

I reject illusionism.

What do you mean by "illusionism"? Is it only the belief that AGI or a mind upload could be conscious? Or is there more to it?

Quite the opposite. A magical particle would be one that is inexplicably compatible with any and every representation of human-like consciousness (rocks, CPUs of arbitrary design) - with the term "human-like" also remaining undefined. I make no claims as to its size. I claim only that it is not an abstraction, and that therefore known physics does not seem to include it.

And how do you know that? Why do you think this unknown particle is not compatible with rocks and CPUs? Is it because you get to define its behaviour precisely as you need to answer a philosophical question a certain way?

What evidence would it take to falsify your belief in this primitive particle? What predictions does it allow you to make? Does it pay rent in anticipation?

Comment by jeronimo196 on Dark Side Epistemology · 2020-06-11T13:44:34.345Z · LW · GW

"something-it-is-like to be a thing"

Ok, I could decipher this as a vague stand-in for experience. I would much prefer something like "the ability to process information about the environment and link it to past memories", but to each their own.

"the element of experience which, according to the known laws of physics, does not exist".

Uhm... Are you banking on a revolution in the field of physics? And later you even show exactly how reductionism not only permits, but also explains our experiences.

So in the standard reductionist model, there is no meaningful difference between minds and airplanes;

Yes, there is. One has states of mind and the other doesn't. How meaningful this difference is depends on your position on nihilism.

a mind cannot feel anything for the same reason an airplane or a computer cannot feel anything.

Wrong! The end of your paragraph shows why this is a wrong description of reductionism.

A mind is simply a machine with unusual mappings from inputs to outputs. Redness, cool breezes, pleasure, and suffering are just words that represent states which are correlated with past inputs and moderate the mind's outputs.

Yes. Exactly. Pleasure and suffering are just words, but the states of mind they represent are very much real.

It seems impossible for a quark (electron, atom) or photon to be aware it is inside a mind.

Correct - particles lack the computational power to know anything. Minds, on the other hand, can know they are made of particles. This is not a problem for reductionism. Actually, explaining how simple particles' interactions lead to observed phenomena on the macro level is the entire point.

Some would claim this AGI is "phenomenally conscious"; I claim it is not, since the hardware can't "know" it's running an AGI any more than it "knows" it is running a text editor inside a web browser on lesswrong.com.

Yes, no one would call your GPU conscious. The AGI is the software, though. The AGI could entertain the hypothesis that it lives in a simulation, even before discovering any hard evidence. Much like we do. Depending on its code, it could have states of mind similar to a human's and then I would not hesitate to call it conscious.

How willing would you be to put such an AGI in the state of mind described by reductionists as "pain", even if it is simply a program run on hardware?

but a primitive of some sort that exists in addition to the quarks that embody the state, and interacts with those quarks somehow.

If such a primitive does interact with quarks, we will find it.

I expect this primitive will, like everything else in the universe, follow computable rules

And then we have yet another particle. How is that different from reductionism?

so it will not associate itself with any arbitrary representation of a state, such as my single-threaded AGI or an arrangement of rocks.

Ah, it's a magical particle. It is smaller than an electron, yet it interacts with the quarks in the brain, but not those in the carbon of a diamond. Or is it actually big, remote and intelligent on its own (unlike electrons)? So intelligent it knows exactly what to interact with, and exactly when, so as to remain undetected?

If you are not postulating a god, you are at the very least postulating a soul under a new name.

See, once you step outside the boundaries of mundane physics, you get very close to theology very fast.

Comment by jeronimo196 on Dark Side Epistemology · 2020-05-29T12:12:11.634Z · LW · GW

Yes. Once I define qualia as "conscious experience", I necessarily have to leave it out of the definition of "consciousness" (whatever that may be).

My point is that only the question of consciousness remains. And consciousness is worth talking about only if human brains exhibit it.

I am not trying to solve the question of qualia, I am trying to dissolve it as improper.

P.S. Do you mind tabooing "qualia" in any further discussion? This way I can be sure we are talking about the same thing.

Comment by jeronimo196 on Dark Side Epistemology · 2020-05-29T10:59:43.905Z · LW · GW

Ok. I am still unsure of your position. Do you think other people have experiences, but we cannot say if those are conscious experiences? Or are you of the opinion we cannot say anyone has any kind of experiences? Could you please taboo "qualia", so I know we are not talking about different things entirely?

Comment by jeronimo196 on Perpetual Motion Beliefs · 2020-05-19T16:45:32.559Z · LW · GW

Oops! Beautiful. Your comment described my implicit assumptions probably better than I could, before showing me the error in my thinking. I will have to try and accept the consequences of QM on a deeper level. "It all adds up to normality" is a weak consolation, if you happen to be far from the median after all. It's also becoming blindingly obvious I should finally just sit down and read Feynman.

Huh. The universe is non-deterministic after all. Like, for real. I knew May was going way too peacefully.

Edit: Forgot to say: Thank you for that!

Comment by jeronimo196 on Perpetual Motion Beliefs · 2020-05-18T05:52:28.401Z · LW · GW

From what I understand, the Many Worlds interpretation of quantum physics is deterministic - everything possible does happen. It only seems probabilistic from inside one of the worlds. You can't predict the outcome of a quantum event, since different instances of you will observe all possible outcomes. Take this with a grain of salt, since Many Worlds is unproven (possibly unprovable?) and my understanding is surface level at best.

On the macro level, a coin toss of a fair coin becomes predictable if you have perfect knowledge and enough computational power. The point of probabilities and statistics is that they give us the rules for mapmaking with imperfect knowledge and limited computational power.

In short, a deterministic universe doesn't lead to certainty in our maps - hence "probability may be a "subjective state of belief", but the laws of probability are harder than steel."
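
To illustrate that last point, here is a toy sketch using the logistic map as a stand-in chaotic process (an illustrative assumption, not physics): the system is fully deterministic, yet an observer whose knowledge of the state is off by a millionth is soon left with 50/50 as the best possible map.

```python
def flips(x0, n):
    """Deterministic 'coin': iterate the logistic map in its chaotic regime
    and read off heads/tails. Perfect knowledge of x0 predicts every flip."""
    x, result = x0, []
    for _ in range(n):
        x = 3.9999 * x * (1 - x)
        result.append("H" if x > 0.5 else "T")
    return result

print(flips(0.123456, 30))
print(flips(0.123457, 30))  # knowledge off by one millionth: the sequences
                            # soon diverge, leaving 50/50 as the best forecast
```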

Comment by jeronimo196 on New Improved Lottery · 2020-05-18T03:15:17.442Z · LW · GW

Bitcoin might be a desperate get-rich-quick scheme. However, the odds are not as small as in Eliezer's lottery. Also, some people use it to purchase illegal goods and services, so there's that. There are similarities, but there are also important differences. For one, there is an upper limit to how much you can lose with the lottery - not so with crypto.

In short, cryptocurrencies are similar to Eliezer's lottery only to the extent that all day trading is gambling. Which is true often enough, but not always.

Comment by jeronimo196 on Words as Mental Paintbrush Handles · 2020-05-06T09:32:18.196Z · LW · GW

The article is asserting that there are people who can construct mental images, not that all people can construct mental images.

Comment by jeronimo196 on Extensions and Intensions · 2020-04-21T08:10:42.325Z · LW · GW

To give an "extensional definition" is to point to examples, as adults do when teaching children. The preceding sentence gives an intensional definition of "extensional definition", which makes it an extensional example of "intensional definition".

Why do you feel the need to do this?

Comment by jeronimo196 on Lost Purposes · 2020-04-21T02:25:41.086Z · LW · GW

Necroing:

In cases of delayed effects, linking pensions to outcomes years later might provide incentives.

For instance, teachers pensions could depend on the eventual earnings of their pupils, or elections could have tickboxes for evaluating the last politician.

(which doesn't help the problem of credit being due to many people)

Punishments and rewards should be kept as close to the (un)desired behaviour as possible. The noise-to-signal ratio should be kept as low as possible. Linking teachers' pensions to their pupils' earnings (which depend heavily on things like "the economy") is one of the most demotivating proposals I have ever heard.

As for grading politicians - you either vote for their party, or you don't. Which is the only feedback from the general public they maybe care about. (But why would they - most democracies have a two party system. Get elected, fill your pockets, wait your turn. Maybe wait two terms - big whoop!)

If you want to experience what a proper, habit-forming, behaviour-changing incentive structure feels like, I suggest trying a "free-to-play" mobile game. Clear objectives, daily tasks, randomized rewards - Skinner would be proud. Be warned, the end result is not that different from a casino. But if you don't have an addictive personality, the risk might be worth it. I walked away with the following enlightenment: "All that psychology I've been reading about? It really does work! On me!!!".

Of course, gamifying complex tasks is difficult and in certain cases might be impossible. I think it is seriously worth a try in education.

Comment by jeronimo196 on Dark Side Epistemology · 2020-04-19T22:36:43.084Z · LW · GW

To avoid semantic confusion, here is the Wikipedia definition of qualia: "In philosophy and certain models of psychology, qualia (/ˈkwɑːliə/ or /ˈkweɪliə/; singular form: quale) are defined as individual instances of subjective, conscious experience." https://en.m.wikipedia.org/wiki/Qualia

If I take a digital picture, I can convert the file to BMP format and extract the "red" bits, but this is no evidence that my phone has qualia of redness. An fMRI scanning a brain will have the same problem.

You are skipping the part where we receive confirmation from the patient that he sees the redness. This, combined with the fMRI, should be enough to prove the colour red has been experienced (i.e. processed) by the patient's brain.

Now one question remains - was this a conscious experience? (Thank you for making me clarify this, I missed it in my previous comment!)

I propose that any meaningful philosophical definition of consciousness related to humans should cover the medical state of consciousness (i.e. the patient follows a light, knows the day of the week, etc.). If it doesn't, I would rather taboo "consciousness" and discuss "the mental process of modeling the environment" instead.

Whatever the definition of consciousness, as long as it relates to the function of a healthy human brain, it entails qualia.

However, if the definition of consciousness doesn't include what's occurring in the human brain, why bother with it?

The idea that everyone has qualia is inductive: I have qualia (I used to call it my "soul"), and I know others have it too since I learned about the word itself from them. I can deduce that maybe all humans have it, but it's doomed to be a "maybe".

I've heard people speaking of a soul before - it did not convince me they (or I) have one. I would happily grant them consciousness instead.

If someone were to invent a test for qualia, perhaps we couldn't even tell if it works properly without solving the hard problem of consciousness.

Even without solving the hard problem of consciousness, as long as we agree that consciousness is a property the human mind has, the test can be administered by a paramedic with a flashlight.

We will need the solution when we try to answer if our phone/dog/A.I. is conscious, though.

(I recently worked out a rudimentary solution (most probably wrong), which relies heavily on Eliezer's writings on the question of free will later in the Sequences. I am reluctant to share it here, since it would spoil Eliezer's solution and he advises people to try working it out for themselves first. I could PM or ROT13 in case of interest.)

Comment by jeronimo196 on Tsuyoku vs. the Egalitarian Instinct · 2020-04-15T11:34:20.735Z · LW · GW

Thank you, that was both funny and relevant.

Comment by jeronimo196 on The Power of Intelligence · 2020-04-03T08:58:07.205Z · LW · GW

Great post. But, squishy as we are, there are two physical activities in which we have dominated the animal kingdom for a long time: throwing stones and long-distance running. A silverback gorilla can tear you apart limb from limb, but have you seen them attempt to throw something? Pitiful! The most they can achieve is flinging. And while intelligence is what started us on the path to developing the strongest long-range attack, the key to endurance hunting is our water cooling system (i.e. sweating). A trained human can literally run at a horse until it drops from heat stroke.

We ran at things and threw rocks at them for a long time. And when we caught things, we broke their bones and skin with more rocks. After a while, the muscles for our jaws atrophied. Which, combined with the caloric surplus from our successful hunts, allowed our skulls and brains to grow. And then - then it was big brains time!

"Born to run" is further read on the topic - it also describes the proper technique of running (forefoot striking instead of heel striking, think rope jumping and stepping quietly). Although, the author does not seem to know math very well and his style might be too obnoxious for some.

TL;DR - intelligence is cool, but sweating should receive some recognition as the other super-power we humans enjoy. Or, as Toph Beifong would put it: "You are a genius. A sweaty, stinky genius."

Comment by jeronimo196 on Extreme Rationality: It's Not That Great · 2020-04-01T19:51:02.056Z · LW · GW

Necroing.

Extreme rationality is for important decisions, not for choosing your breakfast cereal.

Your dietary decisions are supposed to have large and long lasting effects on your health. Take into account the multiple and conflicting opinions on what constitutes a good diet, the difficulty of changing one's mind and habits, and it seems extreme rationality might be just the thing you need for choosing breakfast.

Comment by jeronimo196 on No One Can Exempt You From Rationality's Laws · 2020-03-31T13:42:51.270Z · LW · GW

When the original comment was posted, those chapters of HPMoR were probably not written yet.

Also, spoilers are often written in ROT13 around here. (https://rot13.com/)

Comment by jeronimo196 on Evaluability (And Cheap Holiday Shopping) · 2020-03-22T16:11:10.776Z · LW · GW

Correct. Which is why risk management and diversification are crucial and why you should never bet more than you can afford to lose. I have this as an implicit rule, but I should have mentioned it. Thank you for pointing this out.

Edit: And I should probably read up more on the Gambler's ruin. I can see how expected value maximization doesn't hold up in the extreme cases, but it had to be pointed out to me first.
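
For anyone else curious, a quick Monte Carlo sketch of the Gambler's ruin point (all parameters are arbitrary): a bet can have positive expected value and still bankrupt you reliably, if the bankroll is thin relative to the stake.

```python
import random

def ruin_probability(bankroll, stake=1, p_win=0.55, trials=2_000, max_rounds=2_000):
    """Estimate the chance of going broke while repeatedly taking a
    favourable even-money bet (EV = +0.10 per unit staked)."""
    ruined = 0
    for _ in range(trials):
        money = bankroll
        for _ in range(max_rounds):
            money += stake if random.random() < p_win else -stake
            if money <= 0:
                ruined += 1
                break
    return ruined / trials

print(ruin_probability(bankroll=2))   # thin cushion: frequent ruin despite positive EV
print(ruin_probability(bankroll=20))  # 10x the cushion: ruin becomes rare
```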

Comment by jeronimo196 on Evaluability (And Cheap Holiday Shopping) · 2020-03-22T14:28:58.522Z · LW · GW

One should factor in the odds of similar games occurring multiple times throughout one's life (unless one is a frequent visitor of casinos). I claim that these are too low for the situation to "add up to normality".

No, one shouldn't. Playing a game of chance once or a thousand times does not influence the outcome of the next round (aka the gambler's fallacy). If a bet is a good idea one time, it's a good idea a thousand times. And if a bet is a good idea a thousand times, it is a good idea the first time. How could you consider betting a thousand times to be a good idea, if you think each individual bet is a bad idea?

Besides that, even if you don't encounter exactly the same odds, you will encounter some odds. The numbers change, the way we make decisions remains the same.

The point of probabilities is not that the expected outcome occurs every time. The point is that it IS the expected outcome. You might never be asked to participate in a betting game at the Mall. But at the end of your life, the sum of all your bets, big and small, will (probably) add up to normality.
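
That last claim is easy to check numerically. A sketch with invented numbers: a lifetime of one-off favourable bets, none of them repeated, still lands near the summed expectation.

```python
import random

random.seed(0)

# 10,000 one-off bets, each with a different stake and win probability,
# but each constructed to have expected value = +10% of its stake.
bets = [(random.uniform(1, 10), random.uniform(0.2, 0.8)) for _ in range(10_000)]

total_ev = sum(0.1 * stake for stake, _ in bets)
outcome = sum(stake * (1.1 / p - 1) if random.random() < p else -stake
              for stake, p in bets)

print(f"expected: {total_ev:.0f}, actual: {outcome:.0f}")
# No bet repeats, yet the lifetime total tracks the sum of expectations.
```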

Answering the question asked... I could start considering the second choice at 25% chance of 15 (probably properly 16 but my gut feeling has, of course, rounded it) and preferring it at... well, maybe never?

So, to summarize (and simplify a bit) - you would start considering letting go of an (almost) certain $2 only after the expected value is (around) $4.

I am sure you will not be surprised to learn you are not atypical in your preference: "Some studies have suggested that losses are twice as powerful, psychologically, as gains.[1]" (https://en.m.wikipedia.org/wiki/Loss_aversion). You might also be interested in the following articles: https://en.m.wikipedia.org/wiki/Risk_aversion and https://theintactone.com/2018/05/04/bf-u3-topic-5-loss-aversion-gamblers-fallacy/

Now, just because behaviour is common doesn't mean it's wrong. The loss aversion bias is called a bias, because it does lead to missed opportunities.

For example, a combination of loss aversion, risk aversion, status quo bias and laziness leads to otherwise conscientious people keeping their savings in the bank. For them, any form of investment is deemed too risky, the topic too stress-inducing to research or consider. Of course, inaction is also a decision and there is a thing called inflation - but you cannot put a price on your peace of mind./s

I realize the irony of writing about investments in 2020 - but when I advised my best friend to buy gold (aka going long on fear), the response was "You know, I do not really believe in such things." I've raised the topic once with each of my friends; the lack of interest is almost universal. Paper just seems so much... safer... if you do not think about it.

and preferring it at... well, maybe never?

Never?!!!

You do realize never includes, but is not limited to, $1000, $100,000, $1,000,000 or, indeed, $3^^^3?

Never!!!!

I hope you won't take offence, but I don't know how else to say it - you have expressed a preference for the (let's say) certainty of winning $2 so extreme that I find it hard to believe you would stick to it in practice.

Edit: formatting.

Comment by jeronimo196 on Evaporative Cooling of Group Beliefs · 2020-03-22T10:40:13.619Z · LW · GW

Please don't feed the troll. Not even under a post on why banning eloquent trolls might be a bad idea.

Comment by jeronimo196 on The Halo Effect · 2020-03-15T16:36:27.308Z · LW · GW

Nevertheless, it feels just fine to know, that democracy would most probably put something ratlike from the KGB ranks and dungeons into high security cell, and not in the White House.

Necroing, since this is such a stunning display of the Halo effect in action - as if there has never been a single handsome KGB agent, or as if all "rat-like" people belong in a cell.

I realize this was an insult hurled at Putin, but it's extremely poorly worded and with some unpleasant implications. Would a good looking KGB agent be preferable? Is Putin that physically repulsive, or is matt33 misjudging his appearance due to the horn effect?

Comment by jeronimo196 on Evaluability (And Cheap Holiday Shopping) · 2020-03-15T15:29:55.425Z · LW · GW

Because I, seeing both things side by side, still prefer 29/36 over 7/36 by the easy reasoning "better win $2 then nothing", and, if I'm only playing once, I claim it is a good strategy

You are not only playing once. If you follow a consistent decision making strategy, your choice should hold every time you face similar odds in the future. Or at least, for as long as you plan to play similar games.

Mathematically, $2 is the same as a 50% chance to win $4, is the same as a 25% chance to win $8. So every time I am offered $2 or a 25% chance to win $9, I should choose the latter. (Most people, however, are extremely risk averse - I think the ratio is something like 2:1, which leads to missed opportunities).

This holds for me until someone offers me a million dollars, or a 50% chance to win whatever larger sum. I can retire on a million; everything above that is imaginary. Or until I am stranded in the mall with no wallet and desperately need $1.50 for a bus ticket. Which is why every investor needs 6 months' worth of expenses - you don't want to close positions at a bad time, just so you can pay rent.
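
The million-dollar exception is just diminishing marginal utility. A sketch with invented numbers - log utility is one standard model of diminishing returns, and the $50,000 starting wealth is an arbitrary assumption:

```python
import math

wealth = 50_000  # assumed starting wealth

# Sure $1,000,000 vs a 50% shot at $3,000,000.
ev_sure, ev_gamble = 1_000_000, 0.5 * 3_000_000

u_sure = math.log(wealth + 1_000_000)
u_gamble = 0.5 * math.log(wealth + 3_000_000) + 0.5 * math.log(wealth)

print(ev_gamble > ev_sure)  # True: the gamble wins on raw expected value
print(u_sure > u_gamble)    # True: the sure million wins on log utility -
                            # "I can retire on a million" in equation form
```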

With that in mind - and if $9 still seems low - at what price would you choose the second bet?

Comment by jeronimo196 on The Logical Fallacy of Generalization from Fictional Evidence · 2020-03-14T19:59:22.247Z · LW · GW

Bob Merkelthud slid cautiously through the door of the alien spacecraft, glancing right and then left (or left and then right) to see whether any of the dreaded Space Monsters yet remained. At his side was the only weapon that had been found effective against the Space Monsters, a Space Sword forged of pure titanium with 30% probability, an ordinary iron crowbar with 20% probability, and a shimmering black discus found in the smoking ruins of Stonehenge with 45% probability, the remaining 5% being distributed over too many minor outcomes to list here.

I now desperately want to read a choose-your-own-adventure written by EY.

Comment by jeronimo196 on Dark Side Epistemology · 2020-03-08T23:44:14.366Z · LW · GW

Jordan Peterson's redefinition of truth comes to mind. During his first appearance on Sam Harris' podcast, he presented the following: "Nietzsche said that truth is useful (for humanity). Therefore, what is harmful for humanity, cannot be "true". Example - if scientists discover how to create a new plague, that knowledge may be technically correct, but cannot be called "true". On the other hand, the bible is very useful. Like, extremely useful. So very useful, that even if not technically correct, the bible is nevertheless "true"."

Of course, how to judge whether "E=mc^2" is "true" or only correct (before the Apocalypse!) is left to the listener. The important part is being able to say that the bible is "true", everything else is secondary.

Comment by jeronimo196 on Dark Side Epistemology · 2020-03-08T22:34:36.856Z · LW · GW

I don't think the proponents of qualia as metaphysical would agree that such a test is possible in theory - otherwise you could put someone in an MRI scan, show him a red square, monitor for activity in his visual cortex and wait for him to confirm he sees "the redness". This should be enough to conclude some "redness"-related experience has occurred in the subject's brain (since qualia is supposed to be individual, differences in experience are expected - it doesn't have to be exactly the same). And yet the question of philosophical zombies remains (at least according to some philosophers).

Comment by jeronimo196 on Entangled Truths, Contagious Lies · 2020-03-08T14:32:42.918Z · LW · GW

But compared to outright lies, either honesty or silence involves less exposure to recursively propagating risks you don’t know you’re taking.

Only if you value unblemished reputation over the short term gain provided by the lie. Fooling some of the people some of the time might be sufficient for an unscrupulous agent.

Comment by jeronimo196 on Are Your Enemies Innately Evil? · 2020-03-04T13:48:46.526Z · LW · GW

Necroing, I couldn't help myself:

An accurate estimate of anyone else’s psychology is a dubious benefit in strategic interactions that depend solely on being able to predict the actions of friend and foe.

An accurate estimate of anyone else's psychology should improve your ability to predict their actions.

By parity of reasoning, the rational principle of seeking our own advantage allows us to use our enemies at our pleasure, and treat them as is most convenient for us. For our civic nature is defined by the constitution of our state; and to the extent that foreign subjects do not agree in nature with us, and their affects are different in nature from our affects, we would be ill served by extending our habitual notions of humanity, formed through intercourse with our compatriots, to anyone that does not partake of our social compact.

Beautiful. If I may misquote Thanos - "Using Christianity to destroy Christianity." I am not sure Spinoza would agree with the notion that foreigners are non-human, though.

Leaving questions of theology and morality aside, the point of the above post is that thinking of your enemies as non-human will interfere with your ability to accurately model their motivations and predict/influence their future behaviour.

Comment by jeronimo196 on Are Your Enemies Innately Evil? · 2020-03-04T12:26:58.867Z · LW · GW

Yep. Young males have engaged in high risk/high reward behaviour for personal glory/the good of the tribe since the dawn of time. One of the socially accepted and encouraged outlets for this behaviour is called being a warrior.

Comment by jeronimo196 on Your Rationality is My Business · 2020-03-02T18:38:36.522Z · LW · GW

If there are any true Relativists or Selfishes, we do not hear them—they remain silent, non-players.

A truly selfish person will still be concerned with PR. Concealing one's selfishness might prove impossible, or at the very least, inconvenient. Far better to convince society it's a virtue. Especially if one is already standing on a heap of utility and does not feel like sharing.

Comment by jeronimo196 on Welcome to Less Wrong! (2012) · 2020-03-02T12:19:28.343Z · LW · GW

No worries :) and no reason to be sorry - the bell is quite obvious on PC, but my android phone only shows it when scrolling. Probably an issue on my side.

Comment by jeronimo196 on Welcome to Less Wrong! (2012) · 2020-02-29T22:22:45.868Z · LW · GW

Thank you! You have no idea how happy your reply makes me! In irrationally large part because I've seen your name in a book - I just cannot help myself. You are alive! (Duh!) More importantly, the lesswrong community is alive! (Double Duh!, but going through the Sequences' comments can be a bit discouraging - like playing the first levels of an MMORPG, while the experienced player base has moved on to level 50.) Hopefully, we'll have many interesting discussions once I catch up. So much to look forward to! Will Alicorn be there? Will TheOtherDave explain what happened to the original Dave? You guys are legends.

P.S. Sorry for the delayed response, I didn't notice the number next to the bell earlier. I'll make sure to check it frequently from now on.

Comment by jeronimo196 on The Proper Use of Humility · 2020-02-28T10:37:38.098Z · LW · GW

This was meant as a joke. Sorry if the intent is not obvious.

Comment by jeronimo196 on How to Convince Me That 2 + 2 = 3 · 2020-02-27T19:03:00.091Z · LW · GW

God showing up and granting all humans Wolverine's healing factor would be evidence he exists. Providing a good explanation of why he permitted disease in the first place might convince me he is not as evil as described in the Bible.

Edit: Aliens playing god would still be far more likely, but the above scenario would be evidence in favour of the god hypothesis.

Comment by jeronimo196 on New Improved Lottery · 2020-02-26T13:29:51.998Z · LW · GW

They actually don't. Glossing over all the details, anyone who bought bitcoin 13 years ago (and just left it alone) received far better return than anyone buying into the proposed lottery would have. Results matter.

Comment by jeronimo196 on Lotteries: A Waste of Hope · 2020-02-26T08:52:51.812Z · LW · GW

Necroing because that seems to be an often expressed sentiment I strongly oppose:

May it be to poor education at a younger age, traumatic experiences, simply having the wrong education for the current playfield, being sacked at an older age, having no available finances to support further education, a lack of intelligence or simply spiraling down the road of depression due to a lack of chances or being stuck in a debt one can never recover of in a lifetime... these are all scenario's in which the player on the Lotto actually rationally pays for the soothing dream of a better (financial) future. A future that will not happen if one would not win the lottery.

Was that a fully general excuse for stupid behaviour on behalf of poor people, or are you just happy to meet me?

I wonder, would this be used for gambling only, or could it also cover drug abuse, alcoholism, crime?

Spare me the pity party. Reality doesn't care about sob stories - it doesn't matter how much in debt you are, how unfortunate your circumstances and how great your need - you are not going to win the lottery. Shielding people from this fact is not doing them any favours.

Edited: the first sentence was needlessly rude and confrontational, also spelling.

Comment by jeronimo196 on Tsuyoku vs. the Egalitarian Instinct · 2020-02-24T09:06:47.149Z · LW · GW

Time for some necroing. People who suffer from depression are trying to achieve levels of happiness corresponding with reality (maybe not with the express purpose of clearer perception of reality, but still...)

I can imagine a condition causing someone to experience excessive happiness - such a person could conceivably want to lower his level of happiness, so he could grieve for the loss of a loved one.

Feelings should be rational - https://www.lesswrong.com/posts/SqF8cHjJv43mvJJzx/feeling-rational

As Carlin said in one of his routines, self-confidence (in relation to achievement) is like the fuel gauge in a car. Turns out, messing with it doesn't actually let you go further (he claimed to base this assertion on studies, so I am sure it's true). Happiness may be similar, serving better as a motivator than as a terminal value in itself.

In any case, I suspect most people here would not climb into a tub filled with orgasmium.

But if you want to mess with the gauge regardless, I know a stupid method that works: stand with your back straight, shoulders wide, head held high. Smile broadly (showing teeth). Hold this pose for 5 minutes (by an actual clock).

Thinking happy/funny thoughts is optional. Being grateful for the state of (at least relatively) good health and trying to enjoy each breath (you might have a finite amount, after all) are also optional.

With this method, I could be happy during my own funeral. And yet, I am not maintaining MAXIMUM HAPPINESS 24/7. Why? Turns out, constant happiness can be quite boring. Still, the method is not at all useless - sometimes the gauges actually need calibration and I do enjoy the option very much indeed. (And to think some people pay for drugs... What a waste.)

Comment by jeronimo196 on The Proper Use of Humility · 2020-02-23T19:28:26.820Z · LW · GW

Backing up... everything. Deploying changes to test environment before deploying to production. Accepting Murphy's Law unto yourself. Looking twice before crossing the street. Developing a blanket policy of general paranoia. Promoting a blanket policy of general paranoia. Developing alcoholism. Promoting alcoholism. Etc...

Edit: I forgot arguably the most important one: admitting you cannot reliably do better than the market by picking individual stocks (nobody can!) and buying market ETFs instead.

Comment by jeronimo196 on Rationality: An Introduction · 2020-02-23T17:53:14.612Z · LW · GW

What Liliet B said. Low priors will screw with you even after a "definitive" experiment. You might also want to take a look at this: https://www.lesswrong.com/posts/XTXWPQSEgoMkAupKt/an-intuitive-explanation-of-bayes-s-theorem

Comment by jeronimo196 on The Simple Truth · 2020-02-23T15:42:47.115Z · LW · GW

OG Darwin - harsh, but also unfair. Terribly, terribly unfair.

Comment by jeronimo196 on Explain/Worship/Ignore? · 2020-02-23T09:00:31.127Z · LW · GW

"Neither true nor false..." Not so. We gather such stories and treasure them. But at the end of the day, we label them fiction (or mythology, if some portion of humanity believed them to be true at some point) and know better than to go looking for Hogwarts. We know fiction is not corresponding with reality, not part of the map, in other words - not true. In every sense that matter, we treat fiction as false.

All that is good and proper - as long as such works don't claim to describe factual events.

Comment by jeronimo196 on Welcome to Less Wrong! (2012) · 2020-02-22T19:21:05.202Z · LW · GW

Hello lesswrong community!

"Who am I?" I am a Network Engineer, who once used to know a bit of math (sadly, not anymore). Male, around 30, works in IT, atheist - I think I'll blend right in.

"How did I discover lesswrong?" Like the vast majority, I discovered lesswrong after reading HPMOR many years ago. It remains my favourite book to this day. HPMOR and the Sequences taught me a lot of new ideas and, more importantly, put what I already knew into a proper perspective. By the time HPMOR was finally finished, I was no longer sure where my worldview happened to coincide with Mr. Yudkowsky, and where it was shaped by him entirely. This might be due to me learning something new, or a mixture of wishful thinking, hindsight bias and the illusion of transparency, I don't know. I know this - HPMOR nudged me from nihilism to the much rosier and downright cuddly worldview of optimistic nihilism, for which I will be (come on singularity, come on singularity!) eternally grateful.

"When did I became a rationalist?" I like to think of my self as rational in my day-to-day, but I would not describe myself as a rationalist - by the same logic that says a white belt doesn't get to assume the title of master for showing up. Or have I mixed those up and "rational" is the far loftier description?

"Future plans?" I am now making a second flyby over the Sequences, this time with comments. I have a few ideas for posts that might be useful to someone and a 90% complete plotline for an HPMOR sequel (Eliezer, you magnificent bastard, did you have to tease a Prologue?!!!).

Looking forward to meeting some of you (or anyone, really) in the comments and may we all survive this planet together.