It seems to me that knowing only a little (and/or being bad at applied math) is kind of a prerequisite for the level of enthusiasm involved in using it as a movement name. It's exciting to see all those bits of evidence and to see yourself one-upping all those classy educated people who are dead set against using those bits of evidence, or who even seem to use them in the completely wrong way. It's even more fun to do that with friends.
You know a little math, and it seems to make a huge difference to everything; that's exciting.
Or you spent years studying and/or working, and all that math almost never matters: almost any evidence that's not overwhelmingly strong is extremely confounded with what's already been considered, and/or with the chain of events that brought the thing to your attention.
Same here. The reason I think so little of self-proclaimed Bayesianism is the sort of thinking where someone sees an ugly person accused and goes, ha, I am going to be more rational than everyone else today by ticking my estimate of guilt up because they're ugly. Completely ignorant that it even makes a difference to how you should apply Bayes' rule that the police, the witnesses, and so on had already picked the suspect with this sort of prejudice.
I mentioned duplication: that among 3^^^3 people, most have to be exact duplicates of one another from birth to death.
In your extinction example, once you have substantially more than the breeding population, extra people duplicate some aspects of your population (ability to breed) which causes you to find it less bad.
The other observation that occurred to me is unrelated. It is about the idea of harm being nonlinear, which, as I noted above, is just plain not enough to invalidate the torture/specks argument by itself, because of the ability to keep thwacking a nonlinear relationship with bigger numbers until it collapses.
Not every non-linear relationship can be thwacked with bigger and bigger numbers...
But does beauty influence our judgement in accordance with the correlation, or disproportionately so? It may be, for example, that ugly people are 10% more likely to commit crimes, 200% more likely to be villains in the movies, and 100% more likely to get flagged as suspects by the prosecutor, or to get some other massive penalty before you even think about it consciously.
Okay, let's go with your number. Let's suppose, hypothetically, that you aren't beating or otherwise unduly coercing cute girls into saying what you want, and you started with a probability of 2.5%. Then your suspect tells you they were at the house covering their ears so as not to hear the screams as their big black boss murdered the victim. Now what happens to the 2.5%? And after you clear the big black boss, what happens?
I don't think you can claim base rate neglect without also claiming police brutality, coercion, and leading the witness (which would be a much bigger problem).
I think it'd be quite strange to claim that confessions don't ever correlate with guilt.
By the way, what she did was claim she was at the scene of the crime covering her ears as Lumumba murdered Kercher (and no, she didn't call 112 about it or anything). If, as she says, she was coerced into making such a statement, then yeah, that's not evidence of guilt. But if it is as the police say it is, do you still think it's not evidence of guilt?
Picture an alternative universe. Bob, an exchange student from Australia, is being questioned as a witness. There's a minor discrepancy: Jake, his friend, withdrew his alibi for the night. Those things happen, you don't really think too much of it, but you have to question Bob. You're somewhat suspicious but not highly so. Without much of a prompt, Bob tells a story of how he was covering his ears as Peter, his boss, was murdering the victim.
Now what do you do with Bob, exactly? Let him go once you clear Peter? Keep him because he's not a cute girl?
Now, we aren't sure that this is how it went. The police claim that this is how it went, and Knox claims that she was pretty much beaten into that statement, and it's one word against the other.
Her being psychopathic would likely have led to other facts that a well-funded prosecution could uncover.
She's a foreigner; there's no budget for transatlantic flights to figure out whether she had been cruel to animals as a child or the like, there's no jurisdiction, and you can't use that sort of stuff in court anyway.
Well, it's a fairly specific type of breaking down, to be accusing other people. There are other ways of breaking down, you know. And if her account of the interrogation is false, and the police's account is true, that goes well beyond the lie about being slapped. She said she was at the scene of the crime covering her ears as the black owner of the bar she worked at was murdering the victim, and if you know you didn't coerce the witness into making such a statement, that's very different from having coerced the witness into it.
While that is perhaps insufficient evidence in a court of law, the prosecutor is not the court of law; the prosecutor merely needs a strong suspicion for it to be their job to try to convict.
Ultimately we have Knox's word against the police's, and both sides have a coherent story on which either side could be right.
Well, for what it's worth, their wounds-and-bruises guy didn't think it was a single killer. And when someone's murdered at their own place in the dead of night, the cohabitants are often involved.
An interesting piece of easily quantifiable Bayesian evidence could be a phone being switched off overnight (dropping off the network): how often did Knox do that? If she only did that once in many hundreds of days, and on that night in particular, then that could be a very large amount of evidence. Or she may have done it a few times a week, in which case it's irrelevant.
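To make that concrete, here is a toy likelihood-ratio calculation with entirely invented numbers (the 0.9 below is just an assumption standing in for "someone involved probably switches their phone off that night"):

```python
# Toy likelihood-ratio calculation for the "phone off overnight" evidence.
# All numbers are made up for illustration only.

def likelihood_ratio(p_off_given_involved, p_off_given_innocent):
    """How many times more likely the evidence is under 'involved' than under 'innocent'."""
    return p_off_given_involved / p_off_given_innocent

# Case 1: the phone was switched off maybe once in 300 nights.
base_rate_rare = 1 / 300
# Case 2: it was switched off a few times a week, say 2 nights in 7.
base_rate_common = 2 / 7

# Assumed probability that someone involved switches the phone off that night.
p_off_given_involved = 0.9

print(likelihood_ratio(p_off_given_involved, base_rate_rare))    # ~270: strong evidence
print(likelihood_ratio(p_off_given_involved, base_rate_common))  # ~3: weak evidence
```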
I think you're skipping some details. Sollecito withdrew his alibi for Knox. Then Knox implicated Lumumba, and they really went after the guy. Interestingly, they failed to railroad Lumumba in the way in which you think they railroaded Knox. Which to me is really interesting, because it doesn't fit the 'evil police' story.
Knox, of course, claims it was extremely coercive, took hours, and involved some physical abuse from the police. The police deny abuse. We can't really tell either way, but the prosecution would know how coercive they were. So that's another opportunity to really piss the prosecution off.
edit: another thing, the wounds and bruises on the body were interpreted as Kercher having been held by one person and stabbed by another. This is the reason the prosecution got so completely sure that more than one person was involved. Yeah, it's rather subjective and unreliable, but people can be very sure about that sort of thing.
There are all sorts of complicated details that are completely missing from the US coverage of the trials, which make the prosecution's position much more understandable. Perhaps the prosecution did not have sufficient evidence, but neither did the prosecution come up with some batshit insane theory out of the blue, for no reason, when Guede alone would have explained everything.
edit: also, Guede was not some random robber; he knew the people downstairs and had at least briefly met Knox before. If he had been a random robber who never set foot on the premises, then Bayes-wise it would have been a no-brainer: it's just unlikely that two independent parties who had no chance to pick each other would both be on board with a murder.
The major US media often got minor details wrong (especially details having to do with how the Italian legal system works)
Claiming that Guede implicated Sollecito and Knox as part of a plea bargain and got his sentence cut down for it sounds quite major to me.
Likewise, there's a major disagreement over the interrogation in which Knox implicated Lumumba (whom the police later cleared, by the way; the same bad police). Knox claims it came after a many-hours-long interrogation and that she was literally hit on the head by some police officer; the police say she did this right away, and deny brutality.
How the fuck is it a clear-cut question whether an American girl got hit by the Italian police, on the basis of her word alone? There's nothing clear about allegations like this.
Precisely. It's also implying that atheists are moral nihilists, which is BS. Plenty of religious people believe in a God who will grant them passage to heaven irrespective of their moral conduct, just as long as they repent and accept Jesus; and plenty of atheists are not moral nihilists.
What I'm saying is that in the context of having religious extremists do all sorts of raping and murdering (of nonbelievers), advancing a pro-religion argument with this sort of thought experiment is really stupid.
Then there's the usual sentiment that belief in God keeps people from raping and murdering, and it is just empirically false. You can even believe in God and be a total moral nihilist all the same (accept Jesus and go to heaven no matter what).
I think it's worth reading this if you think it's some variety of a clear cut case.
Prosecutors may also be less likely to accuse women. I wonder what the rate of women being accused of murder is: if it is 1/10, just as the murder rate is, then this 1/10 can cancel out in the courtroom.
The prosecutor is already using whatever priors they wish, including racist and sexist priors, when they select the suspects to bring to court; if the court then does the same, they'll be double-counting.
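A toy version of that cancellation, with invented numbers:

```python
# Toy illustration of double-counting a prior. All numbers are invented.
# Suppose women commit 1/10 of murders, and suppose the prosecutor already
# factors that in when deciding whom to charge, so that among people actually
# charged, the gender ratio matches the gender ratio among actual murderers.

prior_female_murderer = 0.1   # share of murderers who are women (hypothetical)
prior_female_accused  = 0.1   # share of accused who are women (hypothetically the same)

# If the accusation process already "used up" the gender information, then the
# correct courtroom update on "the defendant is a woman" is a likelihood ratio of 1:
likelihood_ratio_in_court = prior_female_murderer / prior_female_accused
print(likelihood_ratio_in_court)  # 1.0 -- the 1/10 cancels out

# Double-counting would mean applying the 1/10 again on top of the prosecutor's
# filter, wrongly cutting the posterior odds of guilt by a further factor of 10.
```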
Ultimately it all comes out in the wash once you start accounting for things like her trying to frame Lumumba.
Keep in mind also that there's evidence available to the prosecution but unavailable to you: Knox's claim that she got slapped during the interrogation, and other claims that those present at the interrogation know for certain to be true or false.
I can see it going either way: if I were the police present at the interrogation, and I then saw her completely lying about how the interrogation went, then the reference class is not cute girls, it's psychopaths, and not very smart ones either. On the other hand, maybe she didn't lie about the interrogation. I can't know, but those present at the interrogation would know.
edit: also the thing is that a lot of the physical evidence was not reported on by the US media.
Basically there is a lot of physical evidence that if valid would massively overpower any "cute girl" priors. So the question is not about those priors but about the possible alternative explanations for said evidence and said evidence's validity.
Well, a common case of people seeing their family get raped and murdered is occurring right now (ISIS related shit) and the raping is done by religious extremists, so...
I think it's interesting to note the lack of significant correlation between either IQ or calibration (as proxies for rationality and/or sanity) and various beliefs such as many worlds. It's a common sentiment here that beliefs are a gauge of intelligence and rationality, but that doesn't seem to be true.
It would be interesting to include a small set of IQ-test-like questions, to confirm that there is a huge correlation between IQ and correct answers in general.
Well, in my view, some details of implementation of a computation are totally indiscernible 'from the inside' and thus make no difference to the subjective experiences, qualia, and the like.
I definitely don't care whether there's 1 me, 3^^^3 copies of me, or 3^^^^3, or 3^^^^^^3, or an actual infinity (as the physics of our universe would suggest), where the copies think and perceive everything exactly the same over their lifetimes. I'm not sure how counting copies as distinct would cope with an infinity of copies anyway. You have torture of inf persons vs dust specks in inf*3^^^3 persons; then what?
Albeit it would be quite hilarious to see someone here pick up the idea and start arguing that because they're 'important', there must be a lot of copies of them in the future, and thus they are rightfully a utility monster.
Yeah, clicked the wrong button.
Well, I'm not sure what the point is then, or what you're trying to infer from it.
Well, within the 3^^^3 people you have every single possible brain replicated a gazillion times already (there are only so many ways you can arrange the atoms in the volume of a human head that are sufficiently distinct as to compute something subjectively different, after all, and the number of such arrangements is unimaginably smaller than 3^^^3).
I don't think that, e.g., I must massively prioritize the happiness of a brain upload of me running on multiply redundant hardware (which subjectively feels the same as if it were running as a single instance; it doesn't feel any stronger because there are more 'copies' of it running in perfect unison, and it can't even tell the difference. It won't affect the subjective experience if the CPUs running the same computation are slightly physically different).
edit: also, again, pseudomath, because you could have C(dustspeck, n) = 1 - 1/(n+1); your property holds, but the function is bounded, so if C(torture, 1) = 2 then you'll never exceed it with dust specks.
Seriously, you people (the LW crowd in general) need to take more calculus or something before your mathematical intuitions become in any way relevant to anything whatsoever. It does feel intuitive that with your epsilon it's going to keep growing without limit, but that's simply not true.
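To spell the counterexample out (just the toy function above, nothing deeper):

```python
from fractions import Fraction

# C(dustspeck, n) = 1 - 1/(n+1): strictly increasing in n, yet bounded above by 1.
# If C(torture, 1) = 2, no number of dust specks ever exceeds it.

def c_dustspecks(n):
    return 1 - Fraction(1, n + 1)

C_TORTURE = 2

for n in (1, 1000, 10**6, 10**100):
    assert c_dustspecks(n) < c_dustspecks(n + 1)   # every extra speck adds some badness
    assert c_dustspecks(n) < C_TORTURE             # yet the total never reaches the torture

print(float(c_dustspecks(1000)))  # 0.999..., creeping toward 1 but never past it
```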
don't know the exact values of N and T
For one thing, N=1, T=1 trivially satisfies your condition...
I'm not sure what you mean by this.
I mean, suppose that you've got yourself a function that takes in a description of what's going on in a region of spacetime and spits out a real number saying how bad it is.
Now, that function can do all sorts of perfectly reasonable things that could make it asymptotic for large numbers of people; for example, it could be counting the distinct subjective experiences in there (otherwise a mind upload running on heavily redundant hardware is a utility monster, despite having a subjective experience identical to the same upload running once; that's much sillier than the usual utility monster, which at least feels much stronger feelings). This would impose a finite limit (for brains of finite complexity).
One thing that function can't do is have the general property that f(a union b) = f(a) + f(b), because then we could just subdivide our space into individual atoms, none of which are feeling anything.
Now, do you have any actual argument as to why the 'badness' function computed over a box containing two persons with a dust speck is exactly twice the badness of a box containing one person with a dust speck, all the way up to very large numbers (by which point you may even have exhausted the number of possible distinct people)?
I don't think you do. This is why this stuff strikes me as pseudomath: you don't even state your premises, let alone justify them.
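To make the non-additivity concrete, here is a toy sketch of the kind of function I have in mind, with "experience-states" crudely modelled as strings; it's an illustration of non-additivity, not a proposal for the real badness function:

```python
# Toy badness function over a "region" modelled as a list of experience-states.
# It counts *distinct* experiences, so exact duplicates add nothing.

def badness(region):
    """region: iterable of hashable experience-states; badness = number of distinct ones."""
    return len(set(region))

one_person_speck = ["speck-in-eye #1"]
two_different    = ["speck-in-eye #1", "speck-in-eye #2"]
many_exact_copies = ["speck-in-eye #1"] * 10**6   # a million exact duplicates

print(badness(one_person_speck))    # 1
print(badness(two_different))       # 2 -- additive while the experiences differ
print(badness(many_exact_copies))   # 1 -- exact duplicates don't add up

# In general badness(a + b) <= badness(a) + badness(b): subadditive, not additive,
# and bounded above by the (finite) number of possible distinct experiences.
```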
That strikes me as a deliberate setup for a continuum fallacy.
Also, why are you so sure that the number of people increases suffering linearly, even for very large numbers? What is a number of people, anyway?
I'd much prefer to have a [large number of exact copies of me] experience 1 second of headache than for one me to suffer it for a whole day, because those copies don't have any mechanism that could compound their suffering. They aren't even different subjectivities. I don't see any reason why a hypothetical mind upload of me running on multiply redundant hardware should be a utility monster if it can't even tell subjectively how redundant its hardware is.
Some anaesthetics do something similar, preventing any new long-term memories, and people have no problem with taking those for surgery. Something is still experiencing pain, but it's not compounding into anything really bad (unless the drugs fail to work, or unless some form of long-term memory still works). A real example of a very strong preference for N independent experiences of 30 seconds of pain over 1 experience of 30*N seconds of pain.
Torturing a person for 1 millisecond is not necessarily even a possibility. It doesn't make any sense whatsoever; in 1 millisecond no interesting feedback loops can even close.
If we accept that torture is some class of computational processes that we wish to avoid, the badness could definitely be eating up your 3^^^3s in one way or another. We have absolutely zero reason to expect linearity when some (however unknown) properties of a set of computations are involved. And computational processes are not infinitely divisible into smaller lengths of time.
I thought the original point was to focus just on the inconvenience of the dust, rather than simply positing that out of 3^^^3 people who were dust-specked, one person would've gotten something worse than 50 years of torture as a consequence of the dust speck. The latter is not even an ethical dilemma; it's merely an (entirely baseless but somewhat plausible) assertion about the consequences of dust specks in eyes.
Well, my point was that you can't expect the same rate of advances from some IQ breeding programme that we get when breeding for traits that arise via loss-of-function mutations.
They seem to be replicating.
They don't seem to be replicating very well...
Sure, there's a huge genetic component, but almost none of it is "easily identified".
Generally you can expect that parameters such as, e.g., initial receptor density at a specific kind of synapse would be influenced by multiple genes and have an optimum, where either a higher or a lower value is sub-optimal. So you can easily get one of the shapes from the bottom row in
http://en.wikipedia.org/wiki/Correlation_and_dependence#/media/File:Correlation_examples2.svg
i.e. little or no correlation between IQ and that parameter (and little or no correlation between IQ and any one of the many genes influencing said parameter).
edit: that is to say, for example, if we have an allele which slightly increases the number of receptors on a synapse between some neuron type A and some neuron type B, that can either increase or decrease intelligence depending on whether the activation of Bs by As would otherwise be too low or too high (as determined by all the other genes). So this allele affects intelligence, sure, but not in a simple, easy-to-detect way.
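A quick simulation of that shape, under a made-up model (20 additive alleles setting a trait with an optimum), just to show how a trait can matter for IQ while showing near-zero linear correlation with it:

```python
import random

# Toy model: a trait (e.g. receptor density) is the sum of many small-effect alleles,
# and "IQ" is highest when the trait is near an optimum -- an inverted-U shape.

random.seed(0)
N = 10_000
trait = [sum(random.choice((0, 1)) for _ in range(20)) for _ in range(N)]   # 20 additive alleles
optimum = 10
iq = [100 - (t - optimum) ** 2 + random.gauss(0, 2) for t in trait]          # inverted U plus noise

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(pearson(trait, iq))   # close to 0: the trait matters, but linear correlation misses it
```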
Well, mostly everyone has heard of Xenu, for some value of "heard of", so I'm not sure what your point is.
So the poll would still be very useful for demonstrating that the Basilisk is a highly non-central and peripheral topic.
Yeah. So far, though, it is so highly non-central and so peripheral that you can't even add a poll question about it.
edit:
(At this point, isn't it literally exactly one person, Eliezer?)
Roko, someone who claimed to have had nightmares about it... who knows whether they still believe, and who knows who else believes? Scientology is far older (and far bigger), and there have been a lot of insider leaks, which is where we know the juicy stuff from.
As for how many people believe in the "Basilisk": given the various "hint hint, there's a much more valid version out there but I won't tell it to you" type statements, and the repeat objections along the lines of "that's not a fair description of the Basilisk, it makes a lot more sense than you make it out to be", it's a bit slippery what we even mean by Basilisk.
Before one could even consider the utility of a human's (or a nematode's) existence
No. Utility is a thing agents have.
'one' in that case refers to an agent who's trying to value feelings that physical systems have.
I think there's some linguistic confusion here. As an agent valuing that there's no enormous torture camp set up in a region of space, I'd need to have a utility function over that space, one which gives the utility of that space.
Well, presumably one who's joining a doomsday cult is most worried about the doomsday (and would be relieved if it were just a bullshit doomsday cult). So wouldn't that be a case of jokes minimizing the situation as it exists in the speaker's mind? The reason that NORAD joke of yours is funny to either of us is that we both believe it can actually cause an extreme catastrophe, which is uncomfortable for us. Why wouldn't a similar joke referencing a false doomsday be funny to someone who believes in said false doomsday as strongly as we believe in nuclear weapons?
Why the ellipsis?
To indicate that a part was omitted.
Well, a doomsday cult is not only a doomsday cult but also kinda looks enough like a doomsday cult, too. Of the people joining something that kinda looks enough like a doomsday cult, some are joining an actual doomsday cult. Those people: do they, in your model, know that they're joining a doomsday cult, so they can avoid joking about it?
If people are scared that they're doing something potentially life-ruining
...
I'd expect the number of people who joined doomsday cults and made jokes like Alicorn's to be approximately zero.
I would be very surprised if this were true. My experience mirrors what Jiro said: people tend to joke about things that scare them. Of course, some would clam up (keep in mind that a clammed-up individual may have joked about it before and found the joke not well received, or may be better able to evaluate the lack of humour in such jokes).
Well, you start with a set containing Google, McDonald's, and every other organization one could be joining, inclusive of all doomsday cults, and then you end up with a much smaller set of organizations, still inclusive of all doomsday cults. Which ought to boost the probability of them joining an actual doomsday cult, even if said probability arguably remains below 0.5 or 0.9 or whatever threshold of credence.
That trades on information, even if you don't know it, that the speaker expects you to know. The speaker believes not only that they're not joining a cult but that it's obvious they're not, or at most clear after a moment's thought; otherwise it wouldn't be funny.
Well, if the speaker had gotten a job at Google or McDonald's, it would be far more obvious that they're not joining a doomsday cult... yet it seems to me that they wouldn't then be joking out of the blue that it's a doomsday cult. It's when it is a probable doomsday cult that you try to argue it isn't one by hoping that others laugh along with you.
Well, if someone ironically says that they are "dropping out of school to join a doomsday cult" (and they are actually dropping out of school to join something), they've got to be joining something that has something to do with a doomsday, rather than, say, another school, or a normal job, or the like.
Well, if someone literally said "I am joining a very cult-like group that I don't consider to be a cult", wouldn't it be much more likely that they are in fact joining a cult than the baseline probability of such? (Which is very low: only a very small fraction of people are, at any given moment, literally in the process of joining a cult.)
It's that this ironic statement acknowledges that the group is very much like a cult, or is described as a cult, and that what they're doing is very much like what a person joining a cult does, but for some reason they don't believe it to be a cult.
"I joined a cult!" [light, smiling]
Well, context matters a lot: if someone has dropped out of school and moved to a foreign country, there's a lot of non-joke content here. I mean, should we consider the "dropping out of school" part to be a joke too?
Well, how should a rational person update their probability of you joining a cult if you said you did?
Yeah. My point, though, is that it's about the relative probability of such a remark between those joining a doomsday cult and those not joining one (who are unlikely to pull that utterance out of the space of possible utterances at all, let alone say it).
Well, the way I would put it: someone who's getting a job at McDonald's is exceedingly unlikely to say out of the blue that they're joining a doomsday cult, while someone who's joining a doomsday cult is pretty likely to get told that they're joining a doomsday cult at one point or another (or to anticipate such a remark), and thus doesn't have to be uttering something irrelevant out of the blue.
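Putting invented numbers on that relative-probability argument (a sketch, nothing more):

```python
# Toy Bayes update on hearing "I'm dropping out of school to join a doomsday cult".
# All probabilities are invented for illustration.

p_cult = 0.001                # prior: tiny fraction of people are joining a doomsday cult
p_remark_given_cult = 0.05    # joiners plausibly joke about / anticipate the accusation sometimes
p_remark_given_not  = 0.0001  # non-joiners almost never pull this utterance out of thin air

prior_odds = p_cult / (1 - p_cult)
likelihood_ratio = p_remark_given_cult / p_remark_given_not
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(posterior)   # ~0.33 -- still probably not a cult, but hugely boosted from 0.001
```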
The reason that saying "I'm dropping out of school to join a doomsday cult" works is that people who are really joining a doomsday cult wouldn't say that.
People who are not joining a doomsday cult wouldn't say that either.
Also, intuitively I want to be able to use anthropic reasoning to say "there is only a tiny chance that the universe would have condition X, but I'm not surprised by X because without X observers such as us won't exist"
Hmm, that's an interesting angle on the issue, I didn't quite realize that was the motivation here.
I would be surprised by our existence if that were the case, and not further surprised by the observation of X (because I already observed X by way of perceiving my own existence).
Let's say I remember that there was a strange, surprising sign painted on a wall, and I go by the wall and see that sign. I am surprised that the sign is on the wall at all, but I am not surprised that I am seeing it (because I can perform an operation in my head that implies the existence of the sign: my memory tells me I saw it before). Same with existence: I am surprised we exist at all, but I am not surprised when I observe something necessary for my existence, because I could have derived it from prior observations.
Well, being alive would surprise me, but not the colour of the ball. Essentially what happens is that the internal senses (e.g. perceiving one's own internal monologue) end up sensing the ball's colour (by way of the high explosive).
I don't think it can be closed. I mean, when one derives that level of heroic smugness from something as little as a few lightbulbs... a lot of people add a lot of lights just because they like it brighter, which is ultimately what it boils down to if you go with a qualitative 'more light is better for mood'.
It was meant to be humorous. As in, with that sort of thinking, he's lucky the flu is common enough that he'll get the vaccine and won't get the flu. Though I was thinking of things like measles and other anti-vaxxer fodder, where precisely because of the use of the vaccine the disease risk seems very low, and it might even be the case in some instances that an agent considering a very narrow scope of consequences wouldn't vaccinate.
Another problem is that vaccines are most advantageous when everyone who can be vaccinated is vaccinated, but at that point it is selfishly better for each individual not to vaccinate. Since we don't have identical or even similar source code, you can't solve this by playing with the notion of consequence and pretending that most people's decisions will track yours; you have to group together and implement a policy applying to everyone.
Also, by the way, some governments seem to under-vaccinate (possibly for the same selfishness reasons), and it's best to follow WHO recommendations. E.g. in my country they don't vaccinate little kids against rotavirus, a disease that not only hurts the child but is so annoying for the parents that vaccination has a great pay-off: while on the individual level the pay at your job may not be lower, at the country level nobody's going to compensate for the productivity lost to dealing with a sick child. And almost everyone gets rotavirus, more than once. And, generally, if you earn more than the average wage you're probably interested in more vaccinations.
having an insurance policy increases the expected value of receiving a flu shot, as many insurance companies will completely cover the cost of receiving a flu shot.
That, and a little pondering why, is all you ever needed to know.
Actually estimating the utility of a vaccine is very difficult for any individual who isn't a complete shut-in interacting with nobody (but then such people won't get sick in the first place), or a complete psychopath who either has no job or has a job unusually resistant to damage from coworkers' absences (because even a psychopath would generally not want to get their co-workers sick). This is because a vaccine massively affects the ability of the virus to spread to other people, and in the early phase of an epidemic one person infects more than one other person. It matters a great deal to an insurance company (or a government), so they can invest a lot of man-hours into modelling the spread of the virus.
It is fortunate that the cost of the vaccine is so low and the illness so common that you arrive at the correct decision regardless.
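To gesture at why the spread term dominates the calculation, here is a toy branching-process sketch (the reproduction number and the generation cut-off are invented; a real epidemiological model would be far more elaborate):

```python
# Toy branching-process arithmetic: if each case infects R others on average,
# the chain seeded by one infection has an expected size of 1 + R + R^2 + ...
# (truncated at some number of generations). Numbers are illustrative only.

def expected_chain_size(R, generations):
    return sum(R ** g for g in range(generations + 1))

R = 1.3            # early-epidemic reproduction number (made up)
generations = 10   # arbitrary cut-off

total = expected_chain_size(R, generations)
print(total)        # ~56 expected cases in the chain seeded by one infection
print(total - 1)    # cases beyond your own that never happen if your infection never happens

# The individual only "sees" the 1; the insurer or government sees the whole chain,
# which is why they can justify the modelling man-hours that an individual cannot.
```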
Well, let's consider, say, electricity-generating plants that convert coal into electricity, in an isolated country. It is absolutely normal, and there's nothing whatsoever mysterious about it, that some fraction of the generating capacity would generally go unused. It's when you start abstracting the generating capacity out as a "good" traded on the market that it becomes mysterious why it would be "unsold".
If you look at jobs, we have extremely severe discrimination based on origin (the needs of citizens trump the needs of foreigners in almost all circumstances), on top of the intrinsically very high cost of moving around, learning a foreign language, or learning a different skill, and regions are thus largely isolated. Labour is essentially a non-movable resource. If you have farmers stuck in Antarctica, they will never be able to compete with farmers somewhere less hostile to farming, and they'll be unemployed and actively prevented from working anywhere else (because that sounds like it might drop the wages of the workers in those other regions). And they will be unable to price-cut anyone, because the fertilizer, fuel, and so on still cost the same for these guys, and their produce will cost more than anyone else's.
Yes, that's why I said it was a bit self-contradictory. The point is, you've got to have two confidence levels involved that aren't consistent with each other, one being lower than the other.
Well said. The way I'd put it, the hero jumps into the cockpit and lands the plane in a storm without once asking whether there's a certified pilot on board. It is "Heroic Responsibility" because it isn't responsible without qualifiers. Nor is it heroic; it's just a glitch due to the expected amount of getting laid, times your primate brain not knowing about birth control, times the tiny probability of landing the plane, yielding >1 expected surviving copy of your genes. Or, more likely, a much cruder calculation, where the impressiveness appears greater than the chance of success seems small, against a background of severe miscalibration from living in a well-tuned society.
To say that you're underconfident is to say that you believe you're correct more often than you believe yourself to be correct. The claim of underconfidence is not a claim underconfident people tend to make. Underconfident people usually don't muster enough confidence about their tendency to be right to conclude that they're underconfident.