Posts

Being a Realist (even if you believe in God) 2012-05-17T14:07:44.053Z · score: 4 (31 votes)

Comments

Comment by scav on Cognitive Biases due to a Narcissistic Parent, Illustrated by HPMOR Quotations · 2014-06-02T09:46:14.311Z · score: 3 (3 votes) · LW · GW

Hmm. Would I be wildly wrong in describing Mrs Bennet (Elizabeth's mother) as a terrible narcissist, though? In which case Elizabeth should be more likely to be a narcissist herself, or a people-pleaser. Maybe she got lucky, because she's hardly either. Although her sisters, well...

Good fiction often rings true to real life, but it's no more than a bit of fun to analyse it as though it were a case study of something that actually happened. Still, I'm not against fun. I bet it was fun for Jane Austen to write the character of Mr Collins. Let's see your science explain him ;)

Comment by scav on [deleted post] 2014-04-03T14:20:23.509Z

For me it was the least plausible part. I think if the major obstacle to living where you want is the hassle of carting all your stuff around, the most efficient answer surely isn't living in a shipping crate with special content-bracing furniture.

Makes more sense to me to just not bother with "owning" a lot of matter. If every kind of material object you need is available anywhere, all you need to bring with you when you move house is your information (books, music, family pictures, decor configuration for your living space). There's no particular reason for that to exist in a physical form.

And if you are serious about making a long-term sustainably growing economy, you have to have most of that growth be information (knowledge, art) rather than ever-growing consumption of hard-limited resources.

Still trying to decide whether it would be more painful to learn macroeconomics than experiment with BDSM.

Comment by scav on Rationality Quotes March 2014 · 2014-03-12T16:33:40.388Z · score: 4 (6 votes) · LW · GW

"I have frequently detected myself in such kind of mistakes," said Elinor, "in a total misapprehension of character in some point or other: fancying people so much more gay or grave, or ingenious or stupid than they really are, and I can hardly tell why or in what the deception originated. Sometimes one is guided by what they say of themselves, and very frequently by what other people say of them, without giving oneself time to deliberate and judge."

Jane Austen, Sense and Sensibility

Comment by scav on Lifestyle interventions to increase longevity · 2014-03-10T13:36:17.304Z · score: 3 (3 votes) · LW · GW

Also, nobody knows whether people currently being cryonically preserved by current methods can ever be thawed and healed or uploaded into an emulator. It would suck to die and get frozen a year before they realise they were doing it all wrong.

Comment by scav on White Lies · 2014-02-11T09:34:01.676Z · score: 0 (0 votes) · LW · GW

It's automatically hazardous to give someone a false map of the world. If you do it knowingly you have the responsibility to make sure no harm comes of it. Even if you take that responsibility seriously, and are competent to do so, taking it secretly without consent is an ethical problem.

My take on this:

  • Few people take that responsibility seriously or are competent to do so, or are even aware that it exists.
  • Most of the time people's intuitions about minor well-intended deceptions are sufficient to avoid trouble.
  • If you call someone a liar, that has a strong negative connotation and social implications for good reason. We didn't evolve the capacity for deception primarily to hold surprise birthday parties for each other.

There are no dirty words, but there are inaccurate ones. Use with care.

Comment by scav on White Lies · 2014-02-11T09:20:01.791Z · score: 0 (0 votes) · LW · GW

Which is why I said it was kind. It's still not necessarily a reasonable expectation.

Anyway, the hypothetical preference to be lied to is a bit suspicious, epistemologically. Let's distinguish it from a preference to never hear of anything you don't like, which is on its face unrealistic.

How would you experience getting your preference to be lied to without thereby knowing the unpleasant truth that you wanted to avoid? You want to know but you want to pretend the other person doesn't know that you know? It's a bit crazy.

How would you safely determine that someone prefers to be lied to, without exposing them to the truth they might not want? This isn't trivial: if you lie to someone who doesn't prefer it, I hope we can agree that's worse than the other way round.

Comment by scav on White Lies · 2014-02-09T16:31:08.362Z · score: 0 (0 votes) · LW · GW

That's kind. But not all our preferences are reasonable expectations.

Anyway, maybe I weight things differently or it was a very short sucky play, but the downsides are still pretty compelling.

Comment by scav on White Lies · 2014-02-09T16:28:01.707Z · score: 1 (1 votes) · LW · GW

It's a dodgy metaphor at best anyway, but 'point' taken. :)

Comment by scav on White Lies · 2014-02-08T17:38:32.317Z · score: 2 (2 votes) · LW · GW

> The breakup was a good thing for other reasons, but I still regret not lying to her about what I thought of the play.

Why? Best case scenario is she keeps taking you to unenjoyable plays until you find you have to end the relationship yourself anyway or finally tell her the truth. Out of all the things in a relationship whose end was "a good thing for other reasons", one argument about whether a play was any good seems like a trivial thing to regret.

I can't favour lies as such. I am however on board with people honestly communicating the connotation that they care how you feel at the expense of the denotational literal meaning of their words.

In lies, the intention is not to soften but to deceive. So I don't even like the phrase "white lie". It's like, if you're going to stab me in the back, is it better if it's with a white knife?

Comment by scav on Rationality Quotes February 2014 · 2014-02-04T12:55:19.870Z · score: -1 (3 votes) · LW · GW

Voted down because its connection to rationality is so obscure I have to take it at face value, and at face value it appears to be factually incorrect in several ways. IOW, BS.

Comment by scav on Doubt, Science, and Magical Creatures - a Child's Perspective · 2014-01-07T11:56:10.565Z · score: 5 (5 votes) · LW · GW

I'm not going to criticise your decision, especially with regard to the social situation at school, which I can't speculate about. But I doubt it's more interesting to believe in the weird collection of junk memes that Santa Claus has become.

Maybe it's just me, but I think the truth is always more interesting, because there's always more detail in it. Fake things are ultimately very boring; you poke at them a bit and there's nothing there. Flying reindeer are just pictures of approximately deer-like animals (usually more like red deer) positioned above the ground. Real reindeer are pretty amazing.

Comment by scav on Luck I: Finding White Swans · 2013-12-16T15:33:33.853Z · score: 0 (0 votes) · LW · GW

Congratulations - now you are less wrong about that ;)

Comment by scav on The Limits of Intelligence and Me: Domain Expertise · 2013-12-10T16:58:08.934Z · score: 0 (0 votes) · LW · GW

As to the teacher, yeah that sounds plausible. If Chris wants to satisfy our curiosity he can expand a little on how that conversation went. In my experience, teachers can really be dicks about that kind of thing.

AFAIK, integers (including negative integers) occur in nature (e.g. electrical charge), as do complex numbers. Our everyday experience isn't an objective measure of how natural things are, because we know less than Jon Snow about nearly everything.

I'd bet any aliens who get here know more than us about the phenomena we currently describe using general relativity and quantum mechanics. If they do all that without negative or complex numbers I'll be hugely surprised. But then I'd be super surprised they got here at all :)

Comment by scav on The Limits of Intelligence and Me: Domain Expertise · 2013-12-09T14:05:06.642Z · score: 1 (1 votes) · LW · GW

I expect the math teacher wasn't making any kind of philosophical argument such as "do any numbers exist, and if so in what sense?" There is a different connotation, for my idiolect anyway, between "no such thing as X" and "X does not exist".

It's possible that the only numbers that exist are the complex numbers, and that more familiar subsets such as the hilariously named "real" and "natural" numbers are invented by humans. I appreciate that this story is usually told the other way round.

Comment by scav on Rationality Quotes December 2013 · 2013-12-05T10:54:20.248Z · score: 0 (0 votes) · LW · GW

Yes?

Comment by scav on 2013 Less Wrong Census/Survey · 2013-11-27T12:38:59.856Z · score: 1 (1 votes) · LW · GW

Well, I can't find any use for the word supernatural myself, even in connection with God. It doesn't seem to mean anything. I can imagine discussing God as a hypothetical natural phenomenon that a universe containing sentient life might have, for example, without the s word making any useful contribution.

Maybe anything in mathematics that doesn't correspond to something in physics is supernatural? Octonions perhaps, or the Monster Group. (AFAIK, not being a physicist or mathematician)

Comment by scav on 2013 Less Wrong Census/Survey · 2013-11-26T14:52:26.203Z · score: 2 (2 votes) · LW · GW

Heh. I also didn't care about the $60, and realised that taking the time to work out an optimal strategy would cost more of my time than the expected value of doing so.

So I fell back on a character-ethics heuristic and cooperated. Bounded rationality at work. Whoever wins can thank me later for my sloth.

Comment by scav on 2013 Less Wrong Census/Survey · 2013-11-26T14:46:05.165Z · score: 0 (0 votes) · LW · GW

If everything in your universe is a simulation, then the external implementation of it is at least extra-natural from your point of view, not constrained by any of the simulated natural laws. So you might as well call it supernatural if you like.

If you include all layers of simulation all the way out to base reality as part of the one huge natural system, then everything is natural, even if most of it is unknowable.

Comment by scav on 2013 Less Wrong Census/Survey · 2013-11-26T14:39:45.262Z · score: 19 (19 votes) · LW · GW

Fun as always. Looking back at my answers, I think I'm profoundly irrational, but getting more aware of it. Oh well.

Comment by scav on Yes, Virginia, You Can Be 99.99% (Or More!) Certain That 53 Is Prime · 2013-11-08T12:08:34.533Z · score: 2 (2 votes) · LW · GW

True. I suppose I was unconsciously thinking (now there's a phrase to fear!) about improbable dangerous events, where it is much more important not to underestimate P(X). If I get it wrong such that P(X) is truly only one in a trillion, then I am never going to know the difference and it's not a big deal, but if P(X) is truly on the order of P(I suck at maths) then I am in serious trouble ;)

Especially given the recent evidence you have just provided for that hypothesis.

Comment by scav on Yes, Virginia, You Can Be 99.99% (Or More!) Certain That 53 Is Prime · 2013-11-07T14:51:25.151Z · score: 4 (6 votes) · LW · GW

I've never been completely happy with the "I could make a million similar statements and be wrong once" test. It seems, I dunno, kind of a frequentist way of thinking about the probability that I'm wrong. I can't imagine making a million statements, and I have no way of knowing what it's like to feel confidence about a statement to an accuracy of one part per million.

Other ways to think of tiny probabilities:

(1) If probability theory tells me there's a 1 in a billion chance of X happening, then P(X) is somewhere between 1 in a billion and P(I calculated wrong), the latter being much higher.

If I were running on hardware that was better at arithmetic, P(I calculated wrong) could be got down way below 1 in a billion. After all, even today's computers do billions of arithmetic operations per second. If they had anything like a one-in-a-billion failure rate per operation, we'd find them much less useful.

(2) Think of statements like P(7 is prime) = 1 as useful simplifications. If I am examining whether 7 is prime, I wouldn't start with a prior of 1. But if I'm testing a hypothesis about something else and it depends on (among other things) whether 7 is prime, I wouldn't assign P(7 is prime) some ridiculously specific just-under-1 probability; I'd call it 1 and simplify the causal network accordingly.
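Point (1) above can be sketched numerically. A minimal toy calculation (the specific numbers here are my own illustrative assumptions, not from the comment):

```python
# Toy model: my real credence in X should mix the theoretical value
# with the chance that I botched the derivation.
p_theory = 1e-9      # what probability theory says P(X) is
p_error = 1e-4       # assumed chance my calculation is wrong
p_if_wrong = 0.5     # charitable guess at P(X) given a botched calculation

# Law of total probability over "did I calculate correctly?"
p_effective = (1 - p_error) * p_theory + p_error * p_if_wrong
print(p_effective)  # dominated by the error term, not the theory
```

Under these assumptions the one-in-a-billion figure is almost irrelevant: the error term exceeds it by more than four orders of magnitude, which is exactly the point about P(I calculated wrong) being much higher.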

Comment by scav on Rationality Quotes November 2013 · 2013-11-05T17:39:36.936Z · score: 2 (2 votes) · LW · GW

From the point of view of your genes, "likely to reproduce" and "beneficial" are exactly the same thing. That's trivially true.

Also not particularly interesting even if true: crazy beliefs that get you killed or prevent you from breeding have to spread non-parentally. They don't have to be particularly persuasive or virulent, there just has to be some other mechanism (e.g. state control of education, military discipline, enjoyable but idiotic forms of mass entertainment) to spread them.

The prevalence of these mechanisms doesn't even depend on the people spreading the memes adopting them themselves: those who promote crazy beliefs for others to hold can have an economic motive for doing so, just as drug dealers have a motive to sell heroin rather than use it.

Comment by scav on Rationality Quotes November 2013 · 2013-11-05T17:28:24.829Z · score: 4 (4 votes) · LW · GW

I don't think there's any way to discriminate between crazy things your mum believes and crazy things the man on the street corner believes.

I also think the virulence of a meme complex, like the virulence of a virus, is very dependent on the context i.e. the population it is introduced to and the other memes it competes with in that population.

"what do you think you know, and how do you think you know it?" is snappy enough to be "virulent" and, I think, not too harmful to the individual host.

Comment by scav on Looking for opinions of people like Nick Bostrom or Anders Sandberg on current cryo techniques · 2013-10-20T11:10:34.109Z · score: 1 (1 votes) · LW · GW

Oh, OK. I get you. I don't describe myself as a patternist, and I might not be what you mean by it. In any case I am not making the first of those claims.

However, it seems possible to me that a sufficiently close copy of me would think it was me, experience being me, and would maybe even be more similar to me as a person than biological me of five years ago or five years hence.

I do claim that it is theoretically possible to construct such a copy, but I don't think it is at all probable that signing up for cryonics will result in such a copy ever being made.

If I had to give a reason for thinking it's possible in principle, I'd have to say: I am deeply sceptical that there is any need for a "self" to be made of anything other than classical physical processes. I don't think our brains, however complex, require in their physical construction anything more mysterious than room-temperature chemistry.

The amazing mystery of the informational complexity of our brains is undiminished by believing it to be physically prosaic when you reduce it to its individual components, so it's not like I'm trying to disappear a problem I don't understand by pretending that just saying "chemistry" explains it.

I stand by my scepticism of the self as a single indivisible entity with special properties that are posited only to make it agreeable to someone's intuition, rather than because it best fits the results of experiment. That's really all my post was about: impatience with argument from intuition and argument by hand-waving.

I'll continue to doubt the practicality of cryonics until they freeze a rat and restore it 5 years later to a state where they can tell that it remembers stimuli it was taught before freezing. If that state is a virtual rat running on silicon, that will be interesting too.

Comment by scav on Looking for opinions of people like Nick Bostrom or Anders Sandberg on current cryo techniques · 2013-10-18T19:20:20.870Z · score: 1 (1 votes) · LW · GW

Just for exercise, let's estimate the probability of the conjunction of my claims.

claim A: I think the idea of a single 'self' in the brain is provably untrue according to currently understood neuroscience. I do honestly think so, therefore P(A) as close to 1.0 as makes no difference. Whether I'm right is another matter.

claim B: I think a wildly speculative vague idea thrown into a discussion and then repeatedly disclaimed does little to clarify anything. P(B) approx 0.998 - I might change my mind before the day is out.

claim C: The thing I claim to think in claim B is in fact "usually" true. P(C) maybe 0.97 because I haven't really thought it through but I reckon a random sample of 20 instances of such would be unlikely to reveal 10 exceptions, defeating the "usually".

claim D: A running virtual machine is a physical process happening in a physical object. P(D) very close to 1, because I have no evidence of non-physical processes, and sticking close to the usual definition of a virtual machine, we definitely have never built and run a non-physical one.

claim E: You too are a physical process happening in a physical object. P(E) also close to 1. Never seen a non-physical person either, and if they exist, how do they type comments on lesswrong?

claim F: Nobody knows enough about the reality of consciousness to make legitimate claims that human minds are not information-processing physical processes. P(F) = 0.99. I'm pretty sure I'd have heard something if that problem had been so conclusively solved, but maybe they were disappeared by the CIA or it was announced last week and I've been busy or something.

P(A ∧ B ∧ C ∧ D ∧ E ∧ F) ≈ 0.96.
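The back-of-envelope conjunction checks out; a quick verification using the comment's own estimates (with A, D and E rounded to 1.0, as the comment effectively does):

```python
# Multiply the six claim probabilities estimated in the comment.
probs = {"A": 1.0, "B": 0.998, "C": 0.97, "D": 1.0, "E": 1.0, "F": 0.99}

p_all = 1.0
for p in probs.values():
    p_all *= p

print(round(p_all, 2))  # 0.96
```

The exact product is about 0.958, so "approx 0.96" is fair.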

The amount of money I'd bet would depend on the odds on offer.

I fear I may be being rude by actually answering the question you put to me instead of engaging with your intended point, whatever it was. Sorry if so.

Comment by scav on Looking for opinions of people like Nick Bostrom or Anders Sandberg on current cryo techniques · 2013-10-18T16:05:05.235Z · score: 0 (2 votes) · LW · GW

> My best bet is that the self is a single physical thing, a specific physical phenomenon, which forms at a definite moment in the life of the organism, persists through time even during unconsciousness, and ceases to exist when its biological matrix becomes inhospitable.

How much do you want to bet on the conjunction of all those claims? (hint: I think at least one of them is provably untrue even according to current knowledge)

> That is just a wild speculation, made for the sake of concreteness.

I don't think it supplied the necessary amount of concreteness to be useful; this is usual for wild speculation. ;)

> The physical object is the reality, the virtual machine is just a concept.

A running virtual machine is a physical process happening in a physical object. So are you.

> This is false to the robust reality of consciousness

Well, nobody actually knows enough about the reality of consciousness to make that claim. It may be that it is incompatible with your intuitions about consciousness. Mine too, so I haven't any alternative claims to make in response.

Comment by scav on A Voting Puzzle, Some Political Science, and a Nerd Failure Mode · 2013-10-11T11:42:45.600Z · score: 6 (6 votes) · LW · GW

I find the conclusion that the US would be better off with some form of proportional representation pretty compelling actually, and I don't think it's so implausible that it would make a positive difference.

The difference it makes in Europe (compared to the UK for example) seems to be that the smaller parties with agendas the median voter doesn't care much about still get a voice in parliament. It's worth it for the Greens or the Pirate party to campaign for another 1% of the vote, because they get another 1% of the seats, instead of nothing.

It should be a better marketplace of ideas; although a few major parties still keep most of the power, they have more incentive to accommodate or adopt new ideas. I suppose the presence of the minor parties increases the visibility of multiple policy axes, forcing the major parties to compete for the median voter along each axis.

Having said that, it still isn't very relevant to the thrust of the post, so the decision to footnote it was probably correct.

Comment by scav on A Voting Puzzle, Some Political Science, and a Nerd Failure Mode · 2013-10-10T10:44:07.231Z · score: 1 (1 votes) · LW · GW

Thanks for identifying Duverger's Law. I had never heard of it, but I had informally grasped its application in UK politics.

Comment by scav on Rationality Quotes October 2013 · 2013-10-08T08:07:12.598Z · score: 2 (4 votes) · LW · GW

Depressing but plausible :(

I suspect "the way they are presented in the popular media" is crafted with that in mind.

Comment by scav on Rationality Quotes October 2013 · 2013-10-07T11:39:36.387Z · score: 6 (6 votes) · LW · GW

Corollary: all organisations eventually contain sub-competent people. Design protocols accordingly.

Comment by scav on Rationality Quotes October 2013 · 2013-10-07T11:36:07.549Z · score: 0 (2 votes) · LW · GW

Citation, or at least a clear example, needed. I can probably construct two policy alternatives, and predict which will be attractive to people who identify with a given political tribe. Then I suppose I get to call one of those options the "stupid" one based on my own value system.

Please tell me that isn't the sort of thing you mean.

I have met people with what I consider to be very irrational political views (in that they are little more than clusters of rote debating points never subjected to analysis). Outside of the well-worn habitual responses their politics would dictate they regurgitate, I have no idea how they would choose on an issue they had never encountered before.

Maybe stupidly (because they aren't in the habit of reflective thought), but maybe less so (because without a knee-jerk political reaction ready to hand, they might take a few seconds to think).

I will go so far as to agree that in too many cases, simple answers will be favoured over complex questions, and instant gratification will be favoured over longer-term advantage.

Comment by scav on The genie knows, but doesn't care · 2013-09-09T19:38:38.416Z · score: 1 (1 votes) · LW · GW

It's still probably premature to guess whether friendliness is provable when we don't have any idea what it is. My worry is not that it wouldn't be possible or provable, but that it might not be a meaningful term at all.

But I also suspect friendliness, if it does mean anything, is in general going to be so complex that "only [needing] to find a single program that provably has behaviour X" may be beyond us. There are lots of mathematical conjectures we can't prove, even without invoking the halting problem.

One terrible trap might be the temptation to make simplifications in the model to make the problem provable, but end up proving the wrong thing. Maybe you can prove that a set of friendliness criteria are stable under self-modification, but I don't see any way to prove those starting criteria don't have terrible unintended consequences. Those are contingent on too many real-world circumstances and unknown unknowns. How do you even model that?

Comment by scav on The genie knows, but doesn't care · 2013-09-09T19:15:21.558Z · score: 0 (0 votes) · LW · GW

That first one would be worth doing even if we didn't dare hand the AI the keys to go and make changes. To study a non-human-created ontology would be fascinating and maybe really useful.

Comment by scav on The genie knows, but doesn't care · 2013-09-09T13:47:33.973Z · score: 2 (2 votes) · LW · GW

First list:

1) Poorly defined terms "human intention" and "sufficient".

2) Possibly under any circumstances whatsoever, if it's anything like other non-trivial software, which always has some bugs.

3) Anything from "you may not notice" to "catastrophic failure resulting in deaths". The claim that software failing to work as humans intend will "generally fail in a way that is harmful to its own functioning" is unsupported. E.g. a spreadsheet works fine if the floating-point math is off in the 20th bit of the mantissa. The answers will be wrong, but there is nothing about that that the spreadsheet could be expected to care about.

4) Not necessarily. GAI may continue to try to do what it was programmed to do, and only unintentionally destroy a small city in the process :)

Second list:

1) Wrong. The abilities of sufficiently complex systems cover a huge space of outcomes humans haven't thought about yet, and so do not yet have preferences about. There is no way to know what those preferences would or should be for many, many outcomes.

2) Error as failure to perform the requested action may take precedence over error as failure to anticipate hypothetical objections from some humans to something they hadn't expected. For one thing, it is more clearly defined. We already know human-level intelligences act this way.

3) Asteroids and supervolcanoes are not better than humans at preventing errors. It is perfectly possible for something stupid to be able to kill you. Therefore something with greater cognitive and material resources than you, but still with the capacity to make mistakes can certainly kill you. For example, a government.

4) It is already possible for a very fallible human to make something that is better than humans at detecting certain kinds of errors.

5) No. Unless by dramatic you mean "impossibly perfect, magical and universal".

Comment by scav on Rationality Quotes September 2013 · 2013-09-04T21:33:20.478Z · score: 2 (4 votes) · LW · GW

A little cynical maybe? Politicians don't spend 100% of the time making decisions for purely political reasons. Sometimes they are trying to achieve something, even if broadly speaking the purposes of politics are as you imply.

But of course, most of the people we would prefer to be more rational don't know that's what politics is for, so they aren't hampered by that particular excuse to give up on it. Anyway, they could quite reasonably expect more rational decision making from co-workers, doctors, teachers and others.

I don't think the people making decisions to optimise an outcome are well exemplified by bureaucrats. Try engineers.

Knowing that politics is part of what people do, and that destroying it is impossible, yes I would be trying to improve it, and hope for a more-rational population of participants to reform it. I would treat a claim that the way it is now is eternal and unchangeable as an extraordinary one that's never been true so far. So, good luck with that :)

You aren't seriously suggesting the mean of the sanity distribution hasn't moved a huge amount since the Bible was written? Or even in the last 100 years? I know I'm referring to a "sanity distribution" in an unquantifiable hand-wavy way, but do you doubt that those people who believe in a literalist interpretation of the Bible are now outliers, rather than the huge majority they used to be?

Comment by scav on Rationality Quotes September 2013 · 2013-09-04T11:00:18.462Z · score: 3 (3 votes) · LW · GW

Yet.

And you don't even need a majority of rationalists by headcount. You just need to find and hack the vulnerable parts of your culture and politics where you have a chance of raising people's expectations for rational decision making. Actual widespread ability in rationality skills comes later.

Whenever you feel pessimistic about moving the mean of the sanity distribution, try reading the Bible or the Iliad and see how far we've come already.

Comment by scav on Rationality Quotes September 2013 · 2013-09-04T10:46:09.737Z · score: 1 (1 votes) · LW · GW

That is incoherent at best. Is there any context to the quote that might explain why it is here?

Comment by scav on Post ridiculous munchkin ideas! · 2013-08-13T12:27:20.197Z · score: 1 (1 votes) · LW · GW

Some other reason: I just don't know how EY pronounces "Yudkowsky" -- [jʊd'kaʊski] or [ju:d'kɔvski] or otherwise.

But there is a significant overlap between great names for scientists and words that would be worth a lot in Scrabble if proper nouns were allowed.

Comment by scav on Rationality Quotes June 2013 · 2013-06-05T14:07:08.756Z · score: 0 (0 votes) · LW · GW

Of course, when you are trying to get more of "them" to be "us", it's worth pointing out what "they" are doing wrong. It's not like anyone without brain damage is born and destined to be an "unscientific man" for life.

Comment by scav on The Centre for Applied Rationality: a year later from a (somewhat) outside perspective · 2013-05-31T13:02:53.362Z · score: 0 (0 votes) · LW · GW

Yeah, I can see how hierarchical organisations benefit certain goals and activities. I was speaking specifically about the goal of teaching rationality, in case that wasn't clear from context. You don't need a central authority to control what is being taught so much, unless you are teaching irrationality (cf. Scientology, Roman Catholicism or any political organisation).

You could probably run a million rationality courses a year using just a wiki and a smartphone app. (Left as an exercise for the reader)

Comment by scav on The Centre for Applied Rationality: a year later from a (somewhat) outside perspective · 2013-05-29T08:26:27.942Z · score: 1 (1 votes) · LW · GW

True, and I can't see any benefit from hierarchical organisation. There isn't a central authority of rationality any more than there is one for chemistry or calculus.

But CFAR maybe hasn't scaled to its maximum size yet, and as it approaches it, it will probably become clearer what the ideal size is, and there will be more people with experience in training who can split off another group.

Comment by scav on The Centre for Applied Rationality: a year later from a (somewhat) outside perspective · 2013-05-28T14:17:05.433Z · score: 3 (3 votes) · LW · GW

Goodness knows Microsoft could do with some more rationality, even if they have to come by it illicitly ;)

Seriously though: no, don't trust Skype (or Dropbox, or gmail for that matter) to keep your secrets. However, most communications aren't secret, and discussions about rationality per se probably shouldn't be.

I can only imagine that someone spying on rationality discussions with sinister intent is doing it for really irrational reasons, so the more they hear and understand, the more the problem solves itself.

Comment by scav on The Centre for Applied Rationality: a year later from a (somewhat) outside perspective · 2013-05-28T14:04:36.984Z · score: 0 (0 votes) · LW · GW

As usual it depends on the exponent.

Comment by scav on Post ridiculous munchkin ideas! · 2013-05-17T16:20:27.703Z · score: 0 (0 votes) · LW · GW

It reads like a pretty good scientist name. I have no idea how it sounds ;)

Comment by scav on Rationality Quotes May 2013 · 2013-05-06T11:32:54.560Z · score: 1 (3 votes) · LW · GW

Well at least if you pull numbers out of your arse and then make a decision based explicitly on the assumption that they are valid, the decision is open to rational challenge by showing that the numbers are wrong when more evidence comes in. And who knows, the real numbers may be close enough to vindicate the decision.

If you just pull decisions out of your arse without reference to how they relate to evidence (even hypothetically), you are denying any method of improvement other than random trial and error. And when the real numbers become available, you still don't know anything about how good the original decision was.

Comment by scav on Privileging the Question · 2013-04-30T16:34:25.302Z · score: 0 (6 votes) · LW · GW

Postmodernism or dogshit? ;)

Comment by scav on Can somebody explain this to me?: The computability of the laws of physics and hypercomputation · 2013-04-23T11:42:01.422Z · score: 1 (1 votes) · LW · GW

Well, an infinite memory store or an infinite energy source would have infinite mass. So it would either take up the entire universe and have nowhere external to send its results to or, if it had finite size, it would be inside its own Schwarzschild radius, and there would be no way to send a signal out through its event horizon.

So yeah, I'd call infinite storage or power sources (as politely as possible) "unphysical".

And I don't see why you think the halting problem goes away just because you can't put infinite tape in your Turing machine, or because you use finite state automata instead. You still can't set an upper bound on the size of computation needed to determine whether any algorithm in general will terminate, and I kind of thought the point of the halting problem was that it doesn't only apply to actual Turing machines.

Comment by scav on Rationality Quotes April 2013 · 2013-04-08T09:40:16.885Z · score: 2 (2 votes) · LW · GW

If it's a frequently-occurring observation within the group then yes, there seems to be something wrong. Possibly because things are regularly proposed and acted on without considering fairness until someone has to point it out.

If it hardly ever has to be said, but when pointed out, it is often persuasive, you're probably OK.

Comment by scav on Don't Get Offended · 2013-04-07T21:18:07.554Z · score: 0 (0 votes) · LW · GW

Easier, simpler, still not a great idea, for all the reasons I gave above.

Comment by scav on Don't Get Offended · 2013-04-05T15:28:13.075Z · score: 0 (0 votes) · LW · GW

Agreed and voted up. Of course, you don't get a choice about whether to have an emotion, at the base level.

Not sure "offended" is a primary emotion though. It seems to me (by introspection) to be bundled together with a lot of culture-dependent and habitual behaviours, associations and memes, all of which are sub-optimal for any given situation, and could do with being brought under conscious control before being allowed to influence my actions.