Open Thread: December 2011

post by Tripitaka · 2011-12-01T18:59:02.357Z · LW · GW · Legacy · 81 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

If continuing the discussion becomes impractical, that means you win at open threads; a celebratory top-level post on the topic is traditional.

81 comments

Comments sorted by top scores.

comment by Karmakaiser · 2011-12-01T20:49:23.204Z · LW(p) · GW(p)

I'm not sure if it is "worth saying", but a Google search for "Secret Bayesian Man" turned up nothing, so I wrote this:

ET Jaynes was a 1000 year old vampire

Inspired Cog Sci, AI and Eliezer.

The Probabilities,

Is something that we'll see.

Given that we have the same priors.

Secret Bayesian Man,

Secret Bayesian Man,

You update your beliefs,

Based on the evidence. (2x)

A grandmaster of Bayesian Statistics

He'll straighten out your bias and Heuristics

You'll be Less Wrong than most

You'll take a "Rational Approach..."

Given that we have the same priors

Secret Bayesian Man

Secret Bayesian Man

You update your beliefs,

Based on the evidence. (2x)

Aumann states we must come to agreement

If we have common knowledge with no secrets

Our posteriors must be the same

Or one of us is to blame

Given that we have the same priors.

I deeply apologize.

Replies from: MixedNuts, None, lessdazed
comment by MixedNuts · 2011-12-01T20:54:45.430Z · LW(p) · GW(p)

Odds (20:1) are you will live to regret having written something so ridiculous.

Replies from: Karmakaiser, Karmakaiser
comment by Karmakaiser · 2011-12-01T20:56:19.764Z · LW(p) · GW(p)

My 9th Grade Teen Titans Fanfic is still on the internet.

Game on.

comment by Karmakaiser · 2013-09-24T03:08:48.207Z · LW(p) · GW(p)

Still not regretting, what do I win?

comment by [deleted] · 2011-12-02T03:59:28.704Z · LW(p) · GW(p)

You'll take a "Rational Approach..."

That was a nice touch.

comment by daenerys · 2011-12-02T02:09:01.673Z · LW(p) · GW(p)

I love the idea of the open thread. There are so many things I would like to discuss, but that I don't feel confident enough to actually make discussion posts on. Here's one:

On Accepting Compliments

Something I learned, and taught to all my students, is that when you are performing certain things (fire, hoops, bellydancing, whatever), people are going to be impressed, and are going to compliment you. Even though YOU know that you are nowhere near as good as Person X, or YOU know that you didn't have a good show, you ALWAYS accept their compliment. Doing otherwise is actually an insult to the person who just made an effort to express their appreciation to you. Anyway, you see new performers NOT following this advice all the time. And I know why. It's HARD to accept compliments, especially when you don't feel deserving of them. But you have to learn to do it anyway, because it's the right thing to do.

Same idea, said better by somebody else

This is one of those things that's probably a pet peeve of mine because I used to do it myself, but I figured I would share what I was told during my performance days. I've seen this phenomenon a bunch: a performer or presenter gets done, an audience member comes up and says something along the lines of "Great job," and the complimented responds with something like:

"Oh I totally screwed up"

"No, I didn't really do anything"

"No, I thought it went awfully"

Invariably there are two things that drive this:

  1. The presenter/performer is so caught up in their own self-examination that they are being hypercritical and sharing it with the complimenter.

  2. The presenter/performer is concerned about the appearance of humility.

Both ignore a greater truth in the interaction: Someone has said something nice to you, and you are immediately telling them they are wrong! Even if they don't directly perceive this, it can leave them with a bad taste in their mouth. So what do you do? Say "Thank you," that's it. Leave the self-examination stuff where it belongs, in your head. If you are concerned with your ego, accept and expand the compliment: "Thank you; I have to say the audience was really great, you guys asked really great questions."

It's a silly little thing, but it can have a big impact on how you are perceived.

I think this is applicable to all areas of life not just performing. In fact, doing some googling, I found a Life Hack on the subject. Some excerpts:

A compliment is, after all, a kind of gift, and turning down a gift insults the person giving it, suggesting that you don’t value them as highly as they value (soon to be “valued”) you. Alas, diminishing the impact of compliments is a pretty strong reflex for many of us. How can we undo what years of habitual practice has made almost unconscious?

Stop [...] making them work for it: Cut the long stream of “no, it was nothings” and “I just did what I had to dos” and let people give you the compliment. Putting it off until they’ve given it three or four times, each time more insistently, is selfish.

This link actually has sample dialogue, if that helps, but it is bellydance-centric.

Replies from: endoself
comment by endoself · 2011-12-02T16:52:22.216Z · LW(p) · GW(p)

Thank you; I've updated significantly based on this.

comment by JoshuaZ · 2011-12-02T00:12:17.954Z · LW(p) · GW(p)

A commonly claimed Great Filter issue is that Earth-like planets need a large moon to stabilize their axial tilt. However, recent research seems to indicate that this view is mistaken. In general, this seems to be part of a general pattern where locations conducive to life seem more and more common (for another recent example see here), although I may have some confirmation bias here. Do these results force us to conclude that a substantial part of the Great Filter is in front of us?

Replies from: rwallace
comment by rwallace · 2011-12-02T13:58:30.858Z · LW(p) · GW(p)

No. The mainstream expectation has pretty much always been that locations conducive to life would be reasonably common; the results of the last couple of decades don't overturn the expectation, they reinforce it with hard data. The controversy has always been on the biological side: whether going from the proverbial warm little pond to a technological civilization is probable (in which case much of the Great Filter must be in front of us) or improbable (in which case we can't say anything about what's in front of us one way or the other). For what it's worth, I think the evidence is decisively in favor of the latter view.

comment by [deleted] · 2011-12-01T20:22:45.889Z · LW(p) · GW(p)

Is it worth posting on LW a series of videos from the AI class that make up a gentle introduction to the basics of game theory, for people who aren't in the class and aren't very good at math?

This is one of the videos from the series. There are also a few easy practical exercises and their solutions.

comment by David_Gerard · 2011-12-04T16:10:22.206Z · LW(p) · GW(p)

John Cheese from Cracked.com pulls out another few loops of bloodsoaked intestine and slaps them on the page as a ridiculously popular Internet humour piece: 9 YouTube Videos That Prove Anyone Can Get Sober. I hate watching video, and I sat down and watched the lot. Horrifying and compelling. I've been spending this afternoon reading the original thread. It's really bludgeoning home to me just how much we're robots made of meat and what a hard time the mind has trying to steer the elephant. Fighting akrasia is one thing - how do you fight an addiction with the power of your mind?

Replies from: TheOtherDave
comment by TheOtherDave · 2011-12-04T17:14:44.548Z · LW(p) · GW(p)

You set up an environment for yourself where the patterns of that addiction are less likely to drive you to do things you don't endorse.

Replies from: David_Gerard
comment by David_Gerard · 2011-12-05T09:06:53.224Z · LW(p) · GW(p)

That's obvious when the cravings aren't eating your brain, but that thread is all about how hard that can be in practice.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-12-05T14:57:29.702Z · LW(p) · GW(p)

It's not all that obvious, even during the lucid periods. At least, I've never found it so... it's very easy to fall into the trap of "I don't need to fix the roof; it's sunny out" and fail to take advantage of the lucid periods to set up systems that I can rely on in the crazy periods.

But the only way I've ever found that works in the long run is to devote effort during lucid periods to setting up the environment, and then hope that environment gets me through the crazy periods.

Natch, that depends on having lucid periods, and on something in the system being able to tell the difference (if not my own brain, then something else I will either trust during the crazy periods, or that can enforce compliance without my proximal cooperation).

If I can't trust my own brain, and I also can't trust anything else more than my own brain (in the short term), then I'm fucked.

comment by gwern · 2011-12-04T17:56:59.118Z · LW(p) · GW(p)

I was musing on the old joke about anti-Occamian priors or anti-induction: 'why are they sure it's a good idea? Well, it's never worked before.' Obviously this is a bad idea for our kind of universe, but what kind of universe does it work in?

Well, in what sort of universe would every failure of X to appear in a given time interval make X that much more likely? It sounds vaguely like the hope function, but actually more like an urn of balls where you sample without replacement: every ball you pull (and discard) without finding X makes you a little more confident that the next one will be X. Well, what kind of universe sees its possibilities shrinking every time?
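As a toy illustration of that urn intuition (my own sketch, not part of the original comment): with N balls, exactly one of which is X, every failed draw without replacement mechanically raises the probability that the next draw is X.

```python
# Anti-inductive updating in an urn sampled without replacement:
# N balls, exactly one marked X. Each failure to find X makes the
# next draw *more* likely to be X, not less.
N = 10
for failures in range(N):
    p_next = 1 / (N - failures)
    print(f"after {failures} failures, P(next draw is X) = {p_next:.3f}")
```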

For some reason, entropy came to mind. Our universe moves from low to high entropy, and we use induction. If a universe moved in the opposite direction, from high to low entropy, would its minds use anti-induction? (Minds seem like they'd be possible, if odd; our minds require local lowering of entropy to operate in an environment of increasing entropy, so why not anti-minds which require local raising of entropy to operate in an environment of decreasing entropy - somewhat analogous to computers expending energy to erase bits.)

I have no idea if this makes any sense. (To go back to the urn model, I was thinking of it as sort of a cellular-automaton mental model where every turn the plane shrinks: if you are predicting a glider as opposed to a huge Turing machine, then as every turn passes and the plane shrinks, the less you would expect to see the Turing machine survive and the more you would expect to see a glider show up. Or if we were messing with geometry, it'd be as if we were given a heap of polygons with thousands of sides where every second a side was removed, and predicted a triangle - as the seconds pass, we don't see any triangles, but Real Soon Now... Or to put it another way, as entropy decreases, necessarily fewer and fewer arrangements show up; particular patterns get jettisoned as entropy shrinks, and so having observed a particular pattern, it's unlikely to sneak back in: if the whole universe freezes into one giant simple pattern, the anti-inductionist mind would be quite right to have expected all but one observation not to repeat. Unlike our universe, where there seem to be ever more arrangements as things settle into thermal noise: if an arrangement shows up, we'll be seeing a lot of it around. Hence, we start with simple low-entropy predictions and decrease confidence from there.)

Boxo suggested that anti-induction might be formalizable as the opposite of Solomonoff induction, but I couldn't see how that'd work: if it simply picks the opposite of a maximizing AIXI and minimizes its score, then it's the same thing but with an inverse utility function.

The other thing was putting a different probability distribution over programs, one that increases with length. But while uniform distributions over all the infinite integers are forbidden, and non-uniform decreasing distributions (like the speed prior or random exponentials) are allowed, it's not at all obvious what a non-uniform increasing distribution would look like - apparently it doesn't work to say 'infinite-length programs have p=0.5, then infinity-1 have p=0.25, then infinity-2 have p=0.125... then programs of length 1/0 have p=0'.

Replies from: Plasmon, Bongo
comment by Plasmon · 2011-12-08T07:18:32.142Z · LW(p) · GW(p)

I was musing on the old joke about anti-Occamian priors or anti-induction: 'why are they sure it's a good idea? Well, it's never worked before.' Obviously this is a bad idea for our kind of universe, but what kind of universe does it work in?

How can they possibly know/think that 'it' has never worked before? That assumes reliability of memory/data storage devices.

I don't see how these anti-Occamians can ever conclude that data storage is reliable.

If they believe data storage is reliable, they can infer whether or not data storage worked in the past. If it worked, then by anti-induction data storage is probably not reliable now. If it didn't work, then it didn't record correct information about the past. In neither case is the data storage reliable.

comment by Bongo · 2011-12-04T18:06:24.475Z · LW(p) · GW(p)

(An increasing probability distribution over the natural numbers is impossible. The sequence (P(1), P(2), ...) would have to (1) be increasing, (2) contain a nonzero element, and (3) sum to 1, and no sequence can do all three.)
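For concreteness, the missing step written out (my rendering, not Bongo's): once some term is positive, monotonicity forces every later term to be at least as large, so the series diverges.

```latex
\text{If } P(N) = \varepsilon > 0 \text{ and } P \text{ is increasing, then}\quad
\sum_{n=1}^{\infty} P(n) \;\ge\; \sum_{n=N}^{\infty} P(n)
\;\ge\; \sum_{n=N}^{\infty} \varepsilon \;=\; \infty \;\neq\; 1.
```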

comment by daenerys · 2011-12-22T11:00:21.256Z · LW(p) · GW(p)

Funny comic from bouletcorp: Physics of a Pixelated World

comment by TimS · 2011-12-02T02:50:59.572Z · LW(p) · GW(p)

Cf. Eric Flint: I've always found the idea of bringing technology back in time very interesting. Specifically, I've always wondered what technology I could independently invent and how early I could invent it. Of course, the thought experiment requires me to handwave away lots of concerns (like speaking the local language, not being killed as a heretic/outsider, and finding a patron).

Now, I'm not a scientist, but I think I could invent a steam engine if there was decent metallurgy already. Steam engine: Fill large enclosed container with water, heat water to boiling, steam goes through a tube to turn a crank, voila - useful work. So, 1000s in Europe, maybe?

I'd like to think that I could inspire someone like Descartes to invent calculus. But there's no way I could invent it on my own.

Anyone else ever had similar thoughts?

Replies from: gwern, Vladimir_M, torekp
comment by gwern · 2011-12-02T02:56:50.555Z · LW(p) · GW(p)

Of course; it's a common thought-experiment among geeks, ever since A Connecticut Yankee. There's even a shirt stuffed with technical info in case one ever goes back in time.

(FWIW, I think you'd do better with conceptual stuff like Descartes and gravity, which you can explain to the local savant and work on hammering out the details together; metallurgy is hard, and it's not like there weren't steam engines before the industrial revolution - they were just uselessly weak and expensive. Low cost of labor means machines are too expensive to be worth bothering with.)

Replies from: TimS
comment by TimS · 2011-12-02T03:10:47.141Z · LW(p) · GW(p)

You're probably right, but other than proving the Earth is round (which is not likely to need proving unless I go far back), there's not a lot of useful things I can demonstrate to the savant. And telling the savant about germ theory or suchlike without being able to demonstrate it seems pretty useless to me.

Replies from: gwern
comment by gwern · 2011-12-02T04:45:02.333Z · LW(p) · GW(p)

I've always wondered how much 'implicit' knowledge we can take for granted. For example, the basic idea of randomized trials, while it has early forebears in Arabic stuff in the 1000s or whenever, is easy to explain and justify for any LWer while still being highly novel. As well, germ theory is tied to microscopic life and non-spontaneous generation (would one remember Pasteur's sealed jar experiments or be able to reinvent them?) I was just reading a book on colonial Americans in London when I came across a mention of the discoverer of carbon dioxide; I reflected that I would have been easily able to demonstrate it just with a sealed jar and a flame and a mouse and a plant, but am I atypical in that regard? Would other people, even given years or decades pondering, be able to answer the question 'how the deuce do I show these past people that air isn't "air" but oxygen and carbon dioxide?'

I guess you could answer this question just by surveying people on how they would demonstrate such classic simple results.

comment by Vladimir_M · 2011-12-03T00:12:51.484Z · LW(p) · GW(p)

Now, I'm not a scientist, but I think I could invent a steam engine if there was decent metallurgy already.

No way, unless perhaps you're an amateur craftsman with a dazzling variety of practical skills and an extraordinary talent for improvisation. And even if you managed to cobble together something that works, you likely wouldn't be able to put it to any profitable use in the given economic circumstances.

comment by torekp · 2011-12-02T03:04:15.841Z · LW(p) · GW(p)

the thought experiment requires me to handwave away lots of concerns

When you carefully consider the implications of those concerns, you'll find that the "I" quickly loses its content when projected back in time to an earlier era. In short, it's a question calling for a pseudo-proposition in answer.

comment by radical_negative_one · 2011-12-03T21:03:49.412Z · LW(p) · GW(p)

I'm usually a lurker here. I generally spend a little too much time on this site. I'm making a personal resolution to leave the site alone for the rest of the day whenever I read an article here and find that I have nothing to say about it. Under this policy, I expect that I will be spending less time here, and also that I will be contributing more.

comment by beoShaffer · 2011-12-08T06:11:24.269Z · LW(p) · GW(p)

Is there any easy way to report spam comments?

Replies from: Alicorn
comment by Alicorn · 2011-12-08T19:56:48.878Z · LW(p) · GW(p)

There used to be a "Report" button. I don't see it anymore, but it's possible that it just doesn't appear for me because I'm a mod. If you find spam comments, you're welcome to PM me a link so I can ban it; I can't speak for other mods.

Replies from: beoShaffer
comment by beoShaffer · 2011-12-08T23:36:51.397Z · LW(p) · GW(p)

There isn't normally a report button. There is one when I'm viewing replies to my own comments via my inbox, but that seems to be the only time it's available.

comment by jsalvatier · 2011-12-06T19:30:07.597Z · LW(p) · GW(p)

I'm curious what happened to SarahC. I enjoyed her presence, but I hadn't seen her recently and I notice she's deleted her account (http://lesswrong.com/lw/3gu/the_best_textbooks_on_every_subject/3f00). Anyone know what happened?

Replies from: Alicorn
comment by Alicorn · 2011-12-06T20:27:01.143Z · LW(p) · GW(p)

SarahC is alive and well; she deleted her account for personal reasons.

Replies from: jsalvatier
comment by jsalvatier · 2011-12-06T21:44:01.464Z · LW(p) · GW(p)

I am glad she's well. Was this an "evaporative cooling" sort of event (e.g. personal conflicts with other LWers) or unrelated personal reasons? (If you/she don't/doesn't mind my asking)

Replies from: Alicorn, Alicorn
comment by Alicorn · 2011-12-06T22:12:23.426Z · LW(p) · GW(p)

Sarah says:

I just left because I was spending too much time on the site and I said some stuff which might not have looked professional to potential employers.

comment by Alicorn · 2011-12-06T21:51:54.694Z · LW(p) · GW(p)

I'm not sure of the extent to which Sarah wants her reasons publicized; I will ask her next time we talk and point her to this thread.

comment by Normal_Anomaly · 2011-12-03T01:34:58.432Z · LW(p) · GW(p)

I've been wondering for several weeks now how to pronounce 3^^^3.

Replies from: Solvent, Curiouskid, TheOtherDave
comment by Solvent · 2011-12-11T02:02:09.477Z · LW(p) · GW(p)

Three to the to the to the three is how I do it.

comment by Curiouskid · 2011-12-15T04:17:50.719Z · LW(p) · GW(p)

I generally just look at it as a picture, so as to not waste time sub-vocalizing.

comment by TheOtherDave · 2011-12-03T01:41:49.783Z · LW(p) · GW(p)

I generally say "triple-hat" to myself when I read it.

Replies from: pengvado
comment by pengvado · 2011-12-03T05:20:47.323Z · LW(p) · GW(p)

3↑↑↑3. The use of ^ instead is a limitation of ASCII.
Knuth called it the "triple-arrow" operator. I don't see any conjugations necessary, so "three triple-arrow three".
"Three pentated by three" also works.

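For anyone unfamiliar with the notation, a minimal sketch of Knuth's up-arrow recursion (the function and its name are my own illustration):

```python
def arrow(a: int, n: int, b: int) -> int:
    """Knuth's up-arrow a ^...^ b with n arrows: one arrow is
    exponentiation; each extra arrow iterates the previous operator."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return arrow(a, n - 1, arrow(a, n, b - 1))

print(arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
# 3^^^3 = 3^^(3^^3) is a power tower of 7625597484987 threes --
# far too large to actually compute, which is rather the point.
```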
comment by orthonormal · 2012-01-07T19:59:34.594Z · LW(p) · GW(p)

It might confuse newbies that regular meetups don't appear on the meetup map (only irregular ones), and this can be easily fixed. Is there any reason to leave it as is?

comment by David_Gerard · 2011-12-24T23:12:37.443Z · LW(p) · GW(p)

I'm having a wall-banging philosophical disputation with someone over the word "scientism", but mostly over qualia and p-zombies, here. I ask your help on trying to work out if any commonality is resolvable in between the spittle-flecked screaming at each other. (I would suggest diving in only if you have familiarity with the participants - assume we're all smart, particularly the guy who's wrong.)

Replies from: TheOtherDave
comment by TheOtherDave · 2011-12-25T00:04:23.161Z · LW(p) · GW(p)

I feel like I ought to admire your tenacity on that thread.

I don't, actually, but I feel like I ought to.

Anyway... no, I haven't a clue how you might go about resolving whatever commonality might exist there. Then again, I've never been able to successfully make contact across the qualia gulf, even with people whose use of the English language, and willingness to stick to a single topic of discussion, is better aligned with mine than yours and Lev's seem to be.

Replies from: David_Gerard
comment by David_Gerard · 2011-12-25T00:42:43.791Z · LW(p) · GW(p)

Heh. Am I making no sense either? I'm sticking with it because I've known Lev for decades and I'm a huge fan of his and I have little to no idea what the fuck he's on about with this crazy moon language. The p-zombie argument proves - proves - magic, rather than demonstrating that philosophers are easily convinced of rubbish? What?

(I have little doubt he's thinking the same of me.)

The thread is still going, by the way. Twenty days later. None of us know when to give up.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-12-25T02:19:14.419Z · LW(p) · GW(p)

Well, you're making sense to me, but that's perhaps due to the fact that I basically would be saying the same things if I somehow found myself in that conversation. (Which could easily happen... hell, I've gotten into that conversation on LW.)

I think you would all benefit from drop-kicking about half the terms you're using, and unpacking them instead... it seems moderately clear that you don't agree on what a p-zombie is, for example. But I would be surprised if he agreed to that.

That said, I don't think he'd agree with your summary of his position.

Does he always have such eccentric syntax?

Replies from: David_Gerard
comment by David_Gerard · 2011-12-25T10:06:33.278Z · LW(p) · GW(p)

No, he doesn't. He's spent most of his life in politics and knows how to speak with fine-honed bluntness. That's why people thought his first comment was parody.

I've identified the feeling: that sinking realisation that someone you respected is a dualist.

comment by TimS · 2011-12-15T16:34:26.526Z · LW(p) · GW(p)

There is an unfortunate equivocation in the word "theory" (compare "Theory of Evolution" to "Just War Theory"). Popper says that a theory can only be called scientific if it is falsifiable. Using that terminology, Freudian theory is pseudoscience, not a scientific theory. But many things that the vernacular calls theories are not falsifiable. (What would it mean to falsify utilitarian theory?)

Does that mean that we can't talk about moral theories? What word should we use instead? Because it seems like talking about moral theories is doing something productive.


For some context, I'm starting this post to separate off this conversation from a distinct conversation I'm having here.

Replies from: David_Gerard, Bugmaster
comment by David_Gerard · 2011-12-21T00:00:29.359Z · LW(p) · GW(p)

And just wait until you get to "critical theory". I fear the word "theory" in English is indeed stretched in a continuous fog from the hardest of physics to the foggiest of spurious postmodernist notions, with little in the way of joins to carve it at. Thus, cross-domain equivocation will be with us for a while yet.

Replies from: TimS
comment by TimS · 2011-12-21T00:46:05.284Z · LW(p) · GW(p)

And? Invoking critical theory doesn't scare me off. I'm as post-modern as you are likely to meet here at LW.

I agree that the word "theory" needs an adjective or it is underspecified. Scientific theories are different from moral theories. Let me repeat: if I can't talk about the "theory" of utilitarianism, what word should I use instead to capture the concept?

Replies from: David_Gerard
comment by David_Gerard · 2011-12-21T09:36:41.249Z · LW(p) · GW(p)

I think we're furiously agreeing here. I have no problem with you using the word "theory" there, but I do think some theories have more explanatory power (which I think of as "better") than others, wherever we are on the spectrum I posit from physics to fog. My interests are largely at the foggy end and how to come up with theories with explanatory power at the foggy end is something I'm presently wrestling with.

comment by Bugmaster · 2011-12-15T22:49:35.984Z · LW(p) · GW(p)

I personally would prefer to use the word "theory" to mean "a scientific theory that is, by definition, falsifiable". But it's not a strong preference; I merely think that it helps reduce confusion. As long as we make sure to define what we mean by the word ahead of time, we can use the word "theory" in the vernacular sense as well.

Regarding moral theories, I have to admit that my understanding of them is somewhat shaky. Still, if moral theories are completely unfalsifiable, then how do we compare them to discover which is better? And if we can't determine which moral theories are better than others, what's the point in talking about them at all?

I said earlier that Utilitarianism is more like an algorithm than like a scientific theory; the reason I said that is because Utilitarianism doesn't tell you how to obtain the utility function. However, we can still probably say that, given a utility function, Utilitarianism is better than something like Divine Command -- or can we? If we can, then we are implicitly looking at the results of the application of both of these theories throughout history, and evaluating them according to some criteria, which looks a lot like falsifiability. If we cannot, then what are those moral theories for?

Replies from: thomblake, David_Gerard, TimS
comment by thomblake · 2011-12-15T23:06:43.222Z · LW(p) · GW(p)

I said earlier that Utilitarianism is more like an algorithm than like a scientific theory

It should be noted that Utilitarianism(Ethical Theory) states that the outputs of Utilitarianism(algorithm) constitute morality.

Replies from: Bugmaster
comment by Bugmaster · 2011-12-15T23:21:02.817Z · LW(p) · GW(p)

It should be noted that Utilitarianism(Ethical Theory) states that the outputs of Utilitarianism(algorithm) constitute morality.

Oh... so does Utilitarianism(Ethical Theory) actually prescribe a specific utility function? If so, how is the function derived? As I said, my understanding of moral theories is a bit shaky, sorry about that.

Replies from: thomblake
comment by thomblake · 2011-12-16T00:32:35.157Z · LW(p) · GW(p)

When Utilitarianism was proposed, Mill/Bentham identified it as basically "pleasure good / pain bad". Since then, Utilitarianism has pretty much become a family of theories, largely differentiated by their conceptions of the good.

One common factor of ethical theories called "Utilitarianism" is that they tend to be agent-neutral; thus, one would not talk about "an agent's utility function", but "overall net utility" (a dubious concept).

"Consequentialism" only slightly more generally refers to a family of ethical theories that consider the consequences of actions to be the only consideration for morality.

Replies from: Bugmaster
comment by Bugmaster · 2011-12-16T01:12:08.749Z · LW(p) · GW(p)

Thanks, that clears things up. But, as you said, "overall net utility" is kind of a dubious concept. I suspect that no one has yet figured out a way to compute this utility function in a semi-objective way... is that right?

comment by David_Gerard · 2011-12-24T23:15:50.231Z · LW(p) · GW(p)

I personally would prefer to use the word "theory" to mean "a scientific theory that is, by definition, falsifiable"

So would I. But it's just an ambiguous word in English that means different things in different places. As I take it into the extremely foggy areas that also use the word "theory", I'm going for something like "has explanatory power".

comment by TimS · 2011-12-16T02:37:10.658Z · LW(p) · GW(p)

Just a quick definition here: When people say moral theory, they mean the procedure(s) they use to generate their terminal values (i.e. the ends you are trying to achieve). Instrumental values (i.e. how to achieve your goals) are much less troublesome.

if moral theories are completely unfalsifiable, then how do we compare them to discover which is better?

I'm not sure that the consensus here is that all moral theories are unfalsifiable (although I believe that is a fact about moral theories). If theories are unfalsifiable, then comparison from some "objective" position is conceptually problematic (which I expect is why politics is the mind-killer).

And if we can't determine which moral theories are better than others, what's the point in talking about them at all?

We still make decisions, and I think we are right to say that the decisions are "moral decisions" because they have moral consequences. Thus, one reason to discuss moral theories is to determine [as a descriptive matter] what morality one follows, in some attempt to be internally consistent.

Replies from: Bugmaster
comment by Bugmaster · 2011-12-16T03:24:05.595Z · LW(p) · GW(p)

When people say moral theory, they mean the procedure(s) they use to generate their terminal values (i.e. the ends you are trying to achieve).

Understood, thanks.

I'm not sure that the consensus here is that all moral theories are unfalsifiable (although I believe that is a fact about moral theories).

Let's go with what you believe, then, and if the consensus wants to disagree, they can chime in :-)

Thus, one reason to discuss moral theories is to determine [as a descriptive matter] what morality one follows, in some attempt to be internally consistent

Are you saying that moral theories are descriptive, and not prescriptive? In this case, discussing moral theories is similar to discussing human psychology, or cognitive science, or possibly sociology. That makes sense to me, though I think that most people would disagree. But, again, if this is what you believe as well, then we are in agreement, and the consensus can chime in if it feels like arguing.

comment by Scott Alexander (Yvain) · 2011-12-03T03:26:56.319Z · LW(p) · GW(p)

New experiment supports the evopsych idea that some out-group prejudice is related to disease risk (though I wish it had included a generalized non-disease stress condition as a control, to see whether it's just stress that increases prejudice).

comment by tgb · 2011-12-08T19:56:54.019Z · LW(p) · GW(p)

I'm interested in conducting a simple, informal study requiring a moderate number of responses to be meaningful. Specifically, I want to look at some aspects of the "wisdom of the crowd". I'm new here, so I want to ask first: is LessWrong Discussion a good place to put things like this that ask people to take a quick survey in the name of satisfying my curiosity? Are there other websites where this is appropriate?

Replies from: radical_negative_one
comment by radical_negative_one · 2011-12-14T21:33:23.390Z · LW(p) · GW(p)

I'm sorry that this comment was missed when you wrote it. But to answer your question, I say go for it! Submit it as a Discussion article and see how it goes.

Replies from: tgb
comment by tgb · 2011-12-19T21:02:00.477Z · LW(p) · GW(p)

Thanks for the reply - I'll be posting it soon. Suggestions for other places whose users like to participate in things like this would also be appreciated.

comment by BrianNachbar · 2012-01-28T08:53:05.786Z · LW(p) · GW(p)

If we believe that our conscious experience is a computation, and we hold a universal prior which basically says that the multiverse consists of all Turing machines, being run in such a fashion that the more complex ones are less likely, it seems to be very suggestive for anthropics.

I visualize an array of all Turing machines, either with the simpler ones duplicated (which requires an infinite number of every finite-length machine, since there are twice as many of each machine of length n as of each machine of length n+1), or the more complex ones getting "thinner," with their outputs stretching out into the future. Next, suppose we know we've observed a particular sequence of 1's and 0's. This is where I break with Solomonoff Induction—I don't assume we've observed a prefix. Instead, assign each occurrence of the sequence anywhere in the output of any machine a probability inversely proportional to the complexity of the machine it's on (or assign them all equal probabilities, if you're imagining duplicated machines), normalized so that you get total probability one, of course. Then assign a sequence a probability of coming next equal to the sum of the probabilities of all machines where that sequence comes next.

Of course, any sequence is going to occur an infinite number of times, so each occurrence has zero probability. So what you actually have to do is truncate all the computations at some time step T, do the procedure from the previous paragraph, and hope the limit as T goes to infinity exists. It would be wonderful if you could prove it did for all sequences.
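A minimal sketch of that truncated-counting procedure, under heavy simplifying assumptions of my own: the "machines" below are a small hand-picked list of output strings with assigned program lengths (standing in for a genuine enumeration of Turing machines truncated at step T), and occurrences are weighted by 2^-length as one concrete choice of complexity weighting.

```python
from collections import defaultdict

# Hypothetical stand-ins for machine outputs truncated at T steps,
# paired with illustrative program lengths in bits.
machines = [
    ("010101010101", 3),
    ("011011011011", 5),
    ("010011010110", 9),
]

def next_symbol_probs(observed: str) -> dict:
    """Weight every occurrence of `observed` (anywhere in an output,
    not just as a prefix) by 2^-length of its machine, tally the
    symbol following each occurrence, and normalize."""
    totals = defaultdict(float)
    for output, length in machines:
        weight = 2.0 ** -length
        start = output.find(observed)
        while start != -1:
            nxt = start + len(observed)
            if nxt < len(output):      # occurrence has a successor symbol
                totals[output[nxt]] += weight
            start = output.find(observed, start + 1)
    z = sum(totals.values())
    return {sym: w / z for sym, w in totals.items()} if z else {}

print(next_symbol_probs("0101"))  # heavily favors '0' in this toy setup
```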

To convert this into an anthropic principle, I assume that a given conscious experience corresponds to some output sequence, or at least that things behave as if this were the case. Then you can treat the fact that you exist as an observation of a certain sequence (or of one of a certain class of sequences).

So what sort of anthropic principle does this lead to? Well, if we're talking about different possible physical universes, then after you weight them for simplicity, you weight them for how dense their output is in conscious observer-moments (that is, sequences corresponding to conscious experiences). (This is assuming you don't have more specific information. If you have red hair, and you know how many conscious experiences of having red hair a universe produces, you can weight by the density of that). So in the Presumptuous Philosopher case, where we have two physical theories differing in the size of the universe by an enormous factor, and agreeing on everything else, including the population density of conscious observers, anthropics tells us nothing. (I'm assuming that all stuff requires an equal amount of computation, or at least that computational intensity and consciousness are not too correlated. There may be room for refinement here). On the other hand, if we're deciding between two universes of equal simplicity and equal size, but different numbers of conscious observer-moments, we should weight them according to the number of conscious observer-moments as we would in SIA. In cases where someone is flipping coins, creating people, and putting them in rooms, if we regard there as being two equally simple universes, one where the coin lands heads and one where the coin lands tails, then this principle looks like SIA.

The main downside I can see to this framework is that it seems to predict that, if there are any repeating universes we could be in, we should be almost certain we're in one, and not in one that will experience heat death. This is a downside because, last I heard, our universe looks like it's headed for heat death. Maybe earlier computation steps should be weighted more heavily? This could also guarantee the existence of the limit I discussed above, if it's not already guaranteed.

comment by Solvent · 2011-12-30T10:34:11.347Z · LW(p) · GW(p)

When _ozymandias posted zir introduction post a few days ago, I went off and binged on blogs from the trans/men's rights/feminist spectrum. I found them absolutely fascinating. I've always had lots of sympathy for transgender people in particular, and care a lot about all those issues. I don't know what I think of making up new pronouns, and I get a bit put off by trying to remember the non-offensive terms for everything. For example, I'm sure that LGBT as a term offends people, and I agree that lumping the T with the LGB is a bit dubious, but I don't know any other equivalent term that everyone will understand. I'm going to keep using it.

However, I don't currently know any LGBT people who I can talk to about these things. In particular, the whole LGBT and feminist and so on community seems to be prone to taking unnecessary offense, and believing in subjectivism and silly things like that.

So I'd really like to talk with some LWers who have experience with these things. I've got questions that I think would be better answered by an IM conversation than by just reading blogs.

If anyone wants to have an IM conversation about this, please message me. I'd be very grateful.

comment by Dorikka · 2011-12-21T00:32:10.398Z · LW(p) · GW(p)

I'm writing a discussion post titled "Another Way of Thinking About Mind-States". I'm not sure at all whether my draft explains what I'm talking about clearly enough for anyone reading it to actually understand it, so I'd appreciate a beta volunteer to take a look at it and give me some feedback. If you'd like to beta, just reply here or send me a PM. Thanks!

comment by Prismattic · 2011-12-17T01:55:51.390Z · LW(p) · GW(p)

For reasons I won't spoil ahead of time, I'm reasonably certain most LWers will really enjoy this song.

(I also expect people will enjoy this one, this one, this one, and probably most others by this songwriter, but that's less a LW-specific thing and more a general awesomeness thing.)

Replies from: wedrifid
comment by wedrifid · 2011-12-17T03:39:51.010Z · LW(p) · GW(p)

For reasons I won't spoil ahead of time, I'm reasonably certain most LWers will really enjoy this song.

Really? It seems to be a guy humiliating himself due to lack of social skills then burying his head in denial by fantasizing about science. It's degrading to those who would like to attribute their love of science to more than an inability to succeed in the conventional social world.

Ok, it gets points for the ending where he is an evil cyborg overlord rather than a hero. ;)

Replies from: Prismattic
comment by Prismattic · 2011-12-20T04:21:57.301Z · LW(p) · GW(p)

Apologies for taking so long to respond to this. I was trying to figure out what inferential distance was separating us that you would have such a divergent reaction. Here is my best attempt at explanation.

My take on the song is that its tone is ironic sincerity. That is, I take it to be self-deprecating but basically sympathetic. It's sort of like an "It gets better" video for nerds (though it predates the actual It Gets Better campaign by several years), except that the specific scenario isn't meant to be taken literally. I also don't see anything in the lyrics that suggests that social ineptitude is the only reason the protagonist likes science.

comment by Curiouskid · 2011-12-15T04:22:20.900Z · LW(p) · GW(p)

Do these open threads serve as places to make comments or ask small questions? Personally, I was reading Luke's new Q&A and I was thinking that I would like to have a thread full of people's questions. If the purpose of this thread is for comments and not questions, should we make a new recurring monthly post?

comment by daenerys · 2011-12-15T00:01:34.955Z · LW(p) · GW(p)

Random Thought Driving Home:

I hate when all my programmed radio stations are on commercial. Why does this always happen?

Say a radio station spends 25% of its airtime playing commercials. This sounds pretty conservative. It would mean that for every 45 minutes of music, it plays 15 minutes of commercials.

I have 6 pre-programmed stations. That means that if commercial breaks were independent across stations, they would ALL have commercials on only 0.25^6 ≈ 0.00024, or about 0.024%, of the time.

Say I spend an hour a day driving. Then only about 0.015 minutes, or 0.88 seconds, of that time should have all pre-programmed stations on commercials.
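A quick sanity check of the arithmetic (a throwaway sketch assuming each station plays commercials 25% of the time, independently of the others):

```python
p_commercial = 0.25
stations = 6
p_all = p_commercial ** stations   # ~0.000244, i.e. about 0.024%
per_hour = p_all * 3600            # expected all-commercial seconds per hour
print(f"P(all {stations} stations on commercial) = {p_all:.6f}")
print(f"expected all-commercial time = {per_hour:.2f} seconds per hour")
```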

I am pretty sure that more than a second per hour has all-commercials, leaving me with two possible interpretations:

1) I notice the all-commercial times a lot, and so overestimate their prominence, OR

2) Radio stations are evil, and they all play commercials at the same time (say, on every hour and half-hour) on purpose, so that the listener HAS to listen to commercials.

....Evil radio stations.

Replies from: Nornagest, Tripitaka
comment by Nornagest · 2011-12-15T00:15:52.501Z · LW(p) · GW(p)

Another option there might be that certain commercial timings emerge more or less naturally from the constraints on the problem. If radio stations schedule time blocks by hours or some 1/n division of hours, for example, there's going to be a break between segments at every N:00 -- a natural place to put commercials.

This would be a Schelling point, except that I doubt most radio stations think of commercial timing relative to other stations in terms of game-theoretic advantage.

comment by Tripitaka · 2011-12-15T00:53:34.042Z · LW(p) · GW(p)

Isn't it a matter of the news being played at clear, easily recognizable times like :00/:30? Whoever wants to hear it in full WILL have to listen to at least a few seconds of commercials; it seems like a common pattern to just switch the radio on some minutes beforehand so as not to spend extra attention on the exact switch-on time. For most people, some commercials before the news are an absolutely acceptable tradeoff.

comment by daenerys · 2011-12-02T20:49:54.144Z · LW(p) · GW(p)

I am writing a discussion post called "On "Friendly" Immortality" and am having some difficulties getting my thoughts fully verbalized. I would appreciate a volunteer who's willing to read it before I post and provide feedback. If you are willing to be my "beta", either reply to this comment or send me a PM. Thank you!

Replies from: None, Barry_Cotter, Normal_Anomaly, RomanDavis, Solvent
comment by [deleted] · 2011-12-02T23:32:01.057Z · LW(p) · GW(p)

The title sounds interesting, I'd be willing to provide feedback.

Replies from: daenerys
comment by daenerys · 2011-12-03T18:56:35.270Z · LW(p) · GW(p)

Thank you both! Will work on it soon, and send it to you.

comment by Barry_Cotter · 2011-12-03T23:57:12.163Z · LW(p) · GW(p)

Beta me.

comment by Normal_Anomaly · 2011-12-03T00:49:38.486Z · LW(p) · GW(p)

I would be willing to beta.

comment by RomanDavis · 2011-12-16T06:13:38.866Z · LW(p) · GW(p)

Beta me.

comment by Solvent · 2011-12-11T02:04:34.968Z · LW(p) · GW(p)

I'll beta.