Open Thread: March 2010, part 3

post by RobinZ · 2010-03-19T03:14:04.793Z · LW · GW · Legacy · 258 comments

The previous open thread has now exceeded 300 comments – new Open Thread posts may be made here.

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.


comment by ata · 2010-03-20T22:17:23.945Z · LW(p) · GW(p)

Today I was listening in on a couple of acquaintances talking about theology. As most theological discussions do, it consisted mainly of cached Deep Wisdom. At one point — can't recall the exact context — one of them said: "…but no mortal man wants to live forever."

I said: "I do!"

He paused a moment and then said: "Hmm. Yeah, so do I."

I think that's the fastest I've ever talked someone out of wise-sounding cached pro-death beliefs.

comment by Rain · 2010-03-19T15:18:37.594Z · LW(p) · GW(p)

What is the appropriate method to tap out when you don't want to be thrown to the rationality mat any more?

What's the best way for me to stop a thread when I no longer wish to participate, as my emotions are turning sour, and I recognize I will begin saying bad things?

Replies from: Morendil, Kevin, CannibalSmith, RobinZ
comment by Morendil · 2010-03-19T17:16:53.506Z · LW(p) · GW(p)

May I suggest "I'm tapping out", perhaps with a link to this very comment? It's a good line (and perhaps one way the dojo metaphor is valuable).

I think in this comment you did fine. Don't sweat it if the comment that signals "I'm stopping here" is downvoted; don't try to avoid that.

In this comment I think you are crossing the "mind reading" line, where you ascribe intent to someone else. Stop before posting those.

Replies from: Rain
comment by Rain · 2010-03-19T17:37:53.486Z · LW(p) · GW(p)

I think you are crossing the "mind reading" line, where you ascribe intent to someone else. Stop before posting those.

I like mind reading. I'm good at it.

Replies from: ciphergoth, orthonormal, Morendil
comment by Paul Crowley (ciphergoth) · 2010-03-20T10:14:45.559Z · LW(p) · GW(p)

Absent statistical evidence drawn from written and dated notes, you should hold it very plausible that your impression you're good at it is due to cognitive bias. Key effects here include hindsight bias, the tendency to remember successes better than failures, the tendency to rewrite your memories after the fact so that you appear to have predicted the outcome, and the tendency to count a prediction as a success - the thousand-and-one-fold effect.

Replies from: Rain
comment by Rain · 2010-03-20T13:22:58.814Z · LW(p) · GW(p)

You're good at listing biases. I'm good at creating mental models of other people from limited information.

Absent statistical information, you should hold it very plausible that I am under the effect of biases, as I'm certainly not giving you enough data to update to the point where you should consider me good at anticipating people's actions and thoughts.

However, obtaining enough written and statistical evidence to allow you to update to the same level of belief that I hold (I would appropriately update as well) is far too difficult, considering the time spans between predictions, their nature of requiring my engagement in the moment, etc.

My weak evidence is that, having subscribed to SL4 several years ago, followed OB and now LW on at least a monthly basis, and read and incorporated much of what is written here into my own practices, I still have this belief, and feel that it is very unlikely to be a wrong belief. Or perhaps you're overestimating my "good" qualifier and we're closer than we think.

At any rate, I apologize for stating a belief that I am unwilling to provide strong evidence to support.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-03-20T17:10:39.134Z · LW(p) · GW(p)

I'd update on you saying "I have good statistical evidence drawn from written, dated notes" even if you didn't show me the evidence.

EDIT: to make this point clearer - I would update more strongly on your assurances if I could think of another likely mechanism than the one I propose by which one could gain confidence in the superiority of one's mind-reading skills.

Replies from: Rain
comment by Rain · 2010-03-20T17:28:12.730Z · LW(p) · GW(p)

It's not that sort of prediction, I don't think. It's more social and inferential, based on past and current events, and rarely works as well for the future (more than a few hours), though it does to some degree.

I don't carry a notebook with me, and oftentimes this is used in a highly social environment, so writing it down would not be appropriate or easy to do. I consider it a form of pattern matching, where I determine the thoughts and feelings of the other person through my knowledge of them and by using real-time interaction, body language, etc.

It's rapid correlation of environmental cues and developed mental models. Examples of its use: "What does it mean that they stopped talking? What does that slight glance to the left mean? What does it mean that they used that particular word? Why didn't they take action X? Why did they take action Y, but Z didn't come of it?"

I think the phrase "mind reading" is a bit much. Note the original context: "ascribing intent." I'm just using tells that I've learned over time to discern what someone else is thinking or feeling, with my own feeling as to how likely it is that I'm correct (internal, subjective bayesometer?). I've learned to trust it over time because it's been so useful and accurate.

Also note that the training period, where I initially develop the mental model, tends to consist of things like asking the other person, "What do you mean?" and then remembering their answer when a similar event comes around again. :-P

ETA: I think my pattern matching and memory skills are also what give me my wicked déjà vu. And it's likely more normal people would call this "social skills," though I seem to lack such innate capability.

comment by orthonormal · 2010-03-20T17:38:37.606Z · LW(p) · GW(p)

Even if you're good at ascribing intent to others, stating it is likely to do more harm than good. I've tried in the past to give people my analysis of why they're thinking what they're thinking. It inevitably reinforces their resistance rather than lessening it, since agreeing with my analysis would mean publicly acknowledging a character flaw.

It's much better to leave them a line of retreat, letting them think of changing their mind in terms of "updating on new evidence" rather than "admitting irrationality".

P.S. I'm not responding to the linked example, but to the general practice which I think is counterproductive.

Replies from: Rain
comment by Rain · 2010-03-20T17:44:53.655Z · LW(p) · GW(p)

I'll provisionally agree that it's not all that useful to tell people what you think of their intent. This is why I linked it as a 'bad thing' for me to say: I considered it a generally combative post, where my intent was to sneer at the other person rather than alter their behavior for the better.

I tend to get around this in real world conversations with the use of questions rather than statements, but that requires rapid back and forth. Text forums are just about the worst place to use my most-developed methods of discussion...

comment by Morendil · 2010-03-19T17:40:55.149Z · LW(p) · GW(p)

Indulge in private. :)

comment by Kevin · 2010-03-19T16:09:19.133Z · LW(p) · GW(p)

I've twice intentionally taken ~48 hours away from this site after I said something stupid. Give it a try.

Just leave the conversations hanging; come back days or weeks later if you want. Also, admitting you were wrong goes a long way if you realize you said something that was indeed incorrect, but the rationality police won't come after you if you leave a bad conversation unresolved.

comment by CannibalSmith · 2010-03-19T16:34:08.646Z · LW(p) · GW(p)

gg

Replies from: kodos96
comment by kodos96 · 2010-03-20T05:52:52.246Z · LW(p) · GW(p)

Just curious: who downvoted this, and why? I found it amusing, and actually a pretty decent suggestion. It bothers me that there seems to be an anti-humor bias here... it's been stated that this is justified in order to keep LW from devolving into a social rather than an intellectual forum, and I guess I can understand that... but I don't understand why a comment which is actually germane to the parent's question, but just happens to also be mildly amusing, should warrant a downvote.

Replies from: ata, prase, Morendil, Nick_Tarleton
comment by ata · 2010-03-20T07:01:27.138Z · LW(p) · GW(p)

Did the comment say something other than "gg" before? I'm not among those who downvoted it, but I don't know what it means. (I'd love to know why it's "amusing, and actually a pretty decent suggestion".)

Replies from: Matt_Simpson, kodos96
comment by Matt_Simpson · 2010-03-20T08:05:27.792Z · LW(p) · GW(p)

"good game"

It's sort of like an e-handshake for online gaming to acknowledge that you have lost the game - at least in the online mtg community.

Replies from: kpreid, gimpf
comment by kpreid · 2010-03-20T11:07:02.160Z · LW(p) · GW(p)

In my experience (most of which is a few years old) it is said afterward, but has its literal meaning, i.e. that you enjoyed the game, not necessarily that you lost it.

Replies from: Sniffnoy, SoullessAutomaton
comment by Sniffnoy · 2010-03-20T23:54:12.676Z · LW(p) · GW(p)

I think this depends on whether the game is one that's usually played to the end or one where one of the players usually concedes. If it's the latter, "gg" is probably a concession.

comment by SoullessAutomaton · 2010-03-20T23:01:29.003Z · LW(p) · GW(p)

A nontrivial variant is also directed sarcastically at someone who lost badly (this seems to be most common where the ambient rudeness is high, e.g., battle.net).

comment by gimpf · 2010-03-20T09:37:49.774Z · LW(p) · GW(p)

Or a handshake to start a game. It would stop being funny pretty fast if one gave up when presented with an empty Go board ;-)

comment by kodos96 · 2010-03-20T09:12:25.651Z · LW(p) · GW(p)

Hmmm... I guess I was committing the mind projection fallacy in assuming everyone got the reference, and that the downvote was for disapproving of it rather than just not getting it.

comment by prase · 2010-03-22T15:32:10.431Z · LW(p) · GW(p)

I have downvoted it. I had no idea what it meant (before reading the comments). Quick googling doesn't reveal that much.

comment by Morendil · 2010-03-20T09:34:22.686Z · LW(p) · GW(p)

I thought it was too short and obscure. (On KGS we say that at the start of a game. The normal end-of-game ritual is "thx". Or sometimes storming off without a word after a loss, to be honest.)

Replies from: CannibalSmith
comment by CannibalSmith · 2010-03-20T09:43:06.996Z · LW(p) · GW(p)

Explaining it would ruin the funnies. Also, Google. Also, inevitably, somebody else did the job for me.

comment by Nick_Tarleton · 2010-03-22T15:41:36.827Z · LW(p) · GW(p)

Maybe someone thought it rude to make a humorous reply to a serious and apparently emotionally loaded question.

comment by RobinZ · 2010-03-19T15:28:33.688Z · LW(p) · GW(p)

There's the stopgap measure of asking for a rain check (e.g. "I'm sorry, but this conversation is just getting me frustrated - I'll come back tomorrow and see if I can come up with a better reply"), but I'm not sure what the best methods are for concluding a conversation. Most of the socially accepted ones are logically rude.

comment by CannibalSmith · 2010-03-19T14:22:56.600Z · LW(p) · GW(p)

Are you guys still not tired of trying to shoehorn a reddit into a forum?

Replies from: RobinZ, Kevin, Jack
comment by RobinZ · 2010-03-19T14:28:59.421Z · LW(p) · GW(p)

I don't understand the question. What are we doing that you describe this way, and why do you expect us to be tired of it?

Replies from: jimmy, CronoDAS
comment by jimmy · 2010-03-19T23:40:47.357Z · LW(p) · GW(p)

There are a lot of open-thread posts that would be better dealt with on a proper forum than in an open thread on a reddit-like system.

Replies from: RobinZ
comment by RobinZ · 2010-03-20T00:32:16.356Z · LW(p) · GW(p)

You're right, but this isn't supposed to be a forum - I think it's better to make off-topic conversations less convenient. The system seems to work adequately right now.

Replies from: nhamann
comment by nhamann · 2010-03-20T04:18:01.795Z · LW(p) · GW(p)

I suppose you're right in saying that LW isn't supposed to be a forum, but the fact remains that there is a growing trend towards more casual/off-topic/non-rationalism discussion, which seems perfectly fine to me given that we are a community of generally like-minded people. I suspect that it would be preferable to many if LW had better accommodations for this sort of interaction, perhaps something separate from the main site so we could cleanly distinguish serious rationalism discussion from off-topic discussion.

Replies from: Matt_Duing
comment by Matt_Duing · 2010-03-22T23:09:07.003Z · LW(p) · GW(p)

Perhaps a monthly or quarterly lounge thread could serve this function, provided it does not become too much of a distraction.

comment by CronoDAS · 2010-03-19T22:20:12.830Z · LW(p) · GW(p)

Me neither.

comment by Kevin · 2010-03-19T15:06:53.218Z · LW(p) · GW(p)

I'm tired of it; I'd like to get a real subreddit enabled here as soon as possible.

comment by Jack · 2010-03-22T23:27:44.251Z · LW(p) · GW(p)

We could just start a forum and stop complaining about it.

comment by Liron · 2010-03-19T08:40:59.414Z · LW(p) · GW(p)

Startup idea:

We've all been waiting for the next big thing to come after Chatroulette, right? I think live video is going to be huge -- it's a whole new social platform.

So the idea is: Instant Audience. Pay $1, get a live video audience of 10 people for 1 minute. The value prop is attention.

The site probably consists of a big live video feed of the performer, and then 10 little video feeds for the audience. The audience members can't speak unless they're called on by the performer, and they can be "brought up on stage" as well.

For the performer, it's a chance to practice your speech / stand-up comedy routine / song, talk about yourself, ask people questions, lead a discussion, or pursue limitless other possibilities (ok, we are probably gonna have to deal with some suicides and jackers off).

For the audience, it's a free live YouTube. It's like going to the theater instead of watching TV, but you can still channel surf. It's a new kind of live entertainment with great audience participation.

Better yet, you can create value by holding some audience members to higher standards of behavior. There can be a reputation system, and maybe you can attend free performances to build up your Karma (by giving useful feedback without whipping it out); then companies pay more for focus groups with high-Karma users.

Apparently businesses shell out tons for focus groups; we're talking free iPod touch per person per couple hours.

I think the biggest implementation challenge is gonna be constantly having to test live video with lots of simultaneous users. But it might be worth it if you want to ride the live video web wave. There are companies who would love to run video focus groups for at least $1 per person-minute.

Replies from: mattnewport, Kevin
comment by mattnewport · 2010-03-19T15:17:23.178Z · LW(p) · GW(p)

I don't think you should charge a fixed rate per person. An auction or market would be a better way to set pricing, something like Amazon's Mechanical Turk or the Google AdWords auctions.

comment by Kevin · 2010-03-19T08:46:49.458Z · LW(p) · GW(p)

I give it a solid "that could work" but the business operations are non-trivial. You probably would need someone with serious B2B sales experience, ideally already connected with the NYC-area focus group/marketing community.

Replies from: JamesAndrix
comment by JamesAndrix · 2010-03-19T14:47:10.743Z · LW(p) · GW(p)

If you're charging a dollar a group, I don't think a salesperson could pay for themselves.

You could probably charge more to anyone who would otherwise have to rent a room/offer incentives/etc., but that would hurt adoption by more casual presenters, which I think you would need to keep your audience.

comment by mattnewport · 2010-03-19T18:43:06.352Z · LW(p) · GW(p)

Interesting article on an Indian rationalist (not quite in the same vein as Less Wrong-style rationalism, but a worthy cause nonetheless). Impressive display of 'putting your money where your mouth is':

Sceptic challenges guru to kill him live on TV

When a famous tantric guru boasted on television that he could kill another man using only his mystical powers, most viewers either gasped in awe or merely nodded unquestioningly. Sanal Edamaruku’s response was different. “Go on then — kill me,” he said.

I also rather liked this response:

When the guru’s initial efforts failed, he accused Mr Edamaruku of praying to gods to protect him. “No, I’m an atheist,” came the response.

H/T Hacker News.

Replies from: Jack, FAWS
comment by Jack · 2010-03-21T09:19:10.271Z · LW(p) · GW(p)

As cool as this was, there is reason to doubt its authenticity. There doesn't seem to be any internet record of Pandit Surender Sharma, "India's most powerful Tantrik," except for this TV event. Moreover, about a minute in, it looks like the tantrik is starting to laugh. Maybe someone who knows the country can tell us if this Pandit Sharma fellow is really a major figure there.

I mean, what possible incentive would the guy have for going on TV to be humiliated?

Replies from: prase
comment by prase · 2010-03-22T15:16:03.304Z · LW(p) · GW(p)

what possible incentive would the guy have for going on TV to be humiliated?

Perhaps he really believed he could kill the skeptic.

comment by FAWS · 2010-03-19T18:52:04.346Z · LW(p) · GW(p)

Note: Most of the article is not about the TV confrontation so it's well worth reading even if you already heard about that in 2008.

comment by Strange7 · 2010-03-30T20:28:58.676Z · LW(p) · GW(p)

What would be the simplest credible way for someone to demonstrate that they were smarter than you?

Replies from: wedrifid, Jack, Hook, Morendil
comment by wedrifid · 2010-03-30T20:45:02.903Z · LW(p) · GW(p)

If they disagree with me and I (eventually?) agree with them, three times in a row. Applies more to questions of logic than questions of knowledge.

Replies from: RobinZ
comment by RobinZ · 2010-03-30T21:06:01.069Z · LW(p) · GW(p)

I'm not sure about the "three" or the "applies more to questions of logic than questions of knowledge", but yeah, pretty much. Smarts gets you to better answers faster.

Replies from: wedrifid
comment by wedrifid · 2010-03-30T21:16:14.003Z · LW(p) · GW(p)

I'm not sure about the throwaway 'three' either, but the 'crystallized vs. fluid' distinction is one that does hold if I am considering "demonstrate to me..." I find that this varies a lot based on personality. What people know doesn't impress me nearly as much as seeing how they respond to new information, including how they update their understanding in response.

Replies from: RobinZ
comment by RobinZ · 2010-03-30T21:26:00.937Z · LW(p) · GW(p)

That makes sense. Those two bits are probably fairly good approximations to correct, but I can smell a possibility of better accuracy. (For example: "logic" is probably overspecific, and experience sounds like it should land on the "knowledge" side of the equation but drawing the correct conclusions from experience is an unambiguous sign of intelligence.)

I generally agree, I'm merely less-than-confident in the wording.

Replies from: wedrifid
comment by wedrifid · 2010-03-31T02:36:09.178Z · LW(p) · GW(p)

but I can smell a possibility of better accuracy.

Definitely.

(For example: "logic" is probably overspecific

Ditto.

, and experience sounds like it should land on the "knowledge" side of the equation but drawing the correct conclusions from experience is an unambiguous sign of intelligence.)

Absolutely.

I generally agree, I'm merely less-than-confident in the wording.

So am I. Improve it for me?

Replies from: RobinZ
comment by RobinZ · 2010-03-31T03:14:34.528Z · LW(p) · GW(p)

I would quickly start believing someone was smart if they repeatedly drew conclusions that looked wrong, but which I would later discover were correct. I would believe they were smarter than me if, as a rule, whenever they and I are presented with a problem, they reach important milestones in the solution or dissolution of the problem more quickly than I can, even without prior knowledge of the problem.

Concrete example: xkcd #356 includes a simple but difficult physics problem. After a long time (tens of minutes) beating my head against it, letting it stew (for months, at least), and beating my head against it again (tens of minutes), I'd gotten as far as a wrong answer and the first part of a method. Using nothing but a verbal description of the problem statement from me, my dad pulled out the same method, noting the flaw in it that I had missed in reaching my wrong answer, within five minutes or so. While driving.

(I've made no progress past that insight - rot13: juvpu vf gung lbh pna (gel gb) fbyir sbe gur pheerag svryq sebz n "fbhepr" be "fvax" bs pheerag, naq gura chg n fbhepr-fvax cnve vagb gur argjbex naq nqq Buz'f-ynj ibygntrf gb trg gur erfvfgnapr - since the last time I beat my head against that problem, by the way.)

Replies from: wedrifid
comment by wedrifid · 2010-03-31T03:21:58.582Z · LW(p) · GW(p)

Bah. I was hoping your dad gave the actual answer. That's as far as I got too. :)

Replies from: RobinZ
comment by RobinZ · 2010-03-31T03:29:00.422Z · LW(p) · GW(p)

He suggested fbyivat n frevrf grez-ol-grez zvtug or arprffnel but I didn't know precisely what he meant or how to do it.

Replies from: wnoise, Cyan
comment by wnoise · 2010-03-31T04:30:48.665Z · LW(p) · GW(p)

The canonical method is to nggnpu n pheerag qevire gb rirel abqr. Jevgr qbja gur Xvepubss'f ynj ynj rirel abqr va grezf bs gur vawrpgrq pheerag, gur ibygntr ng gung ybpngvba, naq gur ibygntr ng rnpu nqwnprag cbvag. Erjevgr gur nqwnprag ibygntrf va grezf bs genafyngvba bcrengbef, gura qb n (frzv-qvfpergr) Sbhevre genafsbez (gur qbznva vf vagrtref, pbqbznva obhaqrq serdhrapvrf, fb vg'f gur bccbfvgr bs n Sbhevre frevrf), chg va gur pbaqvgvbaf sbe n havg zntavghqr fbhepr naq fvax, naq vaireg vg, juvpu jvyy tvir lbh gur ibygntrf rireljurer. Gur qvssreraprf va ibygntrf orgjrra gur fbhepr naq fvax vf gur erfvfgnapr, orpnhfr gurer vf havg pheerag sybjvat npebff gurz.

Replies from: wedrifid
comment by wedrifid · 2010-03-31T04:35:00.487Z · LW(p) · GW(p)

Buggrit. Build a grid of resistors a few meters square and pull out the multimeter.

Replies from: wnoise
comment by wnoise · 2010-03-31T05:11:49.460Z · LW(p) · GW(p)

That works fairly well, as things converge quickly.
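
A minimal numerical sketch of that finite-grid approach, for anyone who would rather simulate than solder (the grid sizes and the knight's-move node pair below are just illustrative assumptions): build an n-by-n grid of 1-ohm resistors, inject +1 A at one node and draw it out at the other, solve Kirchhoff's current law via the graph Laplacian, and read the effective resistance off the voltage difference.

```python
# Finite-grid approximation of the infinite resistor grid.
import numpy as np

def grid_resistance(n, a, b):
    """Effective resistance between nodes a and b (row, col) on an
    n x n grid of unit resistors with free boundaries."""
    idx = lambda r, c: r * n + c
    lap = np.zeros((n * n, n * n))                  # graph Laplacian
    for r in range(n):
        for c in range(n):
            for dr, dc in ((0, 1), (1, 0)):         # right and down edges
                rr, cc = r + dr, c + dc
                if rr < n and cc < n:
                    i, j = idx(r, c), idx(rr, cc)
                    lap[i, i] += 1; lap[j, j] += 1
                    lap[i, j] -= 1; lap[j, i] -= 1
    current = np.zeros(n * n)
    current[idx(*a)], current[idx(*b)] = 1.0, -1.0  # inject +1 A, extract 1 A
    lap[0, :] = 0; lap[0, 0] = 1; current[0] = 0    # ground one corner node
    v = np.linalg.solve(lap, current)
    return v[idx(*a)] - v[idx(*b)]                  # R_eff = (V_a - V_b) / 1 A

# Two nodes a knight's move apart, near the centre of the grid.
for n in (11, 21, 41):
    m = n // 2
    print(n, grid_resistance(n, (m, m), (m + 1, m + 2)))
```

Successively larger grids give a feel for how quickly the finite approximation settles toward the infinite-grid value.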

Replies from: RobinZ
comment by RobinZ · 2010-03-31T10:58:12.643Z · LW(p) · GW(p)

Wait, so if I want to solve it myself, I shouldn't read the text in the great-grandparent of this comment?

Replies from: wnoise, cupholder
comment by wnoise · 2010-03-31T15:09:38.290Z · LW(p) · GW(p)

Well, yes, that's why I rot13d it. I'll unrot13 the beginning which will provide a clear warning.

comment by cupholder · 2010-03-31T11:32:17.198Z · LW(p) · GW(p)

I'm not wnoise, but yeah, you probably wouldn't want to read the (now great-)great-grandparent. Put it this way: the first 5 words are 'The canonical method is to.' (I read it anyway 'cuz I'm spoiler resistant. I don't think my math/EE aptitude is enough to carry out the method wnoise gives.)

Replies from: RobinZ
comment by RobinZ · 2010-03-31T14:23:23.710Z · LW(p) · GW(p)

Thanks!

comment by Cyan · 2010-03-31T04:16:53.841Z · LW(p) · GW(p)

I know little about it, but if I knew how to compute equivalent resistances beyond the basics of resistors in parallel and in series, I'd fbyir n ohapu bs rire-ynetre svavgr tevqf, fbeg bhg gur trareny rkcerffvba sbe na A-ol-Z tevq jvgu gur gnetrg abqrf nyjnlf ng gur pragre, naq gura gnxr gur yvzvg nf A naq Z tb gb vasvavgl.

Replies from: RobinZ
comment by RobinZ · 2010-03-31T10:57:08.792Z · LW(p) · GW(p)

You can try Xvepubss'f pvephvg ynjf, but at the moment I'm thinking of nffhzvat gung nyy pheeragf sybj njnl sebz zl fbhepr naq qrgrezvavat ybjre naq hccre yvzvgf ba gur pheeragf V arrq onfrq ba gubfr vardhnyvgvrf.

Replies from: wedrifid, Cyan
comment by wedrifid · 2010-03-31T11:57:25.592Z · LW(p) · GW(p)

At this rate I'm going to be proficient at reading rot13 within a week!

Replies from: RobinZ
comment by RobinZ · 2010-03-31T14:23:05.529Z · LW(p) · GW(p)

I'm intentionally not reading anything in rot13 and always using the electronic translator, with hopes that I will not become proficient.

comment by Cyan · 2010-03-31T17:18:41.844Z · LW(p) · GW(p)

The problem doesn't say anything about sources, so I'm not sure what I'm supposed to assume for voltage or current. Can you recommend a good instructional primer? I need something more than Wikipedia's infodump presentation.

Replies from: RobinZ
comment by RobinZ · 2010-03-31T18:53:37.065Z · LW(p) · GW(p)

I'm using the term as a metaphor from fluid dynamics - n fbhepr vf n abqr va juvpu pheerag vf vawrpgrq jvgubhg rkcyvpvgyl gnxvat vg bhg naljurer ryfr - orpnhfr gur flfgrz vf vasvavgr, pheerag nqqrq ng n abqr pna sybj bhg gb vasvavgl gb rssrpgviryl pbzcyrgr gur pvephvg, naq orpnhfr gur rdhngvbaf ner flzzrgevp, gur svany fbyhgvba sbe gur pheerag (sebz juvpu Buz'f ynj sbe ibygntr naq ntnva sbe rssrpgvir erfvfgnapr) pna or gubhtug bs nf gur fhz bs n cbfvgvir fbhepr ng bar raqcbvag naq n artngvir fbhepr (n fvax) ng gur bgure.

I don't know how this compares to wnoise's canonical method - it might be that this is a less effective path to the solution.

Replies from: wnoise, Cyan
comment by wnoise · 2010-03-31T20:01:08.161Z · LW(p) · GW(p)

That is definitely part of the method.

comment by Cyan · 2010-03-31T19:24:43.012Z · LW(p) · GW(p)

I can see why my idea is incompatible with your approach.

comment by Jack · 2010-03-30T23:03:26.223Z · LW(p) · GW(p)

Mathematical ability seems to be a high-sensitivity test for this. I cannot recall ever meeting someone who I concluded was smarter than me who was not also able to solve and understand math problems I cannot. But it seems to have a surprisingly low specificity -- people who are significantly better at math than me (and this probably includes everyone with a degree in a math-heavy discipline) can still be strangely stupid.

Hypotheses:

  1. The people who are better at math than me are actually smarter than me, I'm too dumb to realize it.
  2. Intelligence has pretty significant domain variability and I happen to be especially low in mathematical intelligence relative to everything else.
  3. My ADHD makes learning math especially hard, perhaps I'm quite good at grasping mathematical concepts but lack the discipline to pick up the procedural knowledge others have.
  4. Lots of smart people compartmentalize their intelligence; they can't or won't apply it to areas other than math. (Don't know if this differs from #2 except that it makes the math people sound bad instead of me)

Ideas?

Replies from: RobinZ, jimmy, Mass_Driver
comment by RobinZ · 2010-03-31T00:02:46.929Z · LW(p) · GW(p)

The easiest of your hypotheses to examine is 1: can you describe (suitably anonymized, of course) three* of these stupid math whizzes and the evidence you used to infer their stupidity?

* I picked "three" because more would be (a) a pain and (b) too many for a comment.

Replies from: Jack
comment by Jack · 2010-03-31T03:53:26.397Z · LW(p) · GW(p)

can you describe (suitably anonymized, of course) three* of these stupid math whizzes and the evidence you used to infer their stupidity

Of course the problem is the most memorable examples are also the easiest cases.

1: Dogmatic Catholic; I knew her for a long time without ever witnessing her doing anything clever.

2: This guy is a nuclear physicist, so I assume he is considerably better at math than I am. But this is probably bad evidence, as I only know about him because he is so stupid. But there appear to be quite a few scientists and engineers who hold superstitious and irrational beliefs: witness all the credentialed creationists.

3: Instead of just picking someone, take the Less Wrong commentariat. I suspect all but a handful of the regular commenters know more math than I do. I'm not especially smarter than anybody here. Less Wrong definitely isn't dumb. But I don't feel like I'm at the bottom of the barrel either. My sense is that my intellect is roughly comparable to the average Less Wrong commenter even though my math skills aren't. I would say the same about Alicorn, for example. She seems to compare just fine though she's said she doesn't know a lot of math. Obviously this isn't a case of people being good at math and being dumb, but it is a case of people being good at math while not being definitively smarter than I am.

Replies from: RobinZ
comment by RobinZ · 2010-03-31T14:26:25.340Z · LW(p) · GW(p)

I suspect that "smarter" has not been defined with sufficient rigor here to make analysis possible.

comment by jimmy · 2010-03-31T19:31:33.720Z · LW(p) · GW(p)

I'm going with number 2 on this one (possibly a result of doing 4 either 'actively' or 'passively').

I have a very high error rate when doing basic math and am also quite slow (maybe even before accounting for fixing errors). People whose ability to understand math tops out at basic calculus can still beat me on algebra tests. This effect is increased by the fact that, due to Mathematica and such, I have no reason to store things like the algorithm for doing polynomial long division. It takes more time and errors to rederive it on the spot.

At the higher levels of math there were people in my classes who were significantly better at it than I, and at the time it seemed like they were just better than me at math in every way. Another classmate and I (who seem to be relative peers at 'math') would consistently be better at "big picture" stuff, forming analogies to other problems, and just "seeing" (often actually using the visual cortex) the answer where they would just crank through math and come out with the same answer 3 pages of neat handwriting later.

As of writing this, the alternative (self-serving) hypothesis has come up that maybe those that I saw as really good at math weren't innately better than me (except for having a lower error rate and possibly being faster) at math, but had just put more effort into it and committed more tricks to memory. This is consistent with the fact that these were the kids who were very studious, though I don't know how much of the variance that explains.

comment by Mass_Driver · 2010-03-31T15:28:29.595Z · LW(p) · GW(p)

If you can't ever recall meeting someone who you concluded was smarter than you who wasn't good at X, and you didn't use any kind of objective criteria or evaluation system to reach that conclusion, then you're probably (consciously or otherwise) incorporating X into your definition of "smarter."

There's a self-promotion trap here -- you have an incentive to act like the things you're good at are the things that really matter, both because (1) that way you can credibly claim that you're at least as smart as most people, and (2) that way you can justify your decision to continue to focus on activities that you're good at, and which you probably enjoy.

I think the odds that you have fallen into this self-promotion trap are way higher than the odds for any of your other hypotheses.

If you haven't already, you may want to check out the theory of multiple intelligences and the theory of intelligence as information processing.

comment by Hook · 2010-03-31T16:42:03.886Z · LW(p) · GW(p)

It's not really all that simple, and it's domain specific, but having someone take the keyboard while pair programming helped to show me that one person in particular was far smarter than me. I was in a situation where I was just trying to keep up enough to catch the (very) occasional error.

comment by Morendil · 2010-03-31T15:15:27.246Z · LW(p) · GW(p)

Teach me something I didn't know.

Replies from: wedrifid
comment by wedrifid · 2010-03-31T15:44:42.174Z · LW(p) · GW(p)

Really? You're easily impressed. I can't think of one teacher from my first 12 years of education that I am confident is smarter than me. I'd also be surprised if not a single one of the people I have taught was ever smarter than me (and hence mistaken if they apply the criteria you propose). But then, I've already expressed my preference for associating 'smart' with fluid intelligence rather than crystallized intelligence. Do you actually mean 'knows more stuff' when you say 'smarter'? (A valid thing to mean FWIW, just different from what I mean.)

Replies from: Morendil
comment by Morendil · 2010-03-31T15:54:36.632Z · LW(p) · GW(p)

They were smarter than you then, in the topic area in which you learned something from them.

When you've caught up with them, and you start being able to teach them instead of them teaching you, that's a good hint that you're smarter in that topic area.

When you're able to teach many people about many things, you're smart in the sense of being able to easily apply your insights across multiple domains.

The smartest person I can conceive of is the person able to learn by themselves more effectively than anyone else can teach them. To achieve that they must have learned many insights about how to learn, on top of insights about other domains.

Replies from: wedrifid
comment by wedrifid · 2010-03-31T16:04:09.156Z · LW(p) · GW(p)

It sounds like you do mean (approximately) 'knows more stuff' when you say 'smarter', with the aforementioned difference in nomenclature (and quite probably in values) from mine.

Replies from: Morendil
comment by Morendil · 2010-03-31T16:22:43.447Z · LW(p) · GW(p)

I don't think that's a fair restatement of my expanded observations. It depends on what you mean by "stuff" - I definitely disagree if you substitute "declarative knowledge" for it, and this is what "more stuff" tends to imply.

If "stuff" includes all forms of insight as well as declarative knowledge, then I'd more or less agree, with the provision that you must also know the right kind of stuff, that is, have meta-knowledge about when to apply various kinds of insights.

I quite like the frame of Eliezer's that "intelligence is efficient cross-domain optimization", but I can't think of a simple test for measuring optimization power.

The demand for "the simplest credible way" sounds suspiciously like it's asking for a shortcut to assessing optimization power. I doubt that there is such a shortcut. Lacking such a shortcut, a good proxy, or so it seems to me, is to assess what a person's optimization power has gained them: if they possess knowledge or insights that I don't, that's good evidence that they are good at learning. If they consistently teach me things (if I fail to catch up to them), they're definitely smarter. So each thing they teach me is (probabilistic) evidence that they are smarter.

Hence my use of "teach me something" as a unit of evidence for someone being smarter.

Replies from: wedrifid
comment by wedrifid · 2010-04-01T00:29:11.856Z · LW(p) · GW(p)

I don't think that's a fair restatement of my expanded observations. It depends on what you mean by "stuff" - I definitely disagree if you substitute "declarative knowledge" for it, and this is what "more stuff" tends to imply.

That's reasonable. I don't mean to reframe your position as something silly; rather, I say that I do not have a definition of 'smarter' for which the below is true:

They were smarter than you then, in the topic area in which you learned something from them.

When you've caught up with them, and you start being able to teach them instead of them teaching you, that's a good hint that you're smarter in that topic area.

I agree with what you say here:

The demand for "the simplest credible way" sounds suspiciously like it's asking for a shortcut to assessing optimization power. I doubt that there is such a shortcut. Lacking such a shortcut, a good proxy, or so it seems to me, is to assess what a person's optimization power has gained them: if they possess knowledge or insights that I don't, that's good evidence that they are good at learning. If they consistently teach me things (if I fail to catch up to them), they're definitely smarter. So each thing they teach me is (probabilistic) evidence that they are smarter.

...but with a distinct caveat of all else being equal. I.e., if I deduce that someone has x more knowledge than me, then that can be evidence that they are not smarter than me if their age or position is such that they could be expected to have 2x more knowledge than me. So in the 'my teachers when I was 8' category it would be a mistake (using my definition of 'smarter') to make the conclusion: "They were smarter than you then, in the topic area in which you learned something from them".

comment by JulianMorrison · 2010-03-20T01:31:07.220Z · LW(p) · GW(p)

Just a thought about the Litany of Tarski - be very careful to recognize that the "not" is a logical negation. If the box contains not-a-diamond your assumption will likely be that it's empty. The frog that jumps out when you open it will surprise you!

The mind falls easily into oppositional pairs of X and opposite-of-X (which isn't the same as the more comprehensive not-X), and once you create categorizations, you'll have a tendency to under-consider outcomes that don't categorize.

comment by Psy-Kosh · 2010-03-19T05:01:07.224Z · LW(p) · GW(p)

Might as well move/call attention to the thing about the macroscopic quantum superposition right away, so we can talk about it here.

Replies from: bogdanb
comment by bogdanb · 2010-03-19T06:43:25.361Z · LW(p) · GW(p)

I was wondering: Would something like this be expected to have any kind of visible effect?

(Their object is at the limit of bare-eye visibility in favorable lighting,* but suppose that they can expand their results by a couple orders of magnitude.)

From “first principles” I’d expect that the light needed to actually look at the thing would collapse the superposition (in the sense of first entangling the viewer with the object, so as to perceive a single version of it in every branch, and then with the rest of the universe, so each world-branch would contain just a “classical” observation).

But then again one can see interference patterns with diffracted laser light, and I’m confused about the distinction.

[eta:] For example, would coherent light excite the object enough to break the superposition, or can it be used to exhibit, say, different diffraction patterns when diffracted on different superpositions of the object?

[eta2:] Another example: if the object’s wave-function has zero amplitude over a large enough volume, you should be able to shine light through that volume just as through empty space (or even send another barely-macroscopic object through). I can’t think of any configuration where this distinguishes between the superposition and simply the object being (classically) somewhere else, though; does anyone?

(IIRC, their resonator’s size was cited as “about a billion atoms”, which turns out as a cube with .02µm sides for silicon; when bright light is shined at a happy angle, depending on the background, and especially if the thing is not cubical, you might just barely see it as a tiny speck. With an optical microscope (not bare-eyes, but still more intuitive than a computer screen) you might even make out its approximate shape. I used to play with an atomic-force microscope in college: the cantilever was about 50µm, and I could see it with ease; I don’t remember ever having seen the tip itself, which was about the scale we’re talking about, but it might have been just barely possible with better viewing conditions.)

Replies from: Mitchell_Porter, Nick_Tarleton, JamesAndrix
comment by Mitchell_Porter · 2010-03-20T09:35:26.530Z · LW(p) · GW(p)

Luboš Motl writes: "it's hard to look at it while keeping the temperature at 20 nanokelvin - light is pretty warm."

My quick impression of how this works:

You have a circuit with electrons flowing in it (picture). At one end of the circuit is a loop (Josephson junction) which sensitizes the electron wavefunctions to the presence of magnetic field lines passing through the loop. So they can be induced into superpositions - but they're just electrons. At the other end of the circuit, there's a place where the wire has a dangly hairpin-shaped bend in it. This is the resonator; it expands in response to voltage.

So we have a circuit in which a flux detector and a mechanical resonator are coupled. The events in the circuit are modulated at both ends - by passing flux through the detector and by beaming microwaves at the resonator. But the quantum measurements are taken only at the flux detector site. The resonator's behavior is inferred indirectly, by its effects on the quantum states in the flux detector to which it is coupled.

The quantum states of the resonator are quantized oscillations (phonons). A classical oscillation consists of something moving back and forth between two extremes. In a quantum oscillation, you have a number of wave packets (peaks in the wavefunction) strung out between the two extremal positions; the higher the energy of the oscillation, the greater the number of peaks. Theoretically, such states are superpositions of every classical position between the two extremes. This discussion suggests how the appearance of classical oscillation emerges from the distribution of peaks.

So you should imagine that the little hairpin-bend part of the circuit is getting into superpositions like that, in which the elements of the superposition differ by the elongation of the hairpin; and then this is all coupled to electrons in the loop at the other end of the circuit.

I think this is all quite relevant for quantum biology (e.g. proteins in superposition), where you might expect to see a coupling between current (movement of electrons) and conformation (mechanical vibration).
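
As a side note, the "higher energy, more peaks" statement is easy to check numerically for the textbook harmonic oscillator (a minimal sketch, assuming nothing specific to this experiment): the n-th number state has n + 1 peaks in |psi|^2.

```python
# Count the local maxima of |psi_n|^2 for the harmonic oscillator.
import numpy as np
from numpy.polynomial.hermite import hermval

x = np.linspace(-8, 8, 20001)
for n in range(7):
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                               # select H_n
    psi = hermval(x, coeffs) * np.exp(-x**2 / 2)  # unnormalized eigenfunction
    prob = psi**2
    # Count strict local maxima of |psi_n|^2 on the grid.
    peaks = np.sum((prob[1:-1] > prob[:-2]) & (prob[1:-1] > prob[2:]))
    print(f"n = {n}: {peaks} peaks")              # expect n + 1
```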

comment by Nick_Tarleton · 2010-03-19T16:36:55.616Z · LW(p) · GW(p)

IIRC, their resonator’s size was cited as “about a billion atoms”, which turns out as a cube with .02µm sides for silicon

Every source I've seen (e.g.) gives the resonator as flat, some tens of µm long, and containing ~a trillion atoms.

comment by JamesAndrix · 2010-03-19T06:46:17.330Z · LW(p) · GW(p)

Duh, it would be exactly like the agents in The Matrix.

comment by Alicorn · 2010-03-31T00:38:19.158Z · LW(p) · GW(p)

If you had to tile the universe with something - something simple - what would you tile it with?

Replies from: Clippy, RobinZ, Mitchell_Porter, jimrandomh, JGWeissman, Matt_Simpson, wedrifid, Document, Rain, Jack, Kevin
comment by Clippy · 2010-03-31T01:29:23.279Z · LW(p) · GW(p)

Paperclips.

comment by RobinZ · 2010-03-31T00:51:09.806Z · LW(p) · GW(p)

I have no interest in tiling the universe with anything - that would be dull. Therefore I would strive to subvert the spirit of such a restriction as effectively as I could. Off the top of my head, pre-supernova stars seem like adequate tools for the purpose.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-03-31T02:15:56.193Z · LW(p) · GW(p)

Are you sure that indiscriminately creating life in this fashion is a good thing?

Replies from: RobinZ
comment by RobinZ · 2010-03-31T02:33:57.311Z · LW(p) · GW(p)

No, but given the restrictions of the hypothetical it's on my list of possible courses of action. Were there any possibility of my being forced to make the choice, I would definitely want more options than just this one to choose from.

comment by Mitchell_Porter · 2010-03-31T00:44:03.878Z · LW(p) · GW(p)

Can the tiles have states that change and interact?

Replies from: Alicorn
comment by Alicorn · 2010-03-31T00:47:19.865Z · LW(p) · GW(p)

Only if that doesn't violate the "simple" condition.

Replies from: ata
comment by ata · 2010-03-31T01:37:17.314Z · LW(p) · GW(p)

What counts as simple?

If something capable of serving as a cell in a cellular automaton would count as simple enough, I'd choose that. And I'd design it to very occasionally malfunction and change states at random, so that interesting patterns could spontaneously form in the absence of any specific design.
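
A toy sketch of that idea, assuming Conway's Game of Life as the rule and a small per-cell flip probability as the "malfunction" (both choices are purely illustrative):

```python
# Conway's Life with rare random "malfunctions", starting from a blank tiling.
import numpy as np

rng = np.random.default_rng(0)

def step(grid, flip_prob=1e-3):
    """One generation of Life, plus occasional random state flips."""
    # Count the eight neighbours of every cell (toroidal wrap-around).
    neighbours = sum(
        np.roll(np.roll(grid, dr, axis=0), dc, axis=1)
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Standard Life rule: birth on 3 neighbours, survival on 2 or 3.
    alive = (neighbours == 3) | ((grid == 1) & (neighbours == 2))
    # Each cell occasionally flips at random, seeding structure from nothing.
    flips = rng.random(grid.shape) < flip_prob
    return (alive ^ flips).astype(np.uint8)

grid = np.zeros((64, 64), dtype=np.uint8)   # a perfectly uniform "tiling"
for _ in range(1000):
    grid = step(grid)
print("live cells after 1000 generations:", int(grid.sum()))
```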

Replies from: Alicorn
comment by Alicorn · 2010-03-31T01:41:07.040Z · LW(p) · GW(p)

Basically, the "simple" condition was designed to elicit answers more along the lines of "paperclips!" or "cheesecake!", rather than "how can I game the system so that I can have interesting stuff in the universe again after the tiling happens?" You're not playing fair if you do that.

I find this an interesting question because while it does seem to be a consensus that we don't want the universe tiled with orgasmium, it also seems intuitively obvious that this would be less bad than tiling the universe with agonium or whatever you'd call it; and I want to know what floats to the top of this stack of badness.

Replies from: Clippy, wedrifid, RobinZ
comment by Clippy · 2010-03-31T02:01:05.318Z · LW(p) · GW(p)

Basically, the "simple" condition was designed to elicit answers more along the lines of "paperclips!"

Mission accomplished! c=@

Now, since there seems to be a broad consensus among the posters that paperclips would be the optimal thing to tile the universe with, how about we get to work on it?

Replies from: wedrifid
comment by wedrifid · 2010-03-31T03:36:45.232Z · LW(p) · GW(p)

Hold on, we still haven't settled on 'paperclips' over 'miniature smiley faces' and 'orgasmium'. The jury is still out. ;)

comment by wedrifid · 2010-03-31T04:56:58.294Z · LW(p) · GW(p)

Basically, the "simple" condition was designed to elicit answers more along the lines of "paperclips!" or "cheesecake!", rather than "how can I game the system so that I can have interesting stuff in the universe again after the tiling happens?" You're not playing fair if you do that.

And that is a good thing. Long live the munchkins of the universe!

comment by RobinZ · 2010-03-31T01:55:03.707Z · LW(p) · GW(p)

I think orgasmium is significantly more complex than cheesecake. Possibly complex enough that I could make an interesting universe if I were permitted that much complexity, but I don't know enough about consciousness to say.

Replies from: Peter_de_Blanc
comment by Peter_de_Blanc · 2010-03-31T04:55:11.696Z · LW(p) · GW(p)

Cheesecake is made of eukaryotic life, so it's pretty darn complex.

Replies from: wedrifid, RobinZ, wnoise
comment by wedrifid · 2010-03-31T05:07:35.077Z · LW(p) · GW(p)

Hmm... a universe full of cheesecake will have enough hydrogen around to form stars once the cheesecakes attract each other, with further cheesecake forming into planets that are a perfect breeding ground for life, already seeded with DNA and RNA!

comment by RobinZ · 2010-03-31T11:00:02.364Z · LW(p) · GW(p)

Didn't think of that. Okay, orgasmium is significantly more complex than paperclips.

comment by wnoise · 2010-03-31T05:29:39.345Z · LW(p) · GW(p)

What? It's made of products of eukaryotic life. Usually the eukaryotes are dead. Though plenty of microorganisms immediately start colonizing.

Unless you mean the other kind of cheesecake.

Replies from: Peter_de_Blanc
comment by Peter_de_Blanc · 2010-03-31T19:42:41.642Z · LW(p) · GW(p)

I suppose that the majority of the cheesecake does not consist of eukaryotic cells, but there are definitely plenty of them in there. I've never looked at milk under a microscope but I would expect it to contain cells from the cow. The lemon zest contains lemon cells. The graham cracker crust contains wheat. Dead cells would not be much simpler than living cells.

comment by jimrandomh · 2010-03-31T03:35:28.024Z · LW(p) · GW(p)

If you had to tile the universe with something - something simple - what would you tile it with?

Copies of my genome. If I can't do anything to affect the utility function I really care about, then I might as well optimize the one evolution tried to make me care about instead.

(Note that I interpret 'simple' as excluding copies of my mind, simulations of interesting universes, and messages intended for other universes that simulate this one to read, any of which would be preferable to anything simple.)

comment by JGWeissman · 2010-03-31T01:51:04.674Z · LW(p) · GW(p)

I have no preferences within the class of states of the universe that do not, and cannot evolve to, contain consciousness.

But if, for example, I were put in this situation by a cheesecake maximizer, I would choose something other than cheesecake.

Replies from: Alicorn, byrnema
comment by Alicorn · 2010-03-31T04:17:40.124Z · LW(p) · GW(p)

Interesting. Just to be contrary?

Replies from: JGWeissman, wedrifid
comment by JGWeissman · 2010-03-31T04:56:54.290Z · LW(p) · GW(p)

Because, as near as I can calculate, UDT advises me to. Like what Wedrifid said.

And like Eliezer said here:

Or the Countess just decides not to pay, unconditional on anything the Baron does. Also, if the Baron ends up in an infinite loop or failing to resolve the way the Baron wants to, that is not really the Countess's problem.

And here:

As I always press the "Reset" button in situations like this, I will never find myself in such a situation.

EDIT: Just to be clear, the idea is not that I quickly shut off the AI before it can torture simulated Eliezers; it could have already done so in the past, as Wei Dai points out below. Rather, because in this situation I immediately perform an action detrimental to the AI (switching it off), any AI that knows me well enough to simulate me knows that there's no point in making or carrying out such a threat.

I am assuming that an agent powerful enough to put me in this situation can predict that I would behave this way.

comment by wedrifid · 2010-03-31T04:32:13.181Z · LW(p) · GW(p)

It also potentially serves decision-theoretic purposes. Much like a Duchess choosing not to pay off her blackmailer. If it is assumed that a cheesecake maximiser has a reason to force you into such a position (rather than doing it himself), then it is not unreasonable to expect that the universe may be better off if Cheesy had to take his second option.

comment by byrnema · 2010-04-01T12:47:25.930Z · LW(p) · GW(p)

I can't recall: do your views on consciousness have a dualist component? If consciousness is in some way transcendental (that is, as a whole somehow independent or outside of the material parts), then I understand valuing it as, for example, something that has interesting or unique potential.

If you are not dualistic about consciousness, could you describe why you value it more than cheesecake?

Replies from: JGWeissman
comment by JGWeissman · 2010-04-02T01:17:42.967Z · LW(p) · GW(p)

No, I am not a dualist.

If you are not dualistic about consciousness, could you describe why you value it more than cheesecake?

To be precise, I value positive conscious experience more than cheesecake, and negative conscious experience less than cheesecake.

I assign value to things according to how they are experienced, and consciousness is required for this experience. This has to do with the abstract properties of conscious experience, and not with how it is implemented, whether by mathematical structure of physical arrangements, or by ontologically basic consciousness.

comment by Matt_Simpson · 2010-03-31T01:35:21.204Z · LW(p) · GW(p)

me

(I'm assuming I'll be broken down as part of the tiling process, so this preserves me)

Replies from: wedrifid
comment by wedrifid · 2010-03-31T04:53:01.745Z · LW(p) · GW(p)

Damn. If only I was simple, I could preserve myself that way too! ;)

comment by wedrifid · 2010-03-31T04:24:18.854Z · LW(p) · GW(p)

Witty comics. (eg)

comment by Document · 2010-04-05T17:38:26.884Z · LW(p) · GW(p)

The words "LET US OUT" in as many languages as possible.

comment by Rain · 2010-04-05T15:57:55.599Z · LW(p) · GW(p)

Isn't the universe already tiled with something simple in the form of fundamental particles?

Replies from: JGWeissman
comment by JGWeissman · 2010-04-05T16:37:02.608Z · LW(p) · GW(p)

In a tiled universe, the universe is partitioned into a grid of tiles, and the same pattern is repeated exactly in every tile, so that if you know what one tile looks like, you know what the entire universe looks like.

comment by Jack · 2010-03-31T04:35:56.906Z · LW(p) · GW(p)

A sculpture of stars, nebulae and black holes whose beauty will never be admired by anyone.

ETA: If this has too little entropy to count as simple -- well, whatever artwork I can get away with, I'll take.

comment by Kevin · 2010-03-31T01:31:37.651Z · LW(p) · GW(p)

Computronium

comment by NancyLebovitz · 2010-03-20T02:26:41.997Z · LW(p) · GW(p)

Here's a way to short-circuit a particular sort of head-banging argument.

Statements may seem simple, but they actually contain a bunch of presuppositions. One way an argument can go wrong is A says something, B disagrees, A is mistaken about exactly what B is disagreeing with, and neither of them can figure out why the other is so pig-headed about something obvious.

I suggest that if there are several rounds of A and B saying the same things at each other, it's time for at least one of them to pull back and work on pinning down exactly what they're disagreeing about.

comment by [deleted] · 2010-03-19T13:44:56.936Z · LW(p) · GW(p)

Survey question:

If someone asks you how to spell a certain word, does the word appear in your head as you're spelling it out for them, or does it seem to come out of your mouth automatically?

If it comes out automatically, would you describe yourself as being adept at language (always finding the right word to describe something, articulating your thoughts easily, etc.) or is it something you struggle with?

I tend to have trouble with words - it can take me a long time (minutes) to recall the proper word to describe something, and when speaking I frequently have to start a sentence 3 or 4 times to get it to come out right. (I also struggled for a while to replace the word 'automatic' in the above paragraphs with a more accurate description. I was unsuccessful.) Words also don't appear in my head when I'm spelling them aloud, which suggests to me that I might be missing some pathways that connect my language centers to my conscious functions.

Replies from: wedrifid, prase, jimrandomh, MendelSchmiedekamp, Kevin, FAWS, Rain, mattnewport, Morendil, NancyLebovitz, FAWS, Hook
comment by wedrifid · 2010-03-20T00:39:44.765Z · LW(p) · GW(p)

If someone asks you how to spell a certain word, does the word appear in your head as you're spelling it out for them, or does it seem to come out of your mouth automatically?

I never see words. I feel them.

If it comes out automatically, would you describe yourself as being adept at language (always finding the right word to describe something, articulating your thoughts easily, etc.) or is it something you struggle with?

Great with syntax. Access to specific words tends to degrade as I get fatigued or stressed. That is, I can 'feel' the word there and know the nuances of the meaning it represents but cannot bring the actual sounds or letters to mind.

comment by prase · 2010-03-19T14:28:40.068Z · LW(p) · GW(p)

I often have trouble finding the proper words, both in English and in my native language, but I have no problems with spelling - I can say it automatically. This may be because I learned English by reading, and therefore the words are stored in my memory in their written form, but generally I suspect, from personal experience, that the ability to recall spelling and the ability to find the proper word are unrelated.

comment by jimrandomh · 2010-03-19T21:23:36.762Z · LW(p) · GW(p)

I can visualize sentences, paragraphs, or formatted code, but can't zoom in as far as individual words; when I try I get a verbal representation instead. I usually can't read over misspelled words (or wrong words, like its vs. it's) without stopping. When this happens, it feels like hearing someone mispronounce a word.

When spelling a word aloud, it comes out pretty much automatically (verbal memory) with no perceptible intermediate steps. I would describe myself as adept with language.

comment by MendelSchmiedekamp · 2010-03-19T20:08:36.434Z · LW(p) · GW(p)

In retrospect, spelling words out loud, something I do tend to do with a moderate frequency, is something I've gotten much better at over the past ten years. I suspect that I've hijacked my typing skill to the task, as I tend to error correct my verbal spelling in exactly the same way. I devote little or no conscious thought or sense mode to the spelling process, except in terms of feedback.

As for my language skills, they are at least adequate. However, I have devoted special attention to improving them, so I can't say that I don't share some bias away from being especially capable.

comment by Kevin · 2010-03-19T18:09:20.824Z · LW(p) · GW(p)

I'm adept at language and I never visualize letters or words in my head. I think in pronounced/internally spoken words, so when I spell something aloud I think the letters to myself as I am saying them.

comment by FAWS · 2010-03-19T18:06:14.594Z · LW(p) · GW(p)

This is getting interesting:

Sensory type of access to spelling information by poster:
hegemonicon: verbal (?) ( visual only with great difficulty)
Hook: mechanical
FAWS: mechanical, visual
prase: verbal (???)
NancyLebovitz: visual
Morendil: visual
mattnewport: visual, mechanical (?)
Rain: mechanical (???)
Kevin: verbal (???) (never visual)

Is there anyone who doesn't fall into at least one of those three categories?

comment by Rain · 2010-03-19T17:47:41.188Z · LW(p) · GW(p)

When I spell out a word, I don't visualize anything. Using words in conversation, typing, or writing is also innate - they flow through without touching my consciousness. This is another aspect of my maxim, "my subconscious is way smarter than I am." It responds quickly and accurately, at any rate.

I consider myself to be adept at the English language, and more objective evidence bears that out. I scored 36/36 on the English portion of the ACT, managed to accumulate enough extra credit through correcting my professor in my college level writing class that I didn't need to take the final, and many people have told me that I write very well in many different contexts (collaborative fiction, business reports, online forums, etc.).

I would go so far as to say that if I make an effort to improve on my communication by the use of conscious thought, I do worse than when I "feel it out."

comment by mattnewport · 2010-03-19T17:10:48.066Z · LW(p) · GW(p)

I have pretty good language skills and I think I am above average both at spelling in my own writing and at spotting spelling mistakes when reading, but I do not find it particularly easy to spell a difficult word out loud. It is a relatively effortful process, unlike reading or writing, which are largely automatic and effortless. With longer words I feel like short-term memory limitations make it difficult to spell the word out; for a difficult word I try to visualize the text and 'read off' the spelling, but that can be taxing for longer words. I may end up having to write it down in order to be sure the spelling is correct and to be able to read it out.

Growing up in England I was largely unaware of the concept of a spelling bee so this is not a skill I ever practiced to any great extent.

comment by Morendil · 2010-03-19T16:42:43.551Z · LW(p) · GW(p)

My experience of spelling words is quite visual (in contrast to my normal thinking style, which suggests that if "thinking styles" exist they are not monolithic): I literally have a visual representation of the word floating in my head. (I can tell it really is visual because I can give details, such as what kind of font - serif - or what color - black - the words appear in, as they would in a book.)

I'd also describe my spelling skill as "automatic", i.e. I can usually instantly spot whether a word is "right" or "not right". I absolutely cannot stand misspellings (including mine - I have the hardest time when writing fast because I must instantly go back and correct any typos rather than let them be), and they tend to leap off the page; most people appear to have an ability to ignore typos that I lack. (For instance, I often get a kick out of spotting typos on the freakin' front page of national magazines, and when I point them out I mostly get blank stares or "Oh, you're right" - people just don't notice!)

I'd self-describe as adept at language.

(ETA: upvoted for a luminous question.)

Replies from: None
comment by [deleted] · 2010-03-19T17:25:52.913Z · LW(p) · GW(p)

After a bit of self-experimentation, I've concluded that I almost (but not quite) completely lack any visual experience accompanying anything verbal. Even when I self-prompt, telling myself to spell a word, nothing really appears by default (though I can make an image of the word appear with a bit of focus, it's very difficult to 'read' off of it).

I wonder how typical (or atypical) this is.

Replies from: h-H, NancyLebovitz
comment by h-H · 2010-03-20T20:32:59.365Z · LW(p) · GW(p)

quite typical.

comment by NancyLebovitz · 2010-03-19T17:43:18.153Z · LW(p) · GW(p)

Do you get any visual images when you read?

Replies from: None
comment by [deleted] · 2010-03-19T18:09:54.014Z · LW(p) · GW(p)

Not generally, no, for either fiction or non-fiction. This may be why I've never been able to relate to the sense of getting 'lost' inside a book - they've never been as evocative for me as they seem to be for others.

comment by NancyLebovitz · 2010-03-19T14:49:05.614Z · LW(p) · GW(p)

If I'm trying to spell a word out loud and it's difficult for me, it appears in my head, but not necessarily as a whole, and I'll be doing checking as I go.

This is interesting - I'd swear the words are appearing more clearly in my mind now that you've brought this up.

I'm pretty adept at straightforward language. I can usually find the words I want, and if I'm struggling to find a word, sometimes it's a memory glitch, but it's more likely to be pinning down a meaning.

Sometimes I can't find a word, and I blame the language. There doesn't seem to be an English word which includes planning as part of an action.

I do creative work with words on a small scale-- coming up with button slogans, and sometimes I'm very fast. (Recent examples: "Life is an adventure-- bring a webcam", "God is watching. It's a good thing He's easily amused.")

"Effortlessly" might be a better word than "automatically", but it took me a couple of minutes to think of it.

Writers typically need to revise. You might be able to become more facile, but it's also possible that you're imagining that other writers have it easier than you do.

Also, you may be slowing yourself down more than necessary by trying to get it right at the beginning. Many people find they're better off letting the very first draft just flow, and then do the editing.

I use a mixed strategy-- I edit somewhat on the first pass, aiming for satisfactory, and then I go over it a time or two for improvements.

I suspect that really fast writers (at least of the kind I see online) are doing most of their writing by presenting things they've already thought out.

Replies from: None
comment by [deleted] · 2010-03-19T16:13:05.718Z · LW(p) · GW(p)

Yeah, it's difficult to separate out what's related to abstract thought (as opposed to language), what's a typical language difficulty, and what's a quirk of my particular brain.

It is somewhat telling that your (and everyone else's) response doesn't fit neatly into my 'appears in your head or not' dichotomy.

comment by FAWS · 2010-03-19T14:08:04.622Z · LW(p) · GW(p)

If someone asks you how to spell a certain word, does the word appear in your head as you're spelling it out for them, or does it seem to come out of your mouth automatically?

Neither, really. For simple frequent words I remember each letter individually, but otherwise I either have to write it down using the mechanical memory to retrieve the spelling or I have to look at the complete word to test whether it looks right. I can test a spelling by imagining how it looks, but that's not as reliable as seeing it with my eyes, and of course writing it down and then looking at it is always best (short of looking it up of course).

comment by Hook · 2010-03-19T13:54:50.590Z · LW(p) · GW(p)

Spelling a word out loud is an infrequent task for me. I have to simulate writing or typing it and then dictate the result of that simulation. I would characterize myself as adept at language. Choosing the appropriate words comes easily to me, and I don't think this skill is related to spelling bee performance.

Replies from: None
comment by [deleted] · 2010-03-19T14:01:09.066Z · LW(p) · GW(p)

My focus isn't so much on the spelling per se, but how much conscious thought 'comes along for the ride' while it's being done.

comment by Kevin · 2010-03-28T05:34:10.626Z · LW(p) · GW(p)

If any aspiring rationalists would like to try and talk a Stage IV cancer patient into cryonics... good luck and godspeed. http://www.reddit.com/r/IAmA/comments/bj3l9/i_was_diagnosed_with_stage_iv_cancer_and_am/c0n1kin?context=3

Replies from: Kevin
comment by Kevin · 2010-03-29T02:16:56.839Z · LW(p) · GW(p)

I tried, it didn't work. Other people can still try! I didn't want to give the hardest possible sell because survival rates for Stage IV breast cancer are actually really good.

comment by Strange7 · 2010-03-25T21:48:18.542Z · LW(p) · GW(p)

Nature doesn't grade on a curve, but neither does it punish plagiarism. Is there some point at which someone who's excelled beyond their community would gain more by setting aside the direct pursuit of personal excellence in favor of spreading what they've already learned to one or more apprentices, then resuming the quest from a firmer foundation?

Replies from: Morendil
comment by Morendil · 2010-03-26T07:08:38.462Z · LW(p) · GW(p)

Teaching something to others is often a way of consolidating the knowledge, and I would argue that the pursuit of personal excellence usually requires sharing the knowledge at some point, and possibly on an ongoing basis.

See e.g. Lave and Wenger's books on communities of practice and "learning as legitimate peripheral participation".

comment by [deleted] · 2010-03-24T04:56:31.531Z · LW(p) · GW(p)

I really should probably think this out more clearly, but I've had an idea for a few days now that keeps leaving and coming back. So I'm going to throw the idea out here, and if it's too incoherent, I hope either someone gets where I'm going or I come back and see my mistake. At worst, it gets down-voted and I'm risking karma unless I delete it.

Okay, so the other day I was talking with a Christian friend who "agrees with micro-evolution but not macro-evolution." I'm assuming other people have heard this idea before. I started thinking about that comment, about the overarching general view of evolution, and about the main differences between macro- and micro-evolution. How could one accept the idea that genes change slowly over time, thus creating slightly different organisms than their predecessors, but that different species couldn't develop because of this? My thinking led me to this theory: could it be that someone making this comment is committing the Mind Projection Fallacy? Rather than treating species as a category we use to sort organisms - words like "fish" and "bear" that save us from labeling everything by its exact genes - could they be assuming that species are part of the world itself, the same way genes are, and thus couldn't change?

If anyone thinks this is a possible idea, would you have an idea how to point this out to the commenter? If you don't think this is a good theory, would you explain why?

Replies from: RobinZ, rwallace, Nisan
comment by RobinZ · 2010-03-24T10:29:54.576Z · LW(p) · GW(p)

I'm sure it's a factor, but I suspect "it contradicts my religion" is the larger.

Assuming that's not it: how often do mutations happen, how much time has passed, and how many mutations apart are different species? The first times the second should dwarf the third, at which point it's like that change-one-letter game. Yes, every step must be a valid word, but the 'limit' on how many tries is so wide that it's easy.
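
A back-of-the-envelope version of that arithmetic, as a Python sketch - every number below is my own rough, illustrative assumption, not a measurement:

    # Rough illustration: how much mutational "raw material" a lineage gets to try,
    # versus how many fixed differences separate two related species.
    # All figures are assumed, order-of-magnitude placeholders.

    mutations_per_birth = 50          # assumed new mutations per individual per generation
    generations = 300_000             # assumed generations since the split
    population_size = 10_000          # assumed effective population size

    mutations_tried = mutations_per_birth * generations * population_size
    differences_needed = 20_000_000   # assumed single-letter differences per lineage

    print(f"mutations tried:    {mutations_tried:.1e}")     # prints 1.5e+11
    print(f"differences needed: {differences_needed:.1e}")  # prints 2.0e+07
    print(f"tries per needed change: {mutations_tried // differences_needed}")

With those made-up numbers, the number of tries exceeds the number of changes needed by a factor of thousands, which is the sense in which the first two quantities dwarf the third.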

comment by rwallace · 2010-03-31T11:20:02.961Z · LW(p) · GW(p)

Sounds likely to me. I don't know exactly what wording I'd use, but some food for thought: when Alfred Russel Wallace independently arrived at the theory of evolution by natural selection, his paper on the topic was titled On the Tendency of Varieties to Depart Indefinitely from the Original Type. You can find the full text at http://www.human-nature.com/darwin/archive/arwallace.html - it's short and clear, and from my perspective offers a good approach to understanding why existing species are not ontologically fundamental.

comment by Nisan · 2010-03-29T01:34:08.486Z · LW(p) · GW(p)

That's a good idea; it's tempting to believe that a category is less fuzzy in reality than it actually is. I would point out recent examples of speciation, including the subtle development of the apple maggot and fruit fly speciation in laboratory experiments. If you want to further mess with their concept of species, tell them about ring species (which are one catastrophe away from splitting into two species).

comment by Cyan · 2010-03-20T01:42:13.098Z · LW(p) · GW(p)

Daniel Dennett and Linda LaScola have written a paper about five non-believing members of the Christian clergy. Teaser quote from one of the participants:

I think my way of being a Christian has many things in common with atheists as [Sam] Harris sees them. I am not willing to abandon the symbol ‘God’ in my understanding of the human and the universe. But my definition of God is very different from mainline Christian traditions yet it is within them. Just at the far left end of the bell shaped curve.

comment by pjeby · 2010-03-20T00:22:38.923Z · LW(p) · GW(p)

Don't know if this will help with cryonics or not, but it's interesting:

Induced suspended animation and reanimation in mammals (TED Talk by Mark Roth)

[Edited to fix broken link]

comment by SilasBarta · 2010-03-31T22:03:07.746Z · LW(p) · GW(p)

Monica Anderson: Anyone familiar with her work? She apparently is involved with AI in the SF Bay area, and is among the dime-a-dozen who have a Totally Different approach to AI that will work this time. She made this recent slashdot post (as "technofix") that linked a paper (PDF WARNING) that explains her ideas and also linked her introductory site and blog.

It all looks pretty flaky to me at this point, but I figure some of you must have run into her stuff before, and I was hoping you could share.

Replies from: Bo102010
comment by Bo102010 · 2010-03-31T23:42:21.477Z · LW(p) · GW(p)

From the site:

I define a "Bizarre Domain" as a problem domain that has all of these four properties: It is Chaotic, it requires a Holistic Stance, it contains Ambiguity, and it exhibits Emergent Properties. I examine sixteen kinds of problems that fall into these four categories.

Man, you just know it's going to be a fun read...

Replies from: khafra
comment by khafra · 2011-09-27T19:20:42.072Z · LW(p) · GW(p)

I didn't notice this thread, but ran across Anderson on a facebook group and asked about her site in another thread. JoshuaZ wrote a good analysis.

comment by wedrifid · 2010-03-31T04:50:24.328Z · LW(p) · GW(p)

I know it's no AI of the AGI kind but what do folks think of this? It certainly beats the pants off any of the stuff I was doing my AI research on...

Replies from: Mass_Driver, Richard_Kennaway
comment by Mass_Driver · 2010-03-31T15:11:44.637Z · LW(p) · GW(p)

Looks like a step in the right direction -- kind of obvious, but you do need both probabilistic reasoning and rules to get reality-interpretation.

comment by Richard_Kennaway · 2010-03-31T09:12:02.358Z · LW(p) · GW(p)

Looks like the usual empty promises to me.

comment by CronoDAS · 2010-03-23T23:54:55.773Z · LW(p) · GW(p)

What's the best way to respond to someone who insists on advancing an argument that appears to be completely insane? For example, someone like David Icke who insists the world is being run by evil lizard people? Or your friend the professor who thinks his latest "breakthrough" is going to make him the next Einstein but, when you ask him what it is, it turns out to be nothing but gibberish, meaningless equations, and surface analogies? (My father, the professor, has a friend, also a professor, who's quickly becoming a crank on the order of the TimeCubeGuy.) Or, say, this particular bit of incoherent political ranting?

Replies from: wnoise, CannibalSmith
comment by wnoise · 2010-03-24T01:33:51.914Z · LW(p) · GW(p)

If you have no emotional or other investment, the best thing to do is not engage.

Replies from: CronoDAS
comment by CronoDAS · 2010-03-24T04:05:30.588Z · LW(p) · GW(p)

http://xkcd.com/154/

Replies from: wnoise
comment by wnoise · 2010-03-24T04:07:42.444Z · LW(p) · GW(p)

Well, yes, the "if" is critical.

comment by CannibalSmith · 2010-03-24T14:22:45.246Z · LW(p) · GW(p)

When rational argument fails, fall back to dark arts. If that fails, fall back to damage control (discredit him in front of others). All that assuming it's worth the trouble.

comment by JamesAndrix · 2010-03-19T14:28:39.184Z · LW(p) · GW(p)

I have a line of thinking that makes me less worried about unfriendly AI. The smarter an AI gets, the more it is able to follow its utility function. Where the utility function is simple or the AI is stupid, we have useful things like game opponents.

But as we give smarter AIs interesting 'real world' problems, the difference between what we asked for and what we want shows up more explicitly. Developers usually interpret this as the AI being stupid or broken, and patch over either the utility function or the reasoning it led to. These patches don't lead to extra intelligence because that would just make the AI appear more broken.

If it is the case that there is a big gap between AIs that are smart enough for their non-human utility functions to be annoyingly evident and AIs that are smart enough to improve themselves, then non-friendly AGI research will hit an apparent wall where all avenues are unproductive. (If this line of thought is correct, it hit that wall a long time ago.)

This is not a guarantee. There is always a chance someone will hit on a self-improving system.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-03-19T17:02:40.006Z · LW(p) · GW(p)

Where the utility function is simple or the AI is stupid, we have useful things like game opponents.

Rather, where the utility function is simple AND the program is stupid. Paperclippers are not useful things.

If it is the case that there is a big gap between AI's that are smart enough for their non-human utility functions to be annoyingly evident and AI's that are smart enough to improve themselves, then non-friendly AGI research will hit an apparent wall where all avenues are unproductive.

Reinforcement-based utility definition plus difficult games with well-defined winning conditions seems to constitute a counterexample to this principle (a way of doing AI that won't hit the wall you described). This could function even on top of supplemental ad-hoc utility function building, as in chess, where a partially hand-crafted utility function over specific board positions is an important component of chess-playing programs -- you'd just need to push the effort to a "meta-AI" that is only interested in the real winning conditions.
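
To make the "partially hand-crafted utility function over specific board positions" concrete, here is a toy Python sketch of what such a component might look like - the board representation and weights below are invented purely for illustration, not taken from any real engine:

    # Toy, hand-crafted position evaluator in the spirit of classic chess engines.
    # position: dict mapping square name -> piece letter; uppercase = White, lowercase = Black.

    PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

    def evaluate(position):
        """Return a score where positive favors White."""
        score = 0.0
        for square, piece in position.items():
            value = PIECE_VALUES[piece.upper()]
            # Tiny hand-tuned bonus for central squares, standing in for the
            # piece-square tables real engines use.
            if square[0] in "de" and square[1] in "45":
                value += 0.25
            score += value if piece.isupper() else -value
        return score

    print(evaluate({"e4": "P", "d5": "p", "g1": "N", "b8": "n"}))  # 0.0 - material and center balance out

A meta-level learner scored only on actual wins could, in principle, build and revise this sort of hand-tuned guess itself.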

Replies from: JamesAndrix
comment by JamesAndrix · 2010-03-19T19:41:55.184Z · LW(p) · GW(p)

Rather, where the utility function is simple AND the program is stupid. Paperclippers are not useful things.

I was thinking of current top chess programs as smart (well above average humans), with simple utility functions.

Reinforcement-based utility definition plus difficult games with well-defined winning conditions seems to constitute a counterexample to this principle (a way of doing AI that won't hit the wall you described).

This is a good example, but it might not completely explain it away.

Can we, by hand or by algorithm, construct a utility function that does what we want, even when we know exactly what we want?

I think you could still have a situation in which a smarter agent does worse because its learned utility function does not match the winning conditions (its learned utility function would constitute a created subgoal of "maximize reward").

Learning about the world and constructing subgoals would probably be part of any near-human AI. I don't think we have a way to construct reliable subgoals, even with a rules-defined supergoal and perfect knowledge of the world. (such a process would be a huge boon for FAI)

Likewise, I don't think we can be certain that the utility functions we create by hand would reliably lead a high-intelligence AI to seek the goal we want, even for well-defined tasks.

A smarter agent might have the advantage of learning the winning conditions faster, but if it is comparatively better at implementing a flawed utility function than it is at fixing its utility function, then it could be outpaced by stupider versions, and you're working more in an evolutionary design space.

So I think it would hit the same kind of wall, at least in some games.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-03-19T20:29:47.454Z · LW(p) · GW(p)

I meant the AI to be limited to the formal game universe, which should be easily feasible for non-superintelligent AIs. In this case, smarter agents always have an advantage, since maximization of reward is the same as the intended goal.

A smarter agent might have the advantage of learning the winning conditions faster, but if it is comparatively better at implementing a flawed utility function than it is at fixing it's utility function, then could be outpaced by stupider versions, and you're working more in an evolutionary design space.

Thinking deeply until you get eaten by a sabertooth is not smart.

Replies from: JamesAndrix
comment by JamesAndrix · 2010-03-19T23:43:13.557Z · LW(p) · GW(p)

Answer is here, thinking out loud is below

If you give the AI a perfect utility function for a game, it still has to break down subgoals and seek those. You don't have a good general theory for making sure your generated subgoals actually serve your supergoals, but you've tweaked things enough that it's actually very good at achieving the 'midlevel' things.

When you give it something more complex, it improperly breaks down the goal into faulty subgoals that are ineffective or contradictory, and then effectively carries out each of them. This yields a mess.

At this point you could improve some of the low level goal-achievement and do much better at a range of low level tasks, but this wouldn't buy you much in the complex tasks, and might just send you further off track.

If you understand that the complex subgoals are faulty, you might be able to re-patch it, but this might not help you solve different problems of similar complexity, let alone more complex problems.

What led me to this answer:

Thinking deeply until you get eaten by a sabertooth is not smart.

There may not be a trade-off at play here. For example: At each turn you give the AI indefinite time and memory to learn all it can from the information it has so far, and to plan. (limited by your patience and budget, but let's handwave that computation resources are cheap, and every turn the AI comes in well below its resource limit.)

You have a fairly good move optimizer that can achieve a wide range of in game goals, and a reward modeler that tries to learn what it is supposed to do and updates the utility function.

I meant the AI to be limited to the formal game universe, which should be easily feasible for non-superintelligent AIs. In this case, smarter agents always have an advantage, maximization of reward is the same as the intended goal.

But how do they know how to maximize reward? I was assuming they have to learn the reward criteria. If they have a flawed concept of those criteria, they will seek non-reward.

If the utility function is one and the same as winning, then the (see Top)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-03-20T10:10:55.746Z · LW(p) · GW(p)

End-of-conversation status:

I don't see a clear argument, and failing that, I can't take confidence in a clear lawful conclusion (AGI hits a wall). I don't think this line of inquiry is worthwhile.

comment by ata · 2010-03-31T01:32:48.975Z · LW(p) · GW(p)

I'm looking for a quote I saw on LW a while ago, about people who deny the existence of external reality. I think it was from Eliezer, and it was something like "You say nothing exists? Fine. I still want to know how the nothing works."

Anyone remember where that's from?

Replies from: SilasBarta
comment by SilasBarta · 2010-03-31T01:47:04.652Z · LW(p) · GW(p)

Coincidentally, I was reading the quantum non-realism article when writing my recent understanding your understanding article, and that's where it's from -- though he mentions it actually happened in a previous discussion and linked to it.

The context in the LW version is:

My attitude toward questions of existence and meaning was nicely illustrated in a discussion of the current state of evidence for whether the universe is spatially finite or spatially infinite, in which James D. Miller chided Robin Hanson:

"Robin, you are suffering from overconfidence bias in assuming that the universe exists. Surely there is some chance that the universe is of size zero."

To which I replied:

"James, if the universe doesn't exist, it would still be nice to know whether it's an infinite or a finite universe that doesn't exist."

Ha! You think pulling that old "universe doesn't exist" trick will stop me? It won't even slow me down!

It's not that I'm ruling out the possibility that the universe doesn't exist. It's just that, even if nothing exists, I still want to understand the nothing as best I can.

(I was actually inspired by that to say something similar in response to an anti-reductionist's sophistry on another site, but that discussion's gone now.)

Replies from: ata
comment by ata · 2010-03-31T01:53:01.888Z · LW(p) · GW(p)

Ah, thanks.

comment by Jonii · 2010-03-29T00:14:40.860Z · LW(p) · GW(p)

Hello. Do people here generally take the anthropic principle as strong evidence against a positive singularity? If we expect that in a good future there would be many happy people - say, most available matter used to make sure of it - then we'd get a really large number of happy people. However, we are not any of those happy people. We're living in pre-singularity times, and this seems to be strong evidence that we're going to face a negative singularity.

Replies from: Kevin
comment by Kevin · 2010-03-29T02:03:41.926Z · LW(p) · GW(p)

The simulation argument muddles the issue from my perspective. There's more to weigh than just the anthropic principle.

Replies from: Jonii
comment by Jonii · 2010-03-29T03:11:26.975Z · LW(p) · GW(p)

How?

comment by Nick_Tarleton · 2010-03-25T22:23:18.811Z · LW(p) · GW(p)

This is pretty pathetic, at least if honestly reported. (A heavily reported study's claim to show harmful effects from high-fructose corn syrup in rats is based on ambiguous, irrelevant, or statistically insignificant experimental results.)

Replies from: RobinZ
comment by RobinZ · 2010-03-26T02:20:18.578Z · LW(p) · GW(p)

I'm reading the paper now, and I see in the "Methods" section:

We selected these schedules to allow comparison of intermittent and continuous access, as our previous publications show limited (12 h) access to sucrose precipitates binge-eating behavior (Avena et al., 2006).

which the author of the blog post apparently does not acknowledge. I'll grant that the study may be overblown, but it is not as obviously flawed as I believe the blogger suggested.

comment by murat · 2010-03-20T22:21:10.186Z · LW(p) · GW(p)

How do Bayesians look at formal proofs in formal specifications? Do they believe "100%" in them?

Replies from: ata
comment by ata · 2010-03-20T22:26:14.863Z · LW(p) · GW(p)

You can believe that it leads to a 100%-always-true-in-every-possible-universe conclusion, but the strength of your belief should not be 100% itself. The difference is crucial. Good posts on this subject are How To Convince Me That 2 + 2 = 3 and Infinite Certainty. (The followup, 0 And 1 Are Not Probabilities, is a worthwhile explanation of the mathematical reasons that this is the case.)

Replies from: murat
comment by murat · 2010-03-21T11:05:18.403Z · LW(p) · GW(p)

Thank you for the links. It makes sense now.

comment by dclayh · 2010-03-20T19:35:45.503Z · LW(p) · GW(p)

Cryonics in popular culture:

comment by Vive-ut-Vivas · 2010-03-20T01:31:27.317Z · LW(p) · GW(p)

A popular-level critique of frequentism that may be of interest.

Replies from: cousin_it
comment by cousin_it · 2010-03-22T14:17:50.770Z · LW(p) · GW(p)

I still fail to see how Bayesian methods eliminate fluke results or publication bias.

comment by NancyLebovitz · 2010-03-19T09:14:39.246Z · LW(p) · GW(p)

Is independent AI research likely to continue to be legal?

At this point, very few people take the risks seriously, but that may not continue forever.

This doesn't mean that it would be a good idea for the government to decide who may do AI research and with what precautions, just that it's a possibility.

If there's a plausible risk, is there anything specific SIAI and/or LessWrongers should be doing now, or is building general capacity by working to increase ability to argue and to live well (both the anti-akrasia work and luminosity) the best path?

Replies from: khafra, Kevin
comment by khafra · 2010-03-19T13:34:26.184Z · LW(p) · GW(p)

Outlawing AI research was successful in Dune, but unsuccessful in Mass Effect. But I've never seen AI research fictionally outlawed until it's done actual harm, and I see no reason to expect a different outcome in reality. It seems a very unlikely candidate for the type of moral panic that tends to get unusual things outlawed.

Replies from: SilasBarta, NancyLebovitz
comment by SilasBarta · 2010-03-19T14:10:56.643Z · LW(p) · GW(p)

Fictional evidence should be avoided. Also, this subject seems very prime for a moral panic, i.e., "these guys are making Terminator".

Replies from: h-H
comment by h-H · 2010-03-20T20:41:38.094Z · LW(p) · GW(p)

How would it be stopped if it were illegal? Unless information tech suddenly goes away, it's impossible.

Replies from: khafra
comment by khafra · 2010-03-21T22:01:44.532Z · LW(p) · GW(p)

NancyLebovitz wasn't suggesting that the risks of UFAI would be averted by legislation; rather, that such legislation would change the research landscape, and make it harder for SIAI to continue to do what it does--preparation would be warranted if such legislation were likely. I don't think it's likely enough to be worth dedicating thought and action to, especially thought and action which would otherwise go toward SIAI's primary goals.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-03-21T23:12:26.828Z · LW(p) · GW(p)

Bingo. That's exactly what I was concerned about.

You're probably right that there's no practical thing to be done now. I'm sure you'd know very quickly if restrictions on independent AI research were being considered.

The more I think about it, the more I think a specialized self-optimizing AI (or several such, competing with each other) could do real damage to the financial markets, but I don't know if there are precautions for that one.

comment by NancyLebovitz · 2010-03-19T14:30:49.855Z · LW(p) · GW(p)

I've been thinking about that, and I believe you're right that laws typically don't get passed against hypothetical harms, and also that AI research isn't the kind of thing that's enough fun to think about to set off a moral panic.

However, I'm not sure whether real harm that society can recover from is a possibility.

I'm basing the possibility on two premises - that a lot of people thinking about AI aren't as concerned about the risks as SIAI is, and that computer programs frequently get to the point where they work somewhat.

Suppose that a self-improving AI breaks the financial markets-- there might just be efforts to protect the markets, or AI might be an issue in itself.

Replies from: cousin_it
comment by cousin_it · 2010-03-22T14:35:18.494Z · LW(p) · GW(p)

laws typically don't get passed against hypothetical harms

Witchcraft? Labeling of GM food?

Replies from: NancyLebovitz, RobinZ
comment by NancyLebovitz · 2010-03-22T15:04:01.874Z · LW(p) · GW(p)

Those are legitimate examples. I think overreaction to rare events (like the difficulties added to travel and the damage to the rights of suspects after 9/11) is more common, but I can't prove it.

comment by RobinZ · 2010-03-22T14:49:32.582Z · LW(p) · GW(p)

Some kinds of GM food cause different allergic reactions than their ancestral cultivars. I think you can justifiably care to a similar extent as you care about the difference between a Gala apple and a Golden Delicious apple.

Edit: Granted, most of the reaction is very much overblown.

comment by Kevin · 2010-03-19T14:19:21.712Z · LW(p) · GW(p)

I'm pretty sure Eliezer commented publicly on this and I think his answer was that it doesn't make sense to outlaw AI research.

10 free karma to whoever can find the right link.

Replies from: Vladimir_Nesov, Nick_Tarleton
comment by Vladimir_Nesov · 2010-03-19T20:37:58.206Z · LW(p) · GW(p)

The question was, "Is independent AI research likely to continue to be legal?". What Eliezer considers a reasonable policy isn't necessarily related to what government considers a reasonable policy. Though I think the answer to both questions is the same, for unrelated reasons.

comment by Nick_Tarleton · 2010-03-19T16:32:07.095Z · LW(p) · GW(p)

AI as a Positive and Negative Factor in Global Risk (section 10) discusses this. More obsoletely, so do CFAI (section 4) and several SL4 posts (e.g. this thread from 2003).

comment by Kevin · 2010-03-19T08:26:35.224Z · LW(p) · GW(p)

First Clay Millennium Prize goes to Grigoriy Perelman

http://news.ycombinator.com/item?id=1202591

Replies from: Singularity7337
comment by Singularity7337 · 2010-03-19T23:19:47.693Z · LW(p) · GW(p)

Is there a particular reason you linked to a blog instead of the actual story? How much money did you net from that decision?

Replies from: mattnewport, Kevin
comment by mattnewport · 2010-03-19T23:21:42.735Z · LW(p) · GW(p)

Hacker News isn't really a blog and it's certainly not Kevin's blog unless Paul Graham is posting here under an extremely deceptive user name.

comment by Kevin · 2010-03-20T04:45:38.886Z · LW(p) · GW(p)

Because at their best, Hacker News discussions are more useful than the actual story.

Hacker News isn't my site and it is effectively run as a non-profit anyways.

comment by MBlume · 2010-03-22T20:56:54.656Z · LW(p) · GW(p)

Tricycle has a page up called Hacking on Less Wrong which describes how to get your very own copy of Less Wrong running on your computer. (You can then invite all your housemates to register and then go mad with power when you realize you can ban/edit any of their comments/posts. Hypothetically, I mean. Ahem.)

I've updated it a bit based on my experience getting it to run on my machine. If I've written anything terribly wrong, someone let me know =)

Replies from: Jack
comment by Jack · 2010-03-22T21:34:50.092Z · LW(p) · GW(p)

This would be an interesting classroom tool.

comment by Kevin · 2010-03-21T21:52:31.395Z · LW(p) · GW(p)

Nanotech robots deliver gene therapy through blood

http://www.reuters.com/article/idUSTRE62K1BK20100321

comment by Kevin · 2010-03-21T03:42:22.925Z · LW(p) · GW(p)

What Would You Do With 48 Cores? (essay contest)

http://blogs.amd.com/work/2010/03/03/48-cores-contest/

Replies from: bogus, CannibalSmith
comment by bogus · 2010-03-21T04:06:23.476Z · LW(p) · GW(p)

That's actually a very interesting question. You'd want a problem which:

  1. is either embarrassingly parallel or large enough to get a decent speedup,
  2. involves a fair amount of complex branching and logic, such that GPGPU would be unsuitable,
  3. cannot be efficiently solved by "shared nothing", message-passing systems, such as Beowulf clusters and grid computing.

The link also states that the aim should be "to help society, to help others" and to "make the world a better, more interesting place". Here's a start; in fact, many of these problems are fairly relevant to AI.

comment by CannibalSmith · 2010-03-24T14:13:25.834Z · LW(p) · GW(p)

Finally get to play Crysis.

Write a real time ray tracer.

comment by wedrifid · 2010-03-31T05:22:55.494Z · LW(p) · GW(p)

Random observation: type in the first few letters of 'epistemic' and google goes straight to suggesting 'epistemological anarchism'. It seems google is right on board with helping SMBC further philosophical education.

comment by Alex Flint (alexflint) · 2010-03-29T22:34:27.633Z · LW(p) · GW(p)

Does anyone know which arguments have been made about the ETA of strong AI, on the scale of "is it more likely to be 30, 100, or 300 years?"

comment by Kevin · 2010-03-28T11:15:25.997Z · LW(p) · GW(p)

Michael Arrington: "It’s time for a centralized, well organized place for anonymous mass defamation on the Internet. Scary? Yes. But it’s coming nonetheless."

http://techcrunch.com/2010/03/28/reputation-is-dead-its-time-to-overlook-our-indiscretions/

Replies from: Jack
comment by Jack · 2010-03-28T12:24:12.767Z · LW(p) · GW(p)

Meh. I think Arrington and this company are overestimating the market. JuicyCampus went out of business for a reason and they had the advantage of actually targeting existing social scenes instead of isolated individuals. Here is how my campus's juicy campus page looked over time (apologies for crudeness):

omg! we have a juicycampus page! Who is the hottest girl? (Some names) Who is the hottest guy? (Some names) Alex Soandso is a slut! Shut up Alex is awesome and you have no friends! Who are the biggest players? (Some names) Who had sex with a professor? (No names) Who is the hottest professor? (A few names) Who has the biggest penis?

... This lasted for about a week. The remaining six months until JuicyCampus shut down consisted of parodies about how awesome Steve Holt is and awkward threads obviously contrived by employees of Juicy Campus trying to get students to keep talking.

Because these things are uncensored the signal to noise ratio is just impossible to deal with. Plus for this to be effective you would have to be routinely looking up everyone you know. I guess you could have accounts that tracked everyone you knew... but are you really going to show up on a regular basis just to check? It does look like some of these gossip sites have been successful with high schools but those are far more insular and far more gossip-centered places than the rest of the world.

Replies from: Kevin
comment by Kevin · 2010-03-29T02:14:34.567Z · LW(p) · GW(p)

I'll be very surprised if this particular company is a success, but I don't think it's an impossible problem and I think there is probably some sort of a business execution/insight that could make such a company a very successful startup.

The successful versions of companies in this space will look a lot more like reputational economies and alternative currencies than marketplaces for anonymous libel like JuicyCampus.

comment by [deleted] · 2010-03-27T06:40:22.899Z · LW(p) · GW(p)

So, while in the shower, an idea for an FAI came into my head.

My intuition tells me that if we manage to entirely formalize correct reasoning, the result will have a sort of adversarial quality: you can "prove" statements, but these proofs can be overturned by stronger disproofs. So, I figured that if you simply told two (or more) AGIs to fight over one database of information, the most rational AGI would be able to set the database to contain the correct information. (Another intuition of mine tells me that FAI is a problem of rationality: once you have a rational AGI, you can just feed it CEV or whatever.)

Of course, for this to work, two things would have to happen: one of the AGIs would have to be intelligent enough to discover the rational conclusions, and no AGI could be so much smarter than the others that it could find tons of evidence in favor of its pet truths and have the database favor them even though they're false.

So, I don't think this will work very well. At least I came to it by despairing about how not everybody has an infinite amount of money and yet values it anyway, thereby making our economic system perfect!

Replies from: ata, bogus, None
comment by ata · 2010-03-27T07:17:46.396Z · LW(p) · GW(p)

My intuition tells me that if we manage to entirely formalize correct reasoning, the result will have a sort of adversarial quality: you can "prove" statements, but these proofs can be overturned by stronger disproofs.

That... doesn't sound right at all. It does sound like how people intuitively think about proof/reasoning (even people smart enough to be thinking about things like, say, set theory, trying to overturn Cantor's diagonal argument with a counterexample without actually discovering a flaw in the theorem), and how we think about debates (the guy on the left half of the screen says something, the guy on the right says the opposite, and they go back and forth taking turns making Valid Points until the CNN anchor says "We'll have to leave it there" and the viewers are left agreeing with (1) whoever agreed with their existing beliefs, or, if neither, (2) whoever spoke last). But even if our current formal understanding of reasoning is incomplete, we know it's not going to resemble that. Yes, Bayesian updating will cause your probability estimates to fluctuate up and down a bit as you acquire more evidence, but the pieces of evidence aren't fighting each other, they're collaborating on determining what your map should look like and how confident you should be.
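
A minimal numeric sketch of that point (made-up likelihood ratios, and assuming the pieces of evidence are independent): updating in odds form just multiplies the evidence together, so the "for" and "against" items jointly set the posterior rather than taking turns winning.

    # Independent evidence combines multiplicatively in odds form.
    # The likelihood ratios below are made up for illustration.

    def posterior_probability(prior_odds, likelihood_ratios):
        odds = prior_odds
        for lr in likelihood_ratios:
            odds *= lr                    # each piece of evidence shifts the odds
        return odds / (1 + odds)

    prior_odds = 1.0                      # 1:1, i.e. P = 0.5
    evidence = [4.0, 0.5, 3.0, 2.0]       # mixed: some favors the claim, some cuts against it
    print(round(posterior_probability(prior_odds, evidence), 3))  # 0.923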

Of course, for this to work . . . no AGI could be so much smarter than the others that it could find tons of evidence in favor of its pet truths and have the database favor them despite that they're false.

Why would we build AGI to have "pet truths", to engage in rationalization rather than rationality, in the first place?

Replies from: None
comment by [deleted] · 2010-03-28T01:29:30.600Z · LW(p) · GW(p)

But even if our current formal understanding of reasoning is incomplete, we know it's not going to resemble that. Yes, Bayesian updating will cause your probability estimates to fluctuate up and down a bit as you acquire more evidence, but the pieces of evidence aren't fighting each other, they're collaborating on determining what your map should look like and how confident you should be.

Yeah. So if one guy presents only evidence in favor, and the other guy presents only evidence against, they're adversaries. One guy can state a theory, show that all existing evidence supports it, and thereby have "proved" it, and then the other guy can state an even better theory, also supported by all the evidence but simpler, thereby overturning that proof.

Why would we build AGI to have "pet truths", to engage in rationalization rather than rationality, in the first place?

We wouldn't do it on purpose!

comment by bogus · 2010-03-27T11:14:05.060Z · LW(p) · GW(p)

My intuition tells me that if we manage to entirely formalize correct reasoning, the result will have a sort of adversarial quality: you can "prove" statements, but these proofs can be overturned by stronger disproofs.

Game semantics works somewhat like this; a proof is formalized as an "argument" between a Proponent and an Opponent. If an extension of game semantics to probabilistic reasoning exists, it will work much like the 'theory of uncertain arguments' you mention here.

comment by [deleted] · 2010-03-27T06:52:46.265Z · LW(p) · GW(p)

I seem to have man-with-a-hammer syndrome, and my hammer is economics. Luckily, I'm using economics as a tool for designing stuff, not for understanding stuff; there is no One True Design the way there's a One True Truth.

comment by Kevin · 2010-03-25T13:39:55.323Z · LW(p) · GW(p)

Could use some comment thread ringers here: http://news.ycombinator.com/item?id=1218075

comment by [deleted] · 2010-03-25T04:18:17.027Z · LW(p) · GW(p)

This is what non-reductionism looks like:

In a certain world, it's possible to build stuff. For example, you can build a ship. You build it out of some ingredients, such as wood, and by doing a bit of work. The thing is, though, there's only one general method that can possibly be used to build a ship, and there are some things you can do that are useful only for building a ship. You have some freedom within this method: for example, you can give your ship 18 masts if you want to. However, the way you build the ship has literally nothing to do with the end result; whether you put 18 masts on a ship or none, it will end up with precisely the correct number. Any variation on the ship-building process gives you either exactly the same sort of ship or nothing at all.

If I lived in this world, I would conclude that a ship simply isn't made up of parts.

comment by Strange7 · 2010-03-23T01:04:30.766Z · LW(p) · GW(p)

Let's say Omega opens a consulting service, but, for whatever reason, has sharply limited bandwidth, and insists that the order in which questions are presented be determined by some sort of bidding process. What questions would you ask, and how much would you be willing to pay per byte for the combined question and response?

Replies from: Sly
comment by Sly · 2010-03-24T21:34:20.649Z · LW(p) · GW(p)

How many know about this, and are games such as the lottery and sports betting still viable?

Lottery numbers / stock changes seem like the first impression answer to me.

Replies from: Strange7
comment by Strange7 · 2010-03-24T21:54:06.116Z · LW(p) · GW(p)

It's public knowledge. Omega is extraordinarily intelligent, but not actually omniscient, and 'I don't know' is a legitimate answer, so casinos, state lotteries, and so on would pay exorbitant amounts for a random-number generator that couldn't be cost-effectively predicted. Sports oddsmakers and derivative brokers, likewise, would take the possibility of Omega's advice into account.

comment by Strange7 · 2010-03-22T22:27:32.487Z · LW(p) · GW(p)

Fictional representation of an artificial intelligence which does not value self-preservation, and the logical consequences thereof.

comment by RobinZ · 2010-03-22T21:18:35.469Z · LW(p) · GW(p)

This will be completely familiar to most of us here, but "What Does a Robot Want?" seems to rederive a few of Eliezer's comments about FAI and UFAI in a very readable way - particularly those from Points of Departure. (Which, for some reason, doesn't seem to be included in any indexed sequence.)

The author mentions using these ideas in his novel, Free Radical - I can attest to this, having enjoyed it partly for that reason.

comment by Thomas · 2010-03-22T19:56:39.363Z · LW(p) · GW(p)

People gathered here mostly assume that evolution is slow and stupid, no match for intelligence at all - that a human, let alone a superintelligence, is several orders of magnitude smarter than the process which created us over the last several billion years.

Well, despite many fancy mathematical theories of packing, some of the best results have come from so-called digital evolution, where the only knowledge is that "overlapping is bad and a smaller frame is good." Everything else is random change and nonrandom selection.

Solutions that previously had to be developed by intelligence evolve quickly and "stupidly" from scratch here: http://critticall.com/SQU_cir.html
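
For a sense of how little machinery "random change and nonrandom selection" needs, here is a toy (1+1) hill-climber sketch in Python that shrinks a square frame around non-overlapping circles. It is only an illustration of the principle - not the algorithm behind the linked page - and every parameter is arbitrary:

    import random

    def frame_side(points, radius=1.0):
        xs = [x for x, _ in points]
        ys = [y for _, y in points]
        return max(max(xs) - min(xs), max(ys) - min(ys)) + 2 * radius

    def overlaps(points, radius=1.0):
        return sum(1 for i, a in enumerate(points) for b in points[i + 1:]
                   if (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 < (2 * radius) ** 2)

    def cost(points):
        # The only "knowledge": overlapping is bad, a smaller frame is good.
        return frame_side(points) + 100 * overlaps(points)

    def mutate(points, step=0.3):
        new = list(points)
        i = random.randrange(len(new))
        x, y = new[i]
        new[i] = (x + random.uniform(-step, step), y + random.uniform(-step, step))
        return new

    random.seed(0)
    current = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(8)]
    for _ in range(20000):
        candidate = mutate(current)            # random change
        if cost(candidate) <= cost(current):   # nonrandom selection
            current = candidate
    print(round(cost(current), 2))             # frame side length after evolution

Running it longer, or restarting from many random seeds, is essentially all the extra "intelligence" the method ever uses.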

comment by Kevin · 2010-03-22T11:08:30.659Z · LW(p) · GW(p)

Does anyone have any spare money on In Trade? The new Osama Bin Laden contract is coming out and I would like to buy some. If anyone has some money on In Trade, I would pay a 10% premium.

Also, is there anyone here who thinks the In Trade Osama contracts are priced too highly? http://www.intrade.com/jsp/intrade/contractSearch/index.jsp?query=Osama+Bin+Laden+Conclusion

comment by Nisan · 2010-03-22T07:25:38.254Z · LW(p) · GW(p)

Here's a puzzle that involves time travel:

Suppose you have just built a machine that allows you to see one day into the future. Suppose also that you are firmly committed to realizing the particular future that the machine will show you. So if you see that the lights in your workshop are on tomorrow, you will make sure to leave them on; if they are off, you will make sure to leave them off. If you find the furniture rearranged, you will rearrange the furniture. If there is a cow in your workshop, you will spend the next 24 hours getting a cow into your workshop.

My question is this: What is your prior probability for any observation you can make with this machine? For example, what are the odds of the windows being open?

Replies from: Eliezer_Yudkowsky, Alicorn, wedrifid
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-03-22T07:49:46.960Z · LW(p) · GW(p)

Can't answer until I know the laws of time travel.

No, seriously. Is the resulting universe randomly selected from all possible self-consistent ones? By what weighting? Does the resulting universe look like the result of iteration until a stable point is reached? And what about quantum branching?

Considering that all I know of causality and reality calls for non-circular causal graphs, I do feel a bit of justification in refusing to just hand out an answer.

Replies from: cousin_it, Nisan
comment by cousin_it · 2010-03-22T14:30:28.001Z · LW(p) · GW(p)

Can't answer until I know the laws of time travel.

Why is something like this an acceptable answer here, but not in Newcomb's Problem or Counterfactual Mugging?

Replies from: Vladimir_Nesov, Nick_Tarleton, Morendil
comment by Vladimir_Nesov · 2010-03-22T16:16:29.649Z · LW(p) · GW(p)

Because it's clear what the intended clarification of these experiments is, but less so for time travel. When the thought experiments are posed, the goal is not to find the answer to some question, but to understand the described situation, which might as well involve additionally specifying it.

comment by Nick_Tarleton · 2010-03-22T15:06:18.210Z · LW(p) · GW(p)

I can't imagine what you would want to know more about before giving an answer to Newcomb. Do you think Omega would have no choice but to use time travel?

Replies from: cousin_it
comment by cousin_it · 2010-03-22T16:02:46.618Z · LW(p) · GW(p)

No, but the mechanism Omega uses to predict my answer may be relevant to solving the problem. I have an old post about that. Also see the comment by Toby Ord there.

comment by Morendil · 2010-03-22T15:03:04.287Z · LW(p) · GW(p)

Because these don't involve time travel, but normal physics?

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-03-22T15:05:50.393Z · LW(p) · GW(p)

He did say "something like this", not "this".

comment by Nisan · 2010-03-22T22:33:52.817Z · LW(p) · GW(p)

I could tell you that time travel works by exploiting closed time-like curves in general relativity, and that quantum effects haven't been tested yet. But yes, that wouldn't be telling you how to handle probabilities.

So, it looks like this is a situation where the prior you were born with is as good as any other.

comment by Alicorn · 2010-03-22T07:52:16.185Z · LW(p) · GW(p)

Why am I firmly committed to realizing the future the machine shows? Do I believe that to be contrary would cause a paradox and explode the universe? Do I believe that I am destined to achieve whatever is foretold, and that it'll be more pleasant if I do it on purpose instead of forcing fate to jury-rig something at the last minute? Do I think that it is only good and right that I do those things which are depicted, because it shows the locally best of all possible worlds?

In other words, what do I hypothetically think would happen if I weren't fully committed to realizing the future shown?

Replies from: Sniffnoy, Nisan
comment by Sniffnoy · 2010-03-22T08:53:47.734Z · LW(p) · GW(p)

Agree with the question of why you would be doing this; sounds like optimizing on the wrong thing. Supposing that it showed me having won the lottery and having a cow in my workshop, it seems silly to suppose that bringing a cow into my workshop will help me win the lottery. We can't very well suppose that we were always wanting to have a cow in our workshop, else the vision of the future wouldn't affect anything.

comment by Nisan · 2010-03-22T22:05:36.192Z · LW(p) · GW(p)

I stipulated that you're committed to realizing the future because otherwise, the problem would be too easy.

I'm assuming that if you act contrary to what you see in the machine, fate will intervene. So if you're committed to being contrary, we know something is going to occur to frustrate your efforts. Most likely, some emergency is going to occur soon which will keep you away from your workshop for the next 24 hours. This knowledge alone is a prior for what the future will hold.

comment by wedrifid · 2010-03-22T08:14:44.968Z · LW(p) · GW(p)

My question is this: What is your prior probability for any observation you can make with this machine? For example, what are the odds of the windows being open?

Depends on the details of the counter-factual science. Does not depend on my firm commitment.

Replies from: Nisan
comment by Nisan · 2010-03-22T22:22:47.623Z · LW(p) · GW(p)

I was thinking of a closed time-like curve governed by general relativity, but I don't think that tells you anything. It should depend on your commitment, though.

comment by mattnewport · 2010-03-22T06:17:03.437Z · LW(p) · GW(p)

So healthcare passed. I guess that means the US goes bankrupt a bit sooner than I'd expected. Is that a good or a bad thing?

Replies from: Kevin
comment by Kevin · 2010-03-22T06:27:58.494Z · LW(p) · GW(p)

I think you're being overly dramatic.

Nate Silver has some good numerical analysis here: http://www.fivethirtyeight.com/2009/12/why-progressives-are-batshit-crazy-to.html

I don't think that US government debt has much connection to reality any more. The international macroeconomy wizards seem to make things work. Given their track record, I am confident that the financial wizards can continue to make a fundamentally unsustainable balance sheet sustainable, at least until the Singularity.

So I think that the marginal increase in debt from the bill is a smaller risk to the stability of the USA than maintaining the very flawed status quo of healthcare in the USA.

Useful question: When does the bill go into effect? My parents' insurance is kicking me off at the end of the month and it will be nice to be able to stay on it for a few more years.

comment by Kevin · 2010-03-19T08:27:12.187Z · LW(p) · GW(p)

Should you get a presidential physical?

http://www.cnn.com/2010/HEALTH/03/18/executive.physicals/index.html

comment by CannibalSmith · 2010-03-19T14:24:17.229Z · LW(p) · GW(p)

On a serious note, what is your (the reader's) favorite argument against a forum?

("I voted you down because this is not a meta thread." is also a valid response.)

Replies from: CannibalSmith, Larks
comment by CannibalSmith · 2010-03-19T14:28:51.221Z · LW(p) · GW(p)

The voting system is of utmost importance and I'd rather be inconvenienced by the current system than have a karma-free zone on this site.

comment by Larks · 2010-03-19T14:37:59.176Z · LW(p) · GW(p)

General risk-aversion; LW is a city on a hill, and the only one, so we should be very wary of fiddling unnecessarily.

Replies from: SilasBarta
comment by SilasBarta · 2010-03-19T14:51:12.341Z · LW(p) · GW(p)

Aha! The LW brand of conservatism!

Replies from: Larks
comment by Larks · 2010-03-19T15:06:57.210Z · LW(p) · GW(p)

Having said that, there are differences between my view and the one mattnewport mentioned. I don't necessarily believe the institutions exist for a good reason; it's not obviously related to the accumulated wisdom of ages or anything. Rather, out of all the internet sites out there, some were bound to turn out well.

However, given this, it makes more sense for those to maintain their current standards and for others to copy and implement their ideas, and then experiment, than for Atlas to experiment with better ways of holding up the world.

For the same reason we worry about existential risks to humanity, we should worry about existential risks to LW.

Of course, this argument is strong in proportion to how exceptional LW is, and weak in proportion to how close other sites are.

comment by roland · 2010-03-19T20:29:57.239Z · LW(p) · GW(p)

Very interesting article about suppressed science in physics: http://www.suppressedscience.net/physics.html

Among other things they discuss Einstein's relativity theory and cold fusion. Btw, I couldn't fail to notice that I see the very same tendency to suppress evidence that goes against certain established theories here on LW. 9/11 would be one case for certain.

Replies from: prase, orthonormal, Jack
comment by prase · 2010-03-22T10:33:41.762Z · LW(p) · GW(p)

I don't know enough about cold fusion, so I haven't read that part. As for relativity theory, it seems that the only valid argument against it presented in the linked article is, translated into more "lesswrongian" language, that doubting special relativity strongly endangers your status. Which is true, at least in general. Then they mention a "Modified Lorentz Aether Theory" without giving a single link to a place where this theory is explained, and support it by claiming that the Michelson-Morley experiment conducted in 1887 in fact gave slightly different results than it is now believed to have given - as if those results were not in any case screened off by more recent experiments of the same kind.

The overall feeling I got from reading the passages is that the physics community is ready to censor any theory which doesn't precisely agree with special relativity. Which is nonsense. First of all, general relativity itself extends special relativity and thus disagrees with it, and the differences between GR and SR are arguably bigger than those between SR and Newton. Moreover, practically all physicists expect GR to break down somewhere, at the Planck scale at the latest.

Well, nobody really believes that progress will take us back to the æther. There is nothing strange about that. The authors of Suppressed Science have, as usual in similar debates, written about how Newtonian physics was supposed to be absolutely correct until it turned out to be false, but they somehow forgot to notice that it wasn't replaced by "Modified Galilean Epicycle Theory" or something of that sort.

Or more generally, their style of writing ignores that most of the alleged crackpots really are crackpots.

comment by orthonormal · 2010-03-21T18:37:37.423Z · LW(p) · GW(p)

It's too bad that this will be downvoted into invisibility just because you didn't have the discipline to leave off the last sentence.

The 'suppressed science' link is worth reading IMO, even though I think it's highly probable that the experimental results therein are bunk; it's quite conceivable that the scientific establishment draws its "crank research" line too early in some modern cases, given its history.

Replies from: NancyLebovitz, Mitchell_Porter, roland, roland
comment by NancyLebovitz · 2010-03-22T12:38:28.993Z · LW(p) · GW(p)

Here's one case where I'd say the crank-research filter is too sensitive: natural vision improvement. I've had some experience of it working, but the usual response to the idea that anything other than surgery or lenses can improve eyesight in well-nourished people has been "it's all about the shape of the lenses of your eyes", and that anything else is nonsense.

Since I knew you'd ask: my night vision has improved considerably. I've been too stubborn to wear glasses (details later if people are interested). When I moved to Philadelphia in 1995, I couldn't read the street signs at night. And when I say couldn't, I mean that I couldn't read them even if I stood as close to the sign as possible and squinted. Now I can read them at a moderate distance (I'll check on just how far, but it's a definite improvement).

I haven't been formally testing my vision; mostly I've been checking acuity by the distance at which I can read street signs.

And it isn't just a shift of my clear visual range with time. (I'm 56.) I can still see the little Lincoln inside the Lincoln Memorial on the back of a penny.

Now, it turns out that neuroplasticity applies to the visual system.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-03-22T20:56:48.519Z · LW(p) · GW(p)

What did you do to improve your vision?

comment by Mitchell_Porter · 2010-03-22T12:21:48.444Z · LW(p) · GW(p)

It's too bad that this will be downvoted into invisibility just because you didn't have the discipline to leave off the last sentence.

Actually, the article has about as much substance as a typical 9/11 conspiracy diatribe. That whole site is almost enough to make me abandon my own contrarian opinions, for fear of being just another fool.

comment by roland · 2010-03-22T18:34:38.595Z · LW(p) · GW(p)

This article might be of interest to you.

comment by roland · 2010-03-22T18:17:58.126Z · LW(p) · GW(p)

Feel free to post the link again, leaving off the last sentence.

comment by Jack · 2010-03-24T05:16:33.622Z · LW(p) · GW(p)

Btw, I couldn't fail to notice that I see the very same tendency to suppress evidence that goes against certain established theories here on LW. 9/11 would be one case for certain.

Not to be rude, but I couldn't fail to notice that you systematically overestimate claims against established theories and routinely fail to notice indicators of crank science, unprofessionalism, and mental instability in your sources.

Start here.

Replies from: roland
comment by roland · 2010-03-24T05:23:50.331Z · LW(p) · GW(p)

Could you provide a specific example to support your claim?

Replies from: Jack
comment by Jack · 2010-03-24T06:29:55.603Z · LW(p) · GW(p)

Infinite-energy.com. Linked from this comment. Their "Who We Are" page uses the word "new" twenty-eight times.

The FAQ includes this lovely bit:

Who is opposing New Energy science and technology? Only fools and small-minded people would oppose research on something so wonderful—even if there were only a 10% chance that it was correct (and the true percentage is far higher —100%, in our opinion). Sad to say, there are plenty of fools arrayed against New Energy. [...] There are plenty of science Ph.D.s and even Nobel laureates who have obscenely attacked cold fusion, vacuum energy, hydrino physics, and investigations into loopholes in the Second Law of Thermodynamics. Their credentials are worthless. What they have to say on the subject of New Energy usually amounts to no more than uninformed bigotry. These people apparently believe that science has come to an end—that the broad outlines of physics and biology, as described in current texts, are on absolutely secure grounds. One of the greatest buffoons in the sad array of enemies is Robert Park of the American Physical Society and the University of Maryland, whose “What’s New” electronic column gives weekly cues to an army of incompetent “science journalists,” who then misinform other journalists, the establishment’s so-called “scientists,” and cowering government bureaucrats and politicians.

Emphasis added. Like, are you kidding us with this stuff? Scare quotes, ad homs, hyperbole, obvious distortions... Also, they think the chance that all these theories are correct is 100%? ONE HUNDRED FUCKING PERCENT?

Same goes for the "suppressed science" article above. "A new Inquisition"?! Btw, see this review of the book where that quote comes from.

Going back to the 9/11 discussions, you routinely cited this paragon of rationality and professionalism. The claims the site made were instantly rebutted with a cursory Google search.

And this is coming from someone who thinks contemporary physics has some problems, and who has had threads downvoted for suggesting that Lorentzian ether theory hadn't been falsified (though I don't know if that was the cause of the downvotes; I still don't understand what I was doing wrong there). A few of these sources might even end up being right about things. But their tone, style, and positions are strong evidence that they are cranks, and what they say should be taken with the appropriate grain of salt.

Replies from: roland
comment by roland · 2010-03-24T15:52:02.783Z · LW(p) · GW(p)

Well, maybe some sources look like crackpots, but that doesn't mean all their claims are false.

Going back to the 9/11 discussions, you routinely cited this paragon of rationality and professionalism. The claims the site made were instantly rebutted with a cursory Google search.

Well, I might have cited specific articles from that source, but that doesn't mean I agree with everything on it.

As for the rebuttal you mention regarding the 9/11 passenger lists, AFAIK there is still a lot of controversy about them. There were several lists published by different news outlets at different times. The way to resolve the issue would be to see the FAA's official list, but that hasn't been made public.