Posts

Some recent evidence against the Big Bang 2015-01-07T05:06:27.945Z

Comments

Comment by JStewart on Some recent evidence against the Big Bang · 2015-01-07T05:43:47.253Z · LW · GW

Is this not kosher? The minimum karma requirement seems like an anti-spam and anti-troll measure, with the unfortunate collateral damage of temporarily gating out some potentially good content. This post seems to me to be clearly good content, and my suggestion to MazeHatter in the open thread that it deserved its own thread was upvoted.

If that doesn't justify skirting the rule, I can remove the post.

Comment by JStewart on Open thread Jan. 5-11, 2015 · 2015-01-07T05:09:23.590Z · LW · GW

I've posted it here.

Comment by JStewart on Open thread Jan. 5-11, 2015 · 2015-01-06T21:14:49.300Z · LW · GW

I think you should post this as its own thread in Discussion.

Comment by JStewart on Open thread, Nov. 24 - Nov. 30, 2014 · 2014-11-26T20:58:55.753Z · LW · GW

This has been proposed before, and on LW is usually referred to as "Oracle AI". There's an entry for it on the LessWrong wiki, including some interesting links to various discussions of the idea. Eliezer has addressed it as well.

See also Tool AI, from the discussions between Holden Karnofsky and LW.

Comment by JStewart on 2014 Less Wrong Census/Survey · 2014-10-28T23:55:28.929Z · LW · GW

Count me surveyed.

Comment by JStewart on Rationality Quotes May 2013 · 2013-05-28T02:04:58.958Z · LW · GW

Interesting. I wonder to what extent this corrects for people's risk-aversion. Success is evidence against the riskiness of the action.
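To make that concrete, here is a minimal Bayes sketch (all numbers invented for illustration): if risky actions succeed less often than safe ones, then observing a success lowers the posterior probability that the action was risky.

```python
# Minimal Bayes sketch with invented numbers: success is evidence
# against riskiness whenever risky actions succeed less often.
p_risky = 0.5                 # prior probability the action was risky
p_success_given_risky = 0.4   # risky actions succeed less often
p_success_given_safe = 0.9

p_success = (p_risky * p_success_given_risky
             + (1 - p_risky) * p_success_given_safe)
posterior = p_risky * p_success_given_risky / p_success
print(posterior)  # ~0.31, down from the 0.5 prior
```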

Comment by JStewart on Circular Preferences Don't Lead To Getting Money Pumped · 2012-09-11T05:25:24.717Z · LW · GW

Having circular preferences is incoherent, and being vulnerable to a money pump is a consequence of that.

> I knew that if I had 0.95Y I would trade it for (0.95^2)Z, which I would trade for (0.95^3)X, then actually I'd be trading 1X for (0.95^3)X, which I'm obviously not going to do.

This means that you won't, in fact, trade your X for .95Y. That in turn means that you do not actually value X at .9Y, and so the initially stated exchange rates are meaningless (or rather, they don't reflect your true preferences).

Your strategy requires you to refuse all trades at exchange rates below the money-pumpable threshold, and you'll end up only making trades at exchange rates that are non-circular.
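As a minimal sketch (using the 0.95 rate from the quote; the original post's exact numbers may differ), compounding the three trades shows why the full circuit is a guaranteed loss:

```python
# Sketch of the money pump: apply the three circular trades at an
# assumed 0.95 exchange rate each and compare with just keeping 1 X.
rate = 0.95
holdings = 1.0  # start with 1 unit of X

for trade in ("X -> Y", "Y -> Z", "Z -> X"):
    holdings *= rate  # each leg multiplies holdings by the rate

print(holdings)  # 0.857375 = 0.95**3 units of X, a sure loss
```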

Comment by JStewart on The noncentral fallacy - the worst argument in the world? · 2012-08-28T12:12:12.687Z · LW · GW

Judging from the comments this is receiving on Hacker News, this post is a mindkiller. HN is an audience more friendly to LW ideas than most, so this is a bad sign. I liked it, but unfortunately it's probably unsuitable for general consumption.

I know we've debated the "no politics" norm on LW many times, but I think a distinction should be made based on a post's target audience. In posts aimed at contributing to "raising the sanity waterline", I think we're shooting ourselves in the foot by invoking politics.

Comment by JStewart on A Primer On Risks From AI · 2012-03-24T17:31:10.243Z · LW · GW

I like the combination of conciseness and thoroughness you've achieved with this.

There are a couple of specific parts I'll quibble about:

> Therefore the next logical step is to use science to figure out how to replace humans by a better version of themselves, artificial general intelligence.

"The Automation of Science" section seems weaker to me than the others, perhaps even superfluous. I think the line I've quoted is the crux of the problem; I highly doubt that the development of AGI will be driven by any such motivations.

> Will we be able to build an artificial general intelligence? Yes, sooner or later.

I assign a high probability to the proposition that we will be able to build AGI, but I think a straight "yes" is too strong here.

Comment by JStewart on A Primer On Risks From AI · 2012-03-24T17:19:58.441Z · LW · GW

Out of curiosity, what are your current thoughts on the arguments you've laid out here?

Comment by JStewart on What epistemic hygiene norms should there be? · 2012-03-22T01:28:22.143Z · LW · GW

I agree. I've noticed an especially strong tendency to premature generalization (including in myself) in response to people asking for advice. Tell people what your experiences were, not (just) the general conclusions you drew from them.

Comment by JStewart on A Problem About Bargaining and Logical Uncertainty · 2012-03-22T00:59:41.064Z · LW · GW

Is Omega even necessary to this problem?

I would consider transferring control to staply if and only if I were sure that staply would make the same decision were our positions reversed (in this way it's reminiscent of the prisoner's dilemma). If I were so convinced, then shouldn't I consider staply's argument even in a situation without Omega?

If staply is in fact using the same decision algorithms I am, then he shouldn't even have to voice the offer. I should arrive at the conclusion that he should control the universe as soon as I find out that it can produce more staples than paperclips, whether it's a revelation from Omega or the result of cosmological research.

My intuition rebels at this conclusion, but I think it's being misled by heuristics. A human could not convince me of this proposal, but that's because I can't know we share decision algorithms (i.e. that s/he would definitely do the same in my place).

This looks to me like a prisoner's dilemma problem where expected utility depends on a logical uncertainty. I think I would cooperate with prisoners who have different utility functions as long as they share my decision theory.
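As a toy sketch of that rule (the function name and all quantities are mine, not from the post): hand over control only when the decision algorithms match and the universe can satisfy the other agent's utility function better.

```python
# Toy sketch, not from the original post: cooperate on a shared
# decision algorithm by giving control to whichever utility function
# the universe can satisfy more.
def choose_controller(max_paperclips: float, max_staples: float,
                      same_decision_algorithm: bool) -> str:
    if not same_decision_algorithm:
        return "clippy"  # no basis for trust; keep control
    # Staply, running the same algorithm, reaches the same verdict.
    return "staply" if max_staples > max_paperclips else "clippy"

print(choose_controller(1e20, 3e20, True))   # staply gets control
print(choose_controller(1e20, 3e20, False))  # clippy keeps control
```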

(Disclaimers: I have read most of the relevant LW posts on these topics, but have never jumped into discussion on them and claim no expertise. I would appreciate corrections if I misunderstand anything.)

Comment by JStewart on February 2012 Media Thread · 2012-02-07T12:41:43.702Z · LW · GW

I dunno, I think it is. It took me several hours of reflection to realize that it could be framed in those terms. The show didn't do any breaking.

Comment by JStewart on February 2012 Media Thread · 2012-02-07T12:39:33.846Z · LW · GW

Yes, thanks. I wanted to use strikethrough but a) I couldn't figure out how to do it in LW's markdown and b) it wouldn't work anyway if you copy/paste to rot13.com like I do.

Comment by JStewart on February 2012 Media Thread · 2012-02-06T15:14:01.231Z · LW · GW

I mostly agree with you. In particular I really liked that Znqbxn'f jvfu jrag fb sne nf gb erjevgr gur havirefr. Gur fbhepr bs ure rzbgvbaf orvat sbe gur zntvpny tveyf naq gur pehrygl bs gur onetnva gurl znqr, V jnf npghnyyl n yvggyr jbeevrq va gur yrnq-hc gb gur svanyr gung ure jvfu jbhyqa'g or zbzragbhf rabhtu.

Ng gur fnzr gvzr, gubhtu gur jvfu raqrq hc ovt rabhtu gb or n fngvfslvat raq, V guvax vg'f cerggl rnfl gb jbaqre jul fur pbhyqa'g tb shegure. Gur arj havirefr vf arneyl vqragvpny gb gur byq bar, evtug qbja gb vaqvivqhny crbcyr. Gur zntvpny tveyf ab ybatre unir gb or chg vagb fhpu n ubeevoyr ab-jva fvghngvba, ohg gurl fgvyy unir gb evfx gurve yvirf. Naq sbe gur abezny crbcyr, gurer'f ab creprcgvoyr punatr ng nyy.

Gur bayl ceboyrz V unq jvgu guvf vf Xlhorl'f pynvz gb gehr travr-yvxr cbjref. Ryvrmre jebgr n cbfg frireny lrnef ntb nobhg guvf: http://lesswrong.com/lw/ld/thehiddencomplexityofwishes/. Gehr travr cbjref znxr n fgbel vzcbffvoyr gb ernfba nobhg. Gur ceboyrz vf gung jvfuvat vf n irel vyy-qrsvarq vqrn. Jung vs bar jvfurq gb or vzzbegny? Gung jvfu vf irel pybfr gb Znzv'f, ohg fgevpgyl fhcrevbe. Ubj pbhyq gung jbex? Jbhyq gung fnir ure sebz gur sngr bs orpbzvat n jvgpu? Gur fgbel pna'g fheivir fbzrguvat yvxr gung. Vg'f whfg nf rnfl gb fvzcyl jvfu rirelbar vzzbegny vs gung jbexrq. Jbhyq Xlhorl tenag fbzrguvat yvxr n jvfu sbe vzzbegnyvgl va gur snfuvba bs Ryvrmre'f "hafnsr travr", ol fhoiregvat gur vagrag oruvaq gur jvfu? Sebz jung jr frr va gur fubj, nyzbfg pregnvayl abg.

V guvax vg dhvgr oryvrinoyr gung Znzv jbhyqa'g guvax bs guvf nf fur ynl qlvat. Creuncf gur fubeg-fvtugrqarff bs Xlbxb'f naq Ubzhen'f jvfurf pna or fbzrjung cynhfvoyl whfgvsvrq ol gurve genhzn naq gurve lbhgu. Ohg jvgu gurvef, naq rfcrpvnyyl jvgu Fnlnxn'f, V srry yvxr lbh unir ab pubvpr ohg gb vaibxr n ybg bs "orpnhfr gur jevgref fnvq fb". Fnlnxn naq Znqbxn fcrag dhvgr n ybg bs gvzr pbafvqrevat gur cebfcrpg bs orpbzvat znubh fubhwb naq guvaxvat nobhg jvfurf.

Gurer ner, bs pbhefr, yvzvgf orfvqrf gur tveyf' vzntvangvbaf. Gur jvfu zhfg nyfb pbzr sebz gurve fgebatrfg srryvatf, juvpu va gurbel urycf rkcynva gur fznyy fpnyr bs gur jvfurf gurl pubfr (V'z ybbxvat ng lbh, Fnlnxn). Naq juvyr vg'f gehr gurfr punenpgref nera'g ZbE!Uneel, gurer'f fbzr cerggl rtertvbhf snvyher gb fuhg hc naq zhygvcyl. (Guvf vf unys bs gur ernfba V chg n fznyy qvfpynvzre abg gb tb vagb guvf fubj rkcrpgvat n fgebat qbfr bs engvbanyvgl)

Nyfb, jr yrnea sebz Xlhorl gung abg rirelbar'f jvfurf ner perngrq rdhny; gur "fvmr" bs gur jvfu qrcraqf ba lbhe cbjre nf n zntvpny tvey. (Qvq Xlhorl gryy Fnlnxn guvf rkcyvpvgyl va gur pbairefngvba jurer ur erirnyf gung Znqbxn unf terng cbgragvny cbjre? V frrz gb erzrzore uvz fnlvat gura gung gur jvfu fur pbhyq znxr jbhyq or uhtr). Vg'f pyrneyl abg nf fvzcyr nf whfg "jvfu sbe nalguvat", gura. V guvax Xlhorl jbhyq unir gb whfg ershfr gb tenag fbzrguvat yvxr gur vzzbegnyvgl jvfu, be nalguvat ryfr gung jbhyq pyrneyl whfg oernx guvatf. Abg rabhtu cbjre.

Vg'f gur nobir genva bs gubhtug gung yrnqf zr gb pbapyhqr gung juvyr Xlhorl tyvoyl gryyf gur cbgragvny erpehvgf gung gurl pna jvfu sbe nalguvat ng nyy, ur'f bapr ntnva qrprvivat gurz. Vapvqragnyyl, n jvfu bs gur zntavghqr bs Znqbxn'f zhfg unir orra jryy orlbaq jung rira Xlhorl gubhtug jnf jvguva uvf cbjref. Vs gur Vaphongbef gubhtug gung erjevgvat gur ehyrf bs gur havirefr jrer cbffvoyr gura vg qbrfa'g znxr zhpu frafr gung gurl'q snvy gb gel gb qb guvf gurzfryirf. Rira zber rkcyvpvgyl, VVEP ur rkcerffrf dhvgr fbzr fhecevfr jura fur svanyyl znxrf ure pbagenpg.

V ubcr gur nobir nqrdhngryl rkcynvaf zl cevbe pbzzrag. Sebz urer ba ner zl gubhtugf ba gur raqvat, naq jul V ernyyl yvxrq vg rira gubhtu V guvax vg'f cerggl aba-engvbany (vgf aba-engvbanyvgl orvat gur bgure unys gur ernfba sbe zl qvfpynvzre).

Fb univat ernfbarq guhf sne, jr pbzr gb Znqbxn. Znqbxn, nybar nzbat nyy zntvpny tveyf, npghnyyl qbrf unir gur cbjre gb punatr guvatf ba n tenaq fpnyr. Fb ubj qbrf ure jvfu fgnpx hc? V unq gb cbaqre sbe n juvyr nsgre V svavfurq gur fubj va beqre gb qrpvqr ubj V sryg nobhg vg. Gehr, ure jvfu yvgrenyyl punatrq gur ehyrf bs gur havirefr. Ohg ba gur bgure unaq, pbafvqre gur ulcbgurgvpny jbeyq jurer Xlhorl vf abg n ylvat onfgneq^U^U^U^U^U^U pbzcrgrag tnzr gurbevfg naq gurer'f ab ubeevoyr sngr va fgber sbe gur tveyf. Bhe jbeyq, zber be yrff. Tvira n jvfu, gurer ner na njshy ybg bs bgure ubeevoyr guvatf gung pbhyq fgnaq gb punatr. Gur erfhyg bs Znqbxn'f jvfu bayl oevatf hf hc gb gur yriry bs gung jbeyq, pbzcyrgr jvgu nyy vgf qrngu naq zvfrel. Vf gurer ab jvfu gung pbhyq unir vzcebirq guvatf zber guna Znqbxn'f qvq? V frr ab ernfba gb oryvrir gung.

Ohg vf vg gbb unefu ba gur fubj gb gel gb ubyq vg gb fbzrguvat yvxr n genafuhznavfg fgnaqneq? Fher, V'q yvxr gb frr ure qb fbzrguvat ernyyl nzovgvbhf jvgu gur shgher bs uhznavgl, ohg V'z abg gur bar jevgvat vg. Whfg orpnhfr V'z crefbanyyl vagrerfgrq va genafuhznavfz rgp fubhyqa'g yrnq zr gb qvat gur fubj hasnveyl.

Lrg V srry yvxr gur fubj vgfrys qbrf yvir hc gb n uvture fgnaqneq sbe ernyvfz/engvbanyvgl guna gur infg znwbevgl bs navzr. Gur rcvfbqr jurer Xlhorl rkcynvaf gur uvfgbel bs gur Vaphongbef' pbagnpg jvgu uhznavgl ernyyl fgehpx zr. Vagryyvtrag nyvraf gung ner abg bayl vagryyvtrag, ohg npghnyyl vauhzna? Abg gur fbeg bs guvat lbh rkcrpg bhg bs znubh fubhwb. Guvf fubj unf zber fpv-sv pubcf guna zbfg bs gur navzr gung npghnyyl trg pynffvsvrq nf fpv-sv. Nqq gb gung gur trahvar rssbeg gb unir gur punenpgref naq gurve npgvbaf znxr frafr naq srry ernyvfgvp, naq gur (V guvax) boivbhf snpg gung gur jevgref xarj ubj qvssvphyg jvfuvat jbhyq or gb jbex vagb gur fgbel va n jnl gung znxrf frafr. Guvf vf gur fghss gung grzcgrq zr gb cbfg n erpbzzraqngvba sbe guvf fubj ba YJ.

Gur irel raq fbeg bs gbbx zr ol fhecevfr, va pbagenfg gb nyy guvf. Qhevat gur frdhrapr va juvpu Znqbxn "nfpraqf", V engure rkcrpgrq gur erfhyg gb ybbx zber... qvssrerag. Rira tenagvat gung ure jvfu jnf gur znkvzhz fur pbhyq unir qbar, V sbhaq vg fhecevfvat gung gur raq erfhyg jbhyq or gur fnzr jbeyq vafgrnq bs bar jvgu n irel qviretrag uvfgbel. Naq jurer qvq gurfr arj rarzvrf fhqqrayl fubj hc sebz, abj gung gurer ner ab jvgpurf?

Guvaxvat nobhg vg zber, V neevirq ng na vagrecergngvba gung ynvq gb erfg zl harnfr jvgu gur raqvat. Gur xrl gb vg jnf gur arj rarzvrf, jubfr pbairavrag nccrnenapr yrsg vagnpg rira gur arprffvgl sbe gur tveyf gb svtug. V guvax gung sebz gur ortvaavat gur jevgref jrer nvzvat sbe n fhogyr nffnhyg ba gur sbhegu jnyy.

Znqbxn'f jvfu jnf gb ghea ure fubj vagb n abezny znubh fubhwb.

Comment by JStewart on February 2012 Media Thread · 2012-02-06T01:26:16.070Z · LW · GW

I nearly posted exactly this earlier today.

It's an excellent show, though don't expect too much rationality. Madoka is no HP:MoR, but since there is very little rationality-relevant content in anime it does stand out.

For me it was a case of two independent interests unexpectedly having some crossover. As a fan of SHAFT (the animation studio) and mahou shoujo in general, it was a given I was going to watch Madoka. Then fhcreuhzna vagryyvtraprf naq vasbezngvba-nflzzrgevp artbgvngvba?

In a classic mahou shoujo setup like this, with magical powers and wish-granting etc, an obvious-to-LWers objection will be "Why doesn't someone just wish for no one to suffer/die ever again?". Certainly MoR!Harry would have handled this world a lot differently. I was expecting to just have to live with this oversight in an otherwise impressively coherent setting. But I think by the end of the show even that can be justified if you really want to, based on Xlhorl'f ercrngrq qrprcgvbaf (naq gurersber gur pbzcyrgr ynpx bs perqvovyvgl bs uvf pynvzf bs hayvzvgrq cbjre gb tenag jvfurf), gur rkgerzryl lbhat cebgntbavfgf, naq gur nccnerag "ehyr" gung gur zntavghqr bs gur jvfu qrcraqf ba gur qrcgu bs srryvat naq qrfver sbe vg. Madoka is not MoR!Harry, after all.

Comment by JStewart on Video Q&A with Singularity Institute Executive Director · 2011-12-11T04:52:16.964Z · LW · GW

As one of the 83.5%, I wish to point out that you're misinterpreting the results of the poll. The question was: "Which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?" This is not the same as "unfriendly AI is the most worrisome existential risk".

I think that unfriendly AI is the most likely existential risk to wipe out humanity. But I think that an AI singularity is likely farther off than 2100. I voted for an engineered pandemic, because that and nuclear war were the only two risks I thought decently likely to occur before 2100, though a >90% wipeout of humanity is still quite unlikely.

edit: I should note that I have read the sequences and it is because of Eliezer's writing that I think unfriendly AI is the most likely way for humanity to end.

Comment by JStewart on 2011 Less Wrong Census / Survey · 2011-11-03T03:29:28.616Z · LW · GW

I just took the survey. I was pretty sure I remembered the decade of Newton's book, but I was gambling on the century and I lost.

I think quibbles over definitions and wording of most of the probability questions would change my answers by up to a couple of orders of magnitude.

Lastly, I really wanted some way to specify that I thought several xrisks were much more likely than the rest (for example, [nuclear weapons, engineered pandemic] >> others).

Comment by JStewart on Rhetoric for the Good · 2011-10-26T23:15:13.821Z · LW · GW

My central objection is that this feels like a very un-LessWrongish way to approach a problem. A grab bag of unrelated and unsourced advice is what I might expect to see on the average blog.

Not only is there basically no analysis of what we're trying to do and why, but the advice is a mixed bag. If one entry on the list completely dominates most of the others in terms of effectiveness (and is a prerequisite to putting the others to good use), I don't expect it to be presented as just another member of the list. A few other entries on the list I consider to be questionable advice or based on mistaken assumptions.

Upon rereading, I fear this comes across as much harsher criticism than I intend, because I really do think this is one of the most valuable skills to cultivate. It's also a thorny problem that attracts a lot of bullshit, being particularly vulnerable to generalization from one example. I'm glad Lukeprog posted this.

Comment by JStewart on Rhetoric for the Good · 2011-10-25T05:56:35.539Z · LW · GW

Edit: Grouchymusicologist has already covered silly grammar-nazism, passives, and Strunk and White, complete with the Languagelog links I was looking for.

> 25. Write like you talk. When possible, use small, old, Germanic words.

I think this one should be deleted. Its first sentence is wrong as written, but the idea behind it is expressed clearly and sufficiently in #26 anyway. People do not talk in grammatical, complete sentences.

As for the second half, do you really look up etymologies as you write? I have only the vaguest sense of the origins of the vast majority of words in English, and this despite taking 5 years of French in school. This advice doesn't look like it was actually meant to be followed in any practical sense, and I would need some convincing that it's even a good idea.

> 14. Almost always list things in threes, in ascending order of the word length of the list item.

This advice, similarly, seems quite arbitrary and unmotivated.

Comment by JStewart on Rhetoric for the Good · 2011-10-25T05:24:45.497Z · LW · GW

I agree with your conclusion (this is a worthwhile pursuit), but I have some qualms.

There are a couple of general points that I think really need to be addressed before most of the individual points on this list can be considered seriously:

  • Following a list of prescriptions and proscriptions is a really poor way to learn any complex skill. A bad writer who earnestly tries to follow all the advice on this list will almost certainly still be bad at writing. I think the absolute best, most important advice to give to an aspiring writer is to write. A lot.

  • What constitutes "good" writing is a matter of taste. As with other aesthetic endeavors, it's practically impossible to write tasteful prose if you don't have taste in reading prose. I don't see any real way to develop taste without reading a lot, and paying attention to what you're reading and how it's written. To some extent I think every person has to pick apart writing that they think is good and figure out for themselves the nuts and bolts of good writing. The resulting insights might be temptingly easy to distill into bullet points, but this is a very leaky process of abstraction. Most of the value of these insights isn't really communicated in the summary, but is in the data in your brain that made these patterns obvious to you. It's the "A Monad is Like a Burrito" problem.

  • Compounding the issue of taste, there's the problem that "good writing" is an underspecified term. There are a lot of extremely popular and wealthy authors whose writing isn't considered "good," at least by those who seem to have taste. Is popularity orthogonal to "good"? Should our goal even be "good," then? Or is maximal popularity not, in fact, our goal? I have no idea what the answers to these questions should be. Would I rather write like Nabokov than like Dan Brown? Yes. Would that be instrumentally useful in spreading my ideas (or ideas that I like) as widely as possible? I don't know. Possibly not.

I have a few comments about specific points on your list, but I'll split those into other comments.

Comment by JStewart on Welcome to Less Wrong! (2010-2011) · 2011-04-26T15:38:13.352Z · LW · GW

That was an awesome introduction post. I like the way you think.

Comment by JStewart on Q: Experiment on blaming the one you hurt? · 2011-03-29T02:44:44.275Z · LW · GW

Some googling led me to the Wikipedia article on cognitive dissonance (this link should get you to the right spot on the page).

Wikipedia's citation for this is: Tavris, Carol; Elliot Aronson (2008). Mistakes were made (but not by me). Pinter and Martin. pp. 26–29. This book's first 55 pages are viewable on Google Books. I'll attempt to link directly to the relevant section here but it's an ugly URL so I'm not sure it'll work.

Citation 17 looks like just the thing you're looking for, but the viewable portion of their citations section cuts off just too early on both Amazon and Google Books. Thankfully some searching turns it up readily. I don't know one academic database from another, let alone which you might have access to, but here's a link to ScienceDirect. The paper is "The physiology of catharsis" by Michael Kahn.

The book's discussion implies that there is other work that supports this study ("The first experiment that demonstrated this actually came as a complete surprise to the investigator..."), but there are no other relevant citations in that section. Their second example, playground bullying, has, rather than a supporting citation, a quote from a Dostoevsky novel.

I don't have time to do further searching myself at the moment, but from their discussion I'd try investigating the term "dissonance theory" next.

Comment by JStewart on Aieee! The stupid! it burns! · 2010-12-04T18:44:58.971Z · LW · GW

The original was Eliezer himself, in How to Seem (and Be) Deep. I'm more fond of TheOtherDave's analogy, though, since I think the baseball bat analogy suffers from one weakness: you're drawing a metaphorical parallel in which death (whose badness is exactly the point in dispute) is replaced by something that's uncontroversially bad. Sometimes you can't get any farther than this, since the move sets off some people's BS detectors (and to be honest I think the heuristic they're using to call foul on it is a decent one).

Even if you can get them to consider the true payload of the argument (that clearly bad but inevitable things will probably be rationalized, and therefore that we should expect death to have some positive-sounding rationalizations even if it were A Very Bad Thing), you still haven't got a complete argument. That baseless rationalizations can be expected to crop up justifying inevitabilities does not prove that your conversation partner's justifications are baseless; it only provides an alternate explanation for the evidence.

It isn't actually hard to flesh out this line of thought into a more compelling argument, but I think the accidental long-life thought experiment hits much harder.

Edit: Upon rereading, I see I had forgotten that Eliezer's version ends with a line that includes the thrust of TheOtherDave's argument: "I think that if you took someone who was immortal, and asked them if they wanted to die for benefit X, they would say no."

Comment by JStewart on "Behind the Power Curve" by Simon Funk · 2010-12-04T18:26:57.773Z · LW · GW

I don't mean to rain on cousin_it's parade at all here, but I have to put in an additional plug for "After Life." Even if you didn't really find the blog post especially interesting, if you have any affinity for science fiction I really think "After Life" is worth a look. I recommend it with no reservations.

It's short, it's free, and it's the best exploration I've seen of some very LessWrong ideas. The premise of the story is based on recursive self-modification and uploading, and it's entertaining as well as interesting.

Comment by JStewart on Ask and Guess · 2010-12-02T00:37:53.508Z · LW · GW

> (There actually was a method for getting it, but it was an Advanced Guess Culture technique, not readily taught in one session.)

I'd love an explanation of the technique.

Comment by JStewart on Hi - I'm new here - some questions · 2010-11-14T05:10:03.331Z · LW · GW

Hello, and welcome.

You are correct in your observation that this section does not have a high rate of new posts. I'm not sure, but I think you are likely correct in your guess that a flood of new posts would not be appreciated. LessWrong doesn't have a very traditional forum structure, and I'm not sure a place exists on this site yet that quite fits your posting style.

I'm commenting here in part because that puts you in the same boat as me: my first comment on this site was the observation that the avenues of participation on LW don't seem to fit how I like to express myself, and that other potential users were probably in the same situation. I think LW doesn't lend itself to conversation or to stepwise refinement of ideas by a group, which is my best guess at how I would like to really engage with the ideas discussed here. That said, the site is changing and is itself open source, so this problem is tractable.

As to the more personal parts of your introduction, I think you sound like a great person to have a conversation with. I expect you may well find some people here who share your informational omnivorousness and your desire to think and make sense of things, and that this community will accept you and benefit from your input. The only criticism I will (hesitantly) offer is that the following excerpt is a bit worrisome:

> I also have a massive number of posts on the Internet, although many of them are beyond embarrassing. In the end, though, I only look for people who are open to anything and completely non-judgmental (although some people may look for certain "signals" when they're looking for prospective contacts, to minimize the chances of meeting a contact with which one may fear wasting time on). Basically, my ideal model (for hypothesis generation) involves this: I try to type out some hypotheses, and then post them online, in hopes that someone might critique them. Many of my hypotheses will be junk, but that's okay. As long as I can maximize the number of useful ideas that I can generate, I think I'll have done something.

My only concern is that while your goal is good, the methods perhaps leave something to be desired. It may well be the fastest way for you to learn, but putting the burden of critiquing a large flow of ideas onto others can be something of an imposition. Time certainly is a valuable resource, as you state, and remember that other people value their time too. What I hope LW can do for you is read and critique just as you wish, but also that you learn here some habits and skills of thinking that let you do more and more of this sort of critique of your own ideas as you have them. My time at LW (and OB, and many other places on the net) has been spent largely lurking, in a project of refining my own ability to reason and critique effectively and correctly, and I hope it works out that way for you too.

Comment by JStewart on If a tree falls on Sleeping Beauty... · 2010-11-12T07:35:31.784Z · LW · GW

> I was going more for the point that some ambiguous questions about probabilities are disguised less-ambiguous questions about payoffs

To provide my feedback data point in the opposite direction, I found this to be well-expressed in the OP.

Comment by JStewart on Harry Potter and the Methods of Rationality discussion thread, part 4 · 2010-10-22T00:49:43.672Z · LW · GW

I have not read the original Harry Potter series. I first learned that Quirrell was Voldemort when, after finishing the 49 chapters of MoR that were out at that point, I followed a link from LW to the collected author's notes and read those.

I think that for those who have not read the source material (though there may not be many of us), it is basically impossible to intuit that Quirrell is Voldemort from the body of the fanfic so far.

That said, I don't feel like I missed out in any way and don't see why it necessarily needs to be any more explicit until the inevitable big reveal.

Edit: I just remembered that, as you can see, my prior comment on this post was written after I read chapter 49 but before I learned that Quirrell == Voldemort.

Comment by JStewart on Harry Potter and the Methods of Rationality discussion thread, part 4 · 2010-10-10T04:46:04.000Z · LW · GW

I have a question about chapter 49 and was wondering if anyone else had a similar reaction. Assuming Quirrell is not lying/wrong, and Voldemort did kill Slytherin's Monster, my first thought was how unlikely it is that Slytherin's Monster even survived long enough to make it to 1943. Had no prior Heir of Slytherin had the same idea? Perhaps no prior Heir of Slytherin had been strong enough to defeat Slytherin's Monster? No prior Heir had been ruthless enough?

Maybe this constitutes weak evidence for the theory that Quirrell is lying.

Comment by JStewart on Open Thread June 2010, Part 2 · 2010-06-08T03:41:19.865Z · LW · GW

Isn't your point of view precisely the one the SuperHappies are coming from? Your critique of humanity seems to be the one they level when asking why, once humans achieved the necessary level of biotechnology, they did not edit their own minds. Rather than inflict disutility by punishing defection, the SuperHappy solution was to change preferences so that the cooperative attitude gives the highest utility payoff.

Comment by JStewart on Attention Lurkers: Please say hi · 2010-04-16T23:01:45.542Z · LW · GW

Reddit-style posting is basically the same format as comment threads here; it's just a little easier to see the threading. One thing that feels awkward in threaded comments is conversation, and people's attempts to converse in comment threads are probably part of why those threads balloon to the size they do. That's one area that chat/IRC can fill in well.

Another issue is that top-level posts have a feeling of permanence to them. It's like publishing something. I'd rather start with an idea and be able to discuss it and shape it. It seems like top-level posts should be exposed to feedback before being judged ready to publish. I'm not really sure what kind of structure would work for this, but if I did, I probably would have jumped into an open thread or a meta thread before now :)

Comment by JStewart on Attention Lurkers: Please say hi · 2010-04-16T21:20:56.682Z · LW · GW

Hi.

edit: to add some potentially useful information, I think the biggest reason I haven't participated is that I feel uncomfortable with the existing ways of contributing (solely, as I understand it, top-level posts and comments on those posts). I know there has been discussion on LW before about potentially adding forums, chat, or other methods of conversing. Consider me a data point in favor of opening up more channels of communication. In my case I really think having an LW IRC would help.