Comments

Comment by BaconServ on Open thread, 7-14 July 2014 · 2014-07-09T18:43:11.784Z · LW · GW

Do you find yourself refusing to yield in the latter case but not the former? Or is this observation of mutually unrelenting parties a purely external one?

If there is a bug in your behavior (inconsistencies and double standards), then some introspection should yield potential explanations.

Comment by BaconServ on Open thread, 7-14 July 2014 · 2014-07-09T18:32:40.227Z · LW · GW

Where to start depends highly on where you are now. Would you consider yourself socially average? Which culture are you from and what context/situation are you most immediately seeking to optimize? Is this for your occupation? Want more friends?

Comment by BaconServ on I Will Pay $500 To Anyone Who Can Convince Me To Cancel My Cryonics Subscription · 2014-01-12T21:46:28.518Z · LW · GW

I'm assuming you meant for the comment section to be used to convince you; not necessarily because you meant it, but because making that assumption means not willfully acting against your wishes on what would normally be a trivial issue about which you hold no real preference. Maybe it would be better to do it with private messages, maybe not. There's a general ambient utility to just making the argument here, so there shouldn't be any fault in doing so.

Since this is a real-world issue rather than a simple matter of crunching numbers, what you're really asking for here isn't merely to be convinced, but to be happy with whatever decision you make. Ten months' worth of payments for the relief of not having to pay an entirely useless cost every month, plus whatever more immediate utility accompanies that "extra" $50/month. If $50 doesn't buy much immediate utility for you, then a compelling argument needs to encompass in-depth discussion of trivial things. It would mean having to know precise information about what you actually value, or at the very least an accurate heuristic for how you feel about trivial decisions. As it stands, you feel the $50/month investment is worth it for one very narrow type of investment: cryonics.

This is simply restating the knowns in a particular format, but it emphasizes what the core argument needs to be here: either that the investment harbors even less utility than $50/month can buy, or that there are clearly superior investments you can make at the same price.

Awareness of just how deeply confirmation bias is entrenched in the brain (despite any tactics you might suspect would uproot it) should readily show that convincing you there are better investments to make (and therefore to stop making this particular one) is the route most likely to produce payment. Of course, this undermines the nature of the challenge: a reason not to invest at all.

Comment by BaconServ on MIRI strategy · 2013-10-30T00:15:45.997Z · LW · GW

In other words, all AGI researchers are already well aware of this problem and take precautions according to their best understanding?

Comment by BaconServ on MIRI strategy · 2013-10-29T23:41:02.049Z · LW · GW

Is there something wrong with the state of the climate change debate in the world today? Yes, it's hotly debated by millions of people, a super-majority of them entirely unqualified to even have an opinion, but is this a bad thing? Would less public awareness of the issue of climate change have been better? What differences would there be? Would organizations be investing in "green" and alternative energy if not for the publicity surrounding climate change?

It's easy to look back after the fact and say, "The market handled it!" But the truth is that the publicity and the corresponding opinions of thousands of entrepreneurs are part of that market.

Looking at the two scenarios:

  1. MIRI's warning of uFAI is popularized.
  2. MIRI's warning of uFAI continues in obscurity.

The latter just seems a ton less likely to mitigate uFAI risks than the former.

Comment by BaconServ on MIRI strategy · 2013-10-29T23:21:02.450Z · LW · GW

It could be useful to attach a note to any given pamphlet along the lines of, "If you didn't like or agree with the contents of this pamphlet, please tell us why at ..."

Personally, I'd find it easier to just look at the contents of the pamphlet with the understanding that 99% of people will ignore it, and see if a second draft has the same flaws.

Comment by BaconServ on MIRI strategy · 2013-10-29T01:08:11.434Z · LW · GW

That would probably upset many existing Christians. Clearly Jesus' second coming is in AI form.

Comment by BaconServ on MIRI strategy · 2013-10-29T00:53:46.134Z · LW · GW

How effective do you consider chain letters to be at stopping NSA spying? Do you think they will be more effective at stopping them from developing the AIs that analyse that data?

NSA spying isn't a chain letter topic that is likely to succeed, no. A strong AI chain letter that makes itself sound like it's just against NSA spying doesn't seem like an effective approach. The point of a chain letter about strong AI is that all such projects are a danger. If people conclude that the NSA is likely to develop an AI while they are aware of the danger of uFAI, then they would write letters or seek to start a movement to ensure that any AI the NSA—or any government organization, for that matter—develops is made friendly to the best of its builders' abilities. The NSA doesn't need to be mentioned in the uFAI chain mail in order for any NSA AI projects to be forced to comply with friendliness principles.

If you want to do something you can, earn to give and give money to MIRI.

That is not a valid path if MIRI is willfully ignoring valid solutions.

You don't get points for pressuring people to address arguments. That doesn't prevent an UFAI from killing you.

It does if the people addressing those arguments learn/accept the danger of unfriendliness in being pressured to do so.

We probably don't have to solve it in the next 5 years.

Five years may be the time it takes for the chain mail to effectively popularize the issue to the point where the pressure is on to ensure friendliness, whether we solve it decades from then or not. What is your estimate for when uFAI will be created if MIRI's warning isn't properly heeded?

Comment by BaconServ on MIRI strategy · 2013-10-28T21:47:51.225Z · LW · GW

Politics is the mindkiller.

Really, it's not. Tons of people discuss politics without getting their briefs in a bunch about it. It's only people who consider themselves highly intelligent that get mind-killed by it. The tendency to dismiss your opponent out of hand as unintelligent isn't that common elsewhere. People, by and large, are willing to seriously debate political issues. "Politics is the mind-killer" is the result of some pretty severe selection bias.

Even ignoring that, you've only stated that we should do our best to ensure it does not become a hot political issue. Widespread attention to the idea is still useful; if we can't get the concept to penetrate the parts of academia where AI is likely to be developed, we're not yet mitigating the threat. A thousand angry letters demanding that this research "stop at once" or "address the issue of friendliness" isn't something that is easy to ignore—no matter how bad you think the arguments for uFAI are.

You're not the only one expressing hesitation at the idea of widespread acceptance of uFAI risk, but unless you can really provide arguments for exactly what negative effects it is very likely to have, some of us are about ready to start a chain mail of our own volition. Your hesitation is understandable, but we need to do something to mitigate the risk here, or the risk just remains unmitigated and all we did was talk about it. People researching AI who've argued with Yudkowsky before and failed to be convinced might begrudge that Yudkowsky's argument has gained widespread attention, but if it pressures them to properly address Yudkowsky's arguments, then it has legitimately helped.

Comment by BaconServ on MIRI strategy · 2013-10-28T20:53:46.012Z · LW · GW

Letting plants grow their own pesticides for killing of things that eat the plants sounds to me like a bad strategy if you want healthy food.

Is there reason to believe someone in the field of genetic engineering would make such a mistake? Shouldn't someone in the field be more aware of that and other potential dangers, despite the GE FUD they've no doubt encountered outside of academia? It seems like the FUD should just be motivating them to understand the risks even more—if for no other reason than simply to correct people's misconceptions on the issue.

Your reasoning for why the "bad" publicity would have severe (or any notable) repercussions isn't apparent.

If you have a lot of people making bad arguments for why UFAI is a danger, smart MIT people might just say, hey those people are wrong I'm smart enough to program an AGI that does what I want.

This just doesn't seem very realistic when you consider all the variables.

Comment by BaconServ on MIRI strategy · 2013-10-28T20:39:21.481Z · LW · GW

While not doubting the accuracy of the assertion, why precisely do you believe Kurzweil isn't taken seriously anymore, and in what specific ways is this a bad thing for him/his goals/the effect it has on society?

Comment by BaconServ on MIRI strategy · 2013-10-28T19:37:04.223Z · LW · GW

Right, but what damage is really being done to GE? Does all the FUD stop the people who go into the science from understanding the dangers? If uFAI is popularized, academia will pretty much be forced to seriously address the issue. Ideally, this is something we'll only need to do once; after it's known and taken seriously, the people who work on AI will be under intense pressure to ensure they're avoiding the dangers here.

Google probably already has an AI (and AI-risk) team internally that they've simply had no reason to publicize. If uFAI becomes something people widely worry about, you can bet they'd make it known they were taking their own precautions.

Comment by BaconServ on MIRI strategy · 2013-10-28T19:22:59.776Z · LW · GW

Ask all of MIRI’s donors, all LW readers, HPMOR subscribers, friends and family etc, to forward that one document to their friends.

There has got to be enough writing by now that an effective chain mail can be written.

ETA: The chain mail suggestion isn't knocked down in Luke's comment. If it's not relevant or worthy of acknowledging, please explain why.

ETA2: As annoying as some chain mail might be, it does work because it does get around. It can be a very effective method of spreading an idea.

Comment by BaconServ on MIRI strategy · 2013-10-28T19:12:11.106Z · LW · GW

Is "bad publicity" worse than "good publicity" here? If strong AI became a hot political topic, it would raise awareness considerably. The fiction surrounding strong AI should bias the population towards understanding it as a legitimate threat. Each political party in turn will have their own agenda, trying to attach whatever connotations they want to the issue, but if the public at large started really worrying about uFAI, that's kind of the goal here.

Comment by BaconServ on Less Wrong’s political bias · 2013-10-27T22:27:50.036Z · LW · GW

How specifically? Easy. Because LessWrong is highly dismissive, and because I've been heavily signalling that I don't have any actual arguments or criticisms. I do, obviously, but I've been signalling that that's just a bluff on my part, up to and including this sentence. Nobody's supposed to read this and think, "You know, he might actually have something that he's not sharing." Frankly, I'm surprised that, with all the attention this article got, I haven't been downvoted a hell of a lot more. I'm not sure where I messed up that LessWrong isn't hammering me and is actually bothering to ask for specifics, but you're right; it doesn't fit the pattern I've seen prior to this thread.

I'm not yet sure where the limits of LessWrong's patience lie, but I've come too far to stop trying to figure that out now.

Comment by BaconServ on Less Wrong’s political bias · 2013-10-27T22:14:57.385Z · LW · GW

People also wrote unrealistic books about magicians flying through the air and scrying on each other with crystal balls. Yet we have planes and webcams.

Naturally, some of the ideas fiction holds are feasible. In order for your analogy to apply, however, we'd need a comprehensive run-down of how many and which fictional concepts have become feasible to date. I'd love to see some hard analysis across the span of human history. While I believe there is merit in nano-scale technology, I'm not holding my breath for femtoengineering. Nevertheless, if such things were as readily predictable as people seem to think, you have to ask why we don't have the technology already. The answer is that actually translating our ideas into physical reality is non-trivial, and by direct consequence, potentially non-viable.

The human mind is finite, and there are infinitely many possible concepts.

I need backing on both of these points. As far as I know, there isn't enough verified neuroscience to determine whether our brains are conceptually limited in any way, primarily because we don't actually know how abstract mental concepts map onto physical neurons. Setting aside the fact that (contrary to memetic citation) the adult brain does grow new neural cells and repair itself, even if the number of neurons is finite, the number of potential connections between them is astronomical. We simply don't know the maximum conceptual complexity of the human brain.

As for there being infinitely many concepts, "flying car" isn't terribly more complicated than "car" and "flying." Even if something in the far future is given a name other than "car," we can still grasp the concept of "transportation device," paired with any number of accessory concepts like "cup holder," "flies," "transforms," "teleports," and so on. Maybe it's closer to a "suit" than anything we would currently call a "car;" some sort of "jetpack" or other. I'd need an expansion on "concept" before you could effectively communicate that concept-space is infinite. Countably infinite or uncountably infinite? All the formal math I'm aware of indicates that things like conceptual language are incomputable or give rise to paradoxes or some other such problem that would make "infinite" simply be inapplicable/nonsensical.

Comment by BaconServ on Less Wrong’s political bias · 2013-10-27T21:00:48.414Z · LW · GW

This doesn't actually counter my argument, for two main reasons:

  1. That wasn't my argument.
  2. That doesn't counter anything.

Please don't bother replying to me unless you're going to actually explain something. Anything else is useless and you know it. I want to know how you justify to yourself that LessWrong is anything but childish. If you're not willing to explain that, I'm not interested.

What, do you just ignore it?

Comment by BaconServ on Less Wrong’s political bias · 2013-10-27T20:13:30.088Z · LW · GW

What, and you just ignore it?

No, I suppose you'll need a fuller description to see why the similarity is relevant.

  1. LessWrong is sci-fi. Check what's popular. Superintelligent AI, space travel, suspended animation, hyper-advanced nanotech...
  2. These concepts straight out of sci-fi have next to zero basis. Who is to say there even are concepts that the human mind simply can't grasp? I can't visualize in n-dimensional space, but I can certainly understand the concept. Grey goo? Sounds plausible, but then again, there is zero evidence that physics can create anything like stable nanites. How fragile will the molecular bonds be? Are generation ships feasible? Is there some way to warp space to go fast enough that you don't need an entire ecosystem on board? If complex information processing nanites aren't feasible, is reanimation? These concepts aren't new, they've been around for ages. It's Magic 2.0.
  3. If it's not about evidence, what is it about? I'm not denying any of these possibilities, but aside from being fun ideas, we are nowhere near close to proving them legitimate. It's not something people are believing in because "it only makes sense." It's fantasy at its base, and if it turns out to be halfway possible, great. What if it doesn't? Is there going to be some point in the future where LessWrong lets go of these childish ideas of simulated worlds and supertechnological abilities? 100 years from now, if we don't have AI and utility fog, is LessWrong going to give up these ideas? No. Because that just means that we're closer to finally realizing the technology! Grow up already. This stuff isn't reasonable, it's just plausible, and our predictions are nothing more than mere predictions. LessWrong believes this stuff because LessWrong wants to believe in this stuff. At this moment in time, it is pure fiction.
  4. If it's not rationa—No, you've stopped following along by now. It's not enough to point out that the ideas are pure fiction that humanity has dreamed about for ages. I can't make an argument within the context that it's irrational because you've heard it all before. What, do you just ignore it? Do you have an actual counter-point? Do you just shrug it off because "it's obvious" and you don't like the implications?

Seriously. Grow up. If there's a reason for me to think LessWrong isn't filled with children who like to believe in Magic 2.0, I'm certainly not seeing it.

Comment by BaconServ on Methods of Introspection: Brainstorming and Discussion · 2013-10-27T19:13:52.145Z · LW · GW

That's true. The process does rely on finding a solution to the worst case scenario. If you're going to be crippled by fear or anxiety, it's probably a very bad practice to emulate.

Comment by BaconServ on Only You Can Prevent Your Mind From Getting Killed By Politics · 2013-10-27T07:52:29.609Z · LW · GW

Christ is it hard to stop constantly refreshing here and ignore what I know will be a hot thread.

I've voted on the article, I've read a few comments, cast a few votes, made a few replies myself. I'm precommitting to never returning to this thread and going to bed immediately. If anyone catches me commenting here after the day of this comment, please downvote it.

Damn I hope nobody replies to my comments...

Comment by BaconServ on Only You Can Prevent Your Mind From Getting Killed By Politics · 2013-10-27T07:45:19.739Z · LW · GW

Thank you. I no longer suspect you of being mind-killed by "politics is the mind-killer." Retracted.

Maybe I'm being too hasty trying to pinpoint people being mind-killed here, but it's hard to ignore that it's happening. I think I probably need to take my own advice right about now if I'm trying to justify my jumping to conclusions with statements like, "It's hard to ignore that it's happening."

I was planning to make a top-level comment here to the effect of, "INB4 obvious mind-kill," but I think I just realized why the thoughts that thought that up were flawed at a basic level. Still, I think someone should point out that the comments here are barely touching the content of this article, which is odd for LessWrong.

Comment by BaconServ on Only You Can Prevent Your Mind From Getting Killed By Politics · 2013-10-27T07:35:19.349Z · LW · GW

We can only go a step at a time. The other recent post about politics in Discussion was rife with obvious mind-kill. I'm seeing this thread filling up with it too. I'd advocate downvoting of obvious mind-kill, but it's probably not very obvious at all and would just result in mind-killed people voting politically without giving the slightest measure of useful feedback. I'm really at a loss for how to get over the mind-kill of politics and the highly paired autocontrarian mind-kill of "politics is the mind-killer" other than just telling people to shut the fuck up, stop reading comments, stop voting, go lie down, and shut the fuck up.

Comment by BaconServ on Only You Can Prevent Your Mind From Getting Killed By Politics · 2013-10-27T07:18:39.545Z · LW · GW

So because you already have the tool, nobody else needs to be told about it? I feel like I'm strawmanning here, but I'm not sure what your point is if not, "I didn't need to read this."

Comment by BaconServ on Only You Can Prevent Your Mind From Getting Killed By Politics · 2013-10-27T07:15:21.538Z · LW · GW

Do you have an actual complaint here, or are you disagreeing for the sake of disagreeing?

Because it sounds a damn lot like you're upset about something but know better than to say what you actually think, so you're opting to make sophomoric objections instead.

Comment by BaconServ on Less Wrong’s political bias · 2013-10-27T06:30:34.610Z · LW · GW

I don't really care how special you think you are.

See, that's the kind of stance I can appreciate. Straight to the point without any wasted energy. That's not the majority response LessWrong gives, though. If people really wanted me to post about this, as the upvotes on the posts urging me to do so would suggest, why is each and every one of my posts getting downvoted? How am I supposed to actually do what people are suggesting when they are actively preventing me from doing so?

...Or is the average voter simply not cognizant enough to realize this...?

Worst effect of having sub-zero karma? Having to wait ten minutes between comments.

Wow, what an interesting perspective. Never heard that before.

Not sure if sarcasm or...

Comment by BaconServ on Less Wrong’s political bias · 2013-10-27T03:01:44.963Z · LW · GW

Why are you convinced I haven't posted them explicitly? Or otherwise tested the reactions of LessWrongers to my ideas? Are you under the impression that they were going to be recognized as worth thinking about and that they would be brought to your personal attention?

Let's say I actually possess ideas with future light cones on the order of strong AI. Do you earnestly expect me to honestly send that signal and bring a ton of attention to myself? In a world of fools that want nothing more than to believe in divinity? (Beliefs about strong AI are pretty qualitatively similar to religious ideas of god, up to and including, "Works in mysterious ways that we can't hope to fathom.")

I have every reason not to share my thoughts and every reason to play coy and try to get LessWrong thinking for itself. I'm getting pretty damn close to jumping ship and watching the aftermath here as it is.

Comment by BaconServ on What should normal people do? · 2013-10-27T02:51:09.341Z · LW · GW

These ideas are trivial. When I say "accessible," I mean in terms of the people educated in the world of the past who systematically had their ideas shut down. Anyone who has been able to control their education from an early age is a member of the Singularity already; their genius—the genius that each person possesses—has simply yet to fully shatter the stale ideas of a generation or two of fools who thought they knew much about anything. You really don't need to waste your time trying to get old-world rationalists to recognize the immense quality of this old-world content.

I apologize that this will come across as an extraordinary claim, but I've already grown up in the Singularity and derived 99% of the compelling content of LessWrong—sequences and Yudkowsky's thoughts included—by the age of 20. I'm gonna get downvoted to hell saying this, but really I'm just letting you know this so you don't get confused by how amazing unrestricted human curiosity is. Basically, I'm only saying this because I want to see your reaction in ten years.

Comment by BaconServ on Less Wrong’s political bias · 2013-10-27T02:37:41.742Z · LW · GW

Certainly; I wouldn't expect it to.

Comment by BaconServ on Less Wrong’s political bias · 2013-10-27T02:20:38.457Z · LW · GW

Hah. I like and appreciate the clarity of options here. I'll attempt to explain.

A lot of what we know about social situations is something we're directly told: "Elbows off the table. Close your mouth when you chew. Burping is rude; others will become offended." Other parts are more biologically inherent; murder isn't likely to make you popular at a party. (At least not the positive kind of popularity...) What we're discussing here lies somewhere between these two borders. We'll consider aversion to murderers to be the least biased, having very little bias to it and being more a rational reaction, and we'll consider asserted matters of "manners" to be maximally biased, having next to nothing to do with rationality and everything to do with believing whatever you're told.

It's a fuzzy subject without fully understanding psychology, but for the most part these decisions about social interaction are made consciously. In the presence of a biased individual, for whatever reason and whatever cause, if you challenge them on their strong opinions you're liable to start an argument. There are productive arguments and unproductive arguments alike, but if the dinner table is terribly quiet already and an argument breaks out between two members, everyone else has the option of "politely" letting the argument run its course, or intervening to stop this silly discussion that everyone's heard time and time again and is tired of hearing. Knowing all too well how these kinds of things start, proceed, and stop, the most polite thing you can do to avoid disrupting the pleasant atmosphere that everyone is pleased with is simply not to indulge the argument. Find another time, another place. Do it in private. Do whatever. Just not now at the dinner table, while everyone's trying to have a peaceful meal.

There's an intense meme among rationalists that whenever two rational agents disagree, they must perform a grand battle. This is just not true. There are many, many opportunities in human interaction to solve the same problem. What you find is that people never work up the courage to do it ever, because of how "awkward" it would be, or any number of other excuses. "What if s/he rejects me? I'll be devastated!" Intelligent agents are something to be afraid of, especially when their reactions control your feelings.

The courtesy isn't so much for the opiner as it is for everyone else present. It is a result of bias, but not on the part of the people signaling silence; they're just trying to keep things pleasant and peaceful for everyone.

Of course my description here could be wrong, but it's not. The easy way to determine this is to ask each person in turn why they chose to be silent. Pretty much all of them are going to recite some subset of my assessment. Some people may have acquired that manner from being instructed to hold it, while others derived it from experience. The former case can be identified by naive confusion: "Mommy, why didn't anyone tell him he was being racist?" You'll understand when you're older, because people periodically fail to recognize the usefulness of civility. You'll see it eventually, possibly coming from the people who were surrounded by mannerly people to the degree that they never were able to acquire the experience that got everyone else to adopt that manner. Even if it makes sense rationally, it could be the result of bias, but it can be hard to convince a child of complex things like that, so the bias doesn't play a role beyond that person eventually finding that the things they were told as a child, and distinctly remember never understanding while growing up, did actually make sense in reality.

You can't fault the child for being ignorant, but you can fault them for not recognizing the truth of mother's words when the situation comes up that's supposed to show them why the wisdom was correct. If they don't learn it from experience like everyone else does, something went wrong. Possibly they overcompensated when they rejected Christianity and thought that it was a total fluke that their parents were competent enough to take care of a child. All those things that didn't have to do with Christianity? Nope. Biased by Christianity. Out the window they go, along with the bathwater. When grandma says something racist and everyone goes silent, that is not tacit approval, that is polite disapproval. To not recognize something so obvious is going to be the result of some manner of cognitive bias, whether it's a mindset of being the victim, white knighting on Tumblr's behalf, an extreme bias against Christianity, etc.: whatever it is that makes you think your position, which contradicts wisdom handed down and independently verified by generation after generation of highly intelligent agents capable of abstract reasoning, is the one that accords with rationality.

Our ancestors didn't derive quantum mechanics, no. That doesn't make them unintelligent by any stretch of the imagination. When it came to interacting with other intelligent agents, we had intense pressure on us to optimize, and we did. Only now are we formally recognizing the principles that underlie deep generational wisdom.

So to answer concisely:

With the caveat that "treating silence as a way of expressing that the opiner deserves courtesy" is the result of bias, but that the bias originates in the opiner, not in the analyzer of the silence, and speaking strictly about the analysis of silence in modern social settings...

Do you in fact believe that?

Yes.

Can you provide any justification for believing it?

I can cite a pretty large chunk of the history of civilized humanity, yes.

The confusion arises from your failing to see that decision theory is embedded more deeply in our psychology than our conscious mind—primitive decision theory (everything we've formally derived about decision theory up to this point) is embedded in our evolutionary psychology. There's a ton more nuance to human interaction than social justice's founding premise of, "Words hurt!!! (What are sticks and stones?)"

Comment by BaconServ on Less Wrong’s political bias · 2013-10-26T22:59:51.562Z · LW · GW

More or less, yeah. The totaled deltas weren't of the necessary order of magnitude in my approximation. It's not that many pages if you set the relevant preference to 25 per page and have iterated all the way back a couple times before.

Comment by BaconServ on Less Wrong’s political bias · 2013-10-26T22:49:21.438Z · LW · GW

That's odd and catches me completely off guard. I wouldn't expect someone who seems to be deeply inside the hive both to cognize my stance as well as you have and to judge that my heretofore unstated arguments might be worth hearing. Your submission history reflects what I assume: that you are on the outer edges of the hive despite an apparently deep investment.

With the forewarning that my ideas may well be hard to rid yourself of and that you might lack the communication skills to adequately convey the ideas to your peers, are you willing to accept the consequences of being rejected by the immune system? You're risking becoming a "carrier" of the ideas here.

Comment by BaconServ on Less Wrong’s political bias · 2013-10-26T22:32:55.767Z · LW · GW

I'd need an expansion on "bias" to discuss this with any useful accuracy. Is ignorance a state of "bias" in the presence of abundant information contradicting the naive reasoning from ignorance? Please let me know if my stance becomes clearer when you mentally disambiguate "bias."

Comment by BaconServ on Less Wrong’s political bias · 2013-10-26T22:22:48.683Z · LW · GW

I iterated through my entire comment history to find the source of an immediate -15 spike in karma; I couldn't find anything. My main hypothesis was moderator reprimand until I put the pieces together on the cost of replying to downvoted comments. Further analysis today seems to confirm my suspicion. I'm unsure whether the retroactive quality of it is immediate or on a timer, but I don't see any reason it wouldn't be immediate. Feel free to test on me; I think the voting has stabilized.
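To make the hypothesis concrete, here is a minimal sketch of the mechanism I'm describing (in Python; the threshold value, the flat five-karma penalty, and all names are my own assumptions for illustration, not LessWrong's actual implementation):

```python
# Hypothesized rule: each reply sitting under a comment that has fallen below
# the display threshold costs its author a flat karma penalty, applied
# retroactively once the ancestor drops below the threshold.

DISPLAY_THRESHOLD = -3   # assumed threshold; the real value may differ
KARMA_PENALTY = 5        # assumed flat cost per reply under a hidden comment

def retroactive_penalty(replies, my_name):
    """Total karma lost under the hypothesized rule.

    `replies` is a list of (author, parent_score) pairs; a reply is penalized
    whenever its parent's score sits below the display threshold.
    """
    return sum(
        KARMA_PENALTY
        for author, parent_score in replies
        if author == my_name and parent_score < DISPLAY_THRESHOLD
    )

# Example: three of my replies sit under a comment voted down to -4, which
# would account for an otherwise unexplained -15 swing in my karma.
history = [
    ("BaconServ", -4),
    ("BaconServ", -4),
    ("BaconServ", -4),
    ("SomeoneElse", -4),
]
print(retroactive_penalty(history, "BaconServ"))  # 15
```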

Comment by BaconServ on Less Wrong’s political bias · 2013-10-26T22:11:08.739Z · LW · GW

Everything being polite and rational is informational; the point is to demonstrate that those qualities are not evidence of the hive mind quality. Something else is, which I clearly identify. Incidentally, though I didn't realize it at the time, I wasn't actually advocating dismantling it, or that it was a bad thing to have at all.

I mean, it's not like none of us ever goes beyond the walls of LessWrong.

That's the perception that LessWrong would benefit from correcting; it is as if LessWrongers never go outside the walls of LessWrong. Obviously you physically do, but there are strict procedures and social processes in place that prevent planting outside seeds in the fertile soil within the walls. When you come inside the walls, you quarantine yourself to only those ideas which LessWrong already accepts as being discussable. The article you link is three years old; what has happened in that time? If it was so well-received, where are the results? There is learning happening that is advancing human rationality far more qualitatively than LessWrong will publicly acknowledge. It's in a stalemate with itself over accomplishing its own mission statement: a deadlock of ideas enforced by a self-reinforcing social dynamic against ideas that are too far outside the very narrow norm.

Insofar as LessWrong is a hive mind, that same mind is effectively afraid of thinking, and is doing everything it can to avoid doing so.

Comment by BaconServ on Time-logging programs and/or spreadsheets · 2013-10-26T19:41:20.438Z · LW · GW

I see. I'll have to look into it some time.

Comment by BaconServ on Less Wrong’s political bias · 2013-10-26T09:00:15.627Z · LW · GW

Actually, I think I found the cause: commenting on comments below the display threshold costs five karma. I believe this might actually be retroactive, so that downvoting a comment below the display threshold takes five karma from each user possessing a comment under it.

Comment by BaconServ on Time-logging programs and/or spreadsheets · 2013-10-26T08:45:34.107Z · LW · GW

As a baseline, I need a program that will give me more information than simply being slightly more aware of my actions does. I want something that will give me surprising information I wouldn't have noticed otherwise. This is necessarily non-trivial, especially given my knack for metacognition.

Comment by BaconServ on Methods of Introspection: Brainstorming and Discussion · 2013-10-26T08:35:08.831Z · LW · GW

A habit I find my mind practicing incredibly often is simulating the worst case scenario. Obviously the worst case scenario for any human interaction is that the other person will become murderously enraged and do everything in their power to destroy you. This is generally safe to dismiss as nonsense/completely paranoid. After numerous iterations of this, you start ignoring the unrealistic worst-possible scenarios (which often make so little sense there is nothing you can do to react to them) and get down to the realistic worst case scenario. Oftentimes in my youth this meant thinking about the reaction to my saying exactly what I felt and thought. The reactions I predicted in response were inaccurate to the point of caricature, but I often found that, even in the worst case scenario that made half sense, there was still a path forward. It wasn't the end of the world or some irreversible horror that would scar me forever; it was just an event where emotions got heated. That's generally it. There's little way to create a lasting problem without planning to create such a thing.

Obviously this doesn't apply to supernatural actions on your part (creating strong AI is, in many ways, a supernatural scenario), but since those lie outside the realm of common logic, you have to handle them specially. Interestingly, when I was realistic about it, people didn't react too badly in my imagined scenarios where I suddenly performed some intensely supernatural act like telekinesis. Sure, it's surprising, and they'll want you to help them move, but there's nothing they can really do if you insist you want to keep it a secret. They pretty much have to respect your right to self-determination. Of course they could always go supervillain on you like in the comics, but that's not a terribly realistic worst-case scenario even if it were strictly possible.

Of course it sounds like meaningless fiction at that point, but it serves to illustrate just how bad the worst case scenario is; I've found it is very hard to pretend the worst case is immensely terrible when you think about it realistically.

Comment by BaconServ on Methods of Introspection: Brainstorming and Discussion · 2013-10-26T08:16:25.278Z · LW · GW

I've noticed that I crystallize discrete and effective sentences like that a lot in response to talking to others. Something about the unique way they need things phrased for them to understand well results in some compelling crystallized wisdoms that I simply would not have figured out nearly as precisely if I hadn't explained my thoughts to them.

Comment by BaconServ on Less Wrong’s political bias · 2013-10-26T07:46:42.210Z · LW · GW

A lot can come across differently when you're trapped behind an inescapable cognitive bias.

ETA: I should probably be more clear about the main implication I intend here: Convincing yourself that you are the victim all the time isn't going to improve your situation in any way. I could make an argument that even the sympathy one might get out of such a method of thinking/acting is negatively useful, but that might be pressing the matter unfairly.

Comment by BaconServ on Less Wrong’s political bias · 2013-10-26T07:31:49.070Z · LW · GW

Let me put it this way: The effect is big enough that I have no qualms calling it a blanket inability. This should be implied by the rules of common speech, but people who consider themselves intelligent find it easier to believe that such confidence is evidence of irrationality.

What's interesting is that you think such an article can actually be written. (Let's ignore that I earned sub-zero karma with my posts in this thread today.)

Consider the premise:

LessWrong doesn't think about what's written beyond what's written.

(Obviously there are a few stray thoughts that you'll find in the comments, but they are non-useful and do not generally proliferate into more descriptive articles.)

Let's be clear: the purpose of such an article would be to get LessWrong to think about what's written beyond what is written. This is necessary to make LessWrong useful beyond any other internet forum. Such an article would be advocating independent and bold thinking, and then voicing any compelling realizations back to LessWrong to spark further thought by others. A few short passes of this process and you could see some pretty impressive brainstorming—all while maintaining LessWrong's standards of rationality. Recall that thought being a very cheap and very effective resource is what makes machine intelligence so formidable. If the potential for communal superintelligence isn't sufficient payoff here, nothing will be.

Keep in mind that this is only possible insofar as a significant portion of LessWrong is willing to think beyond what is written.

If we suppose that this is actually possible, that superintelligent-quality payoffs are available here with only slight optimization of LessWrong, then why isn't LessWrong already trying to do this? Why weren't they trying years ago? Why weren't they trying when That Alien Message was published? You might want to say that the supposing is what's causing the apparent question; that if LessWrong could really trivially evolve into such a mechanism, it most definitely would be doing so, and that the reason we don't see it doing this is that many consider this to be irrational and not worth trying for.

Okay.

Then what is the point of thinking beyond what's written?

If there aren't significant payoffs to self-realizations that increase rationality substantially, then what is the point? Why be LessWrong? Why bother coming here? Why bother putting in all this effort if you're only going to end up performing marginally better? I can already hear half the readers thinking, "But marginally better performance can have significant payoffs!" Great, then that supports my argument that LessWrong could benefit tremendously from very minor optimization towards thought sharing. But that's not what I was saying. I was saying: after all the payoffs are calculated, if they aren't going to have been any more than marginally better even with intense increases in rationality, then what is the point? Are we just here to advertise the uFAI pseudo-hypothesis? (Not being willing to conduct the experiment makes it an unscientific hypothesis, regardless of how reasonable it is not to conduct the experiment.) If so, we could do a lot better by leaving people irrational as they are and spreading classic FUD on the matter. Write a few compelling stories that freak everyone out—even intelligent people.

That's not what LessWrong is. Even if that was what Yudkowsky wanted out of it in the end, that's not what LessWrong is. If that were all LessWrong was, there wouldn't be nearly as many users as there are. I recall numerous times Yudkowsky himself stated that in order to make LessWrong grow, he would need to provide something legitimate beyond his own ulterior motives. By Yudkowsky's own assertion, LessWrong is more than FAI propaganda.

LessWrong is what it states on the front page. I am not here writing this for my own hubris. (The comments I write under that premise sound vastly different.) I am writing this for one single purpose. If I can demonstrate to you that such an article and criticism cannot currently be written, that there is no sequence of words that will provoke a "thinking beyond what's written" response in a significant portion of LessWrongers, then you will have to acknowledge that there is a significant resource here that remains significantly underutilized. If I can't make that argument, I have to keep trying with others, waiting for someone to recognize that there is no immediate path to a LessWrong awakening.

I've left holes in my argument. Mostly because I'm tired and want to go to bed, but there's nothing stopping me from simply not sending this and waiting until tomorrow. Sleepiness is not an excuse or a reason here. If I were more awake, I'd try writing a more optimal argument instead of stream-of-consciousness. But I don't need to. I'm not just writing this to convince you of an argument. I'm writing this as a test, to see if you can accept (purely on principle) that thought is inherently useful. I'm attempting to convince you not of my argument, but to use your own ability to reason to derive your own stance. I'm not asking you to agree and I'd prefer it if you didn't. What I want is your thoughts on the matter. I don't want knee-jerk sophomoric rejections to obvious holes that have nothing to do with my core argument. I don't want to be told I haven't thought about this enough. I don't want to be told I need to demonstrate an actual method. I don't want you to repeat what all other LessWrongers have told me after they summarily failed to grasp the core of my argument. The holes I leave open are intentional. They are tripholes for sophomores. They are meant to weed out impatient fools, even if it means getting downvoted. It means wasting less of my time on people who are skilled at pretending they're actually listening to my argument.

LessWrong, in its current state, is beneath me. It performs marginally better than your average internet forum. There are non-average forums that perform significantly better than LessWrong in terms of not only advancing rationality, but just about everything. There is nothing that makes LessWrong special aside from the front-page potential to form a community whose operations represent a superintelligent process.

I've been slowly giving out slightly more detailed explanations of this here and there for the past month or so. I've left fewer holes here than anywhere else I've made similar arguments. I have put the idea so damn close to the finish line for you that for you to not spend two minutes reflecting on your own, past what's written here, indicates to me exactly how strong the cognitive biases are that prevent LessWrong from recursive self-improvement.

Even in the individuals who signal being the most open-minded and willing to hear my argument.

Comment by BaconServ on Less Wrong’s political bias · 2013-10-26T06:36:09.103Z · LW · GW

I'm not sure how to act on this information or the corresponding downvoting. Is there something I could have done to make it more interesting? I'd really appreciate knowing.

Comment by BaconServ on What should normal people do? · 2013-10-26T04:17:06.542Z · LW · GW

A good example would be any of the articles about identity.

It comes down to a question of what frequency of powerful realizations individual rationalists are having that make their way back to LessWrong. I'm estimating it's high, but I can easily re-assess my data under the assumption that I'm only seeing a small fraction of the realizations individual rationalists are having.

Comment by BaconServ on Less Wrong’s political bias · 2013-10-26T03:38:53.678Z · LW · GW

Oh, "troll" is a very easy perception to overcome, especially in this context. Don't worry about how I'll be perceived beyond delayed participation in making posts. There is much utility in negative response. In a day in which I've lost a couple dozen karma, I've learned a lot about LessWrong's perception. I suspect there is a user or two engaging in political voting against my comments, possibly in response to my referencing the concept in one of my comments. Something like a grudge is a thing I can utilize heavily.

Comment by BaconServ on Less Wrong’s political bias · 2013-10-26T03:16:01.477Z · LW · GW

I don't consider the comment section useful or relevant in any way. I can see voting on articles being useful, with articles scoring high enough being shifted into Discussion automatically. You could even have a second tier of voting for the votes a post gets once it has passed the threshold into Main.

The main problem with karma sorting is that the people who actually control things are the ones who read through all content indiscriminately. Either all of LessWrong does this, making karma pointless, or a sufficiently dedicated agent could effectively control LessWrong's perception of how other rationalists feel.

I'm sorting by Controversial for this thread to see what LessWrong is actually split about.

Comment by BaconServ on Less Wrong’s political bias · 2013-10-26T02:54:21.878Z · LW · GW

I'm really at a loss for reasons as to why this is being downvoted. Would anyone like to help me understand what's so off-putting here?

Comment by BaconServ on Less Wrong’s political bias · 2013-10-26T02:47:54.969Z · LW · GW

Guess not. Would have liked to see inferential silence avoided here.

Comment by BaconServ on What should normal people do? · 2013-10-26T02:44:04.064Z · LW · GW

My current heuristic is to take special note of the times a well-performing LessWrong post identifies one of the hundreds of point-biases I've formalized in my own independent analysis of every person and disagreement I've ever seen or imagined.

I'm sure there are better methods to measure that LessWrong can figure out for itself, but mine works pretty well for me.

Comment by BaconServ on Less Wrong’s political bias · 2013-10-26T02:36:01.373Z · LW · GW

If there are as many issues as you suggest, then we should start the discussion as soon as possible—so as to resolve them sooner. Can you imagine a LessWrong that can discuss literally any subject in a strictly rational manner and not have to worry about others getting upset or mind-killed by this or that sensitivity?

If I'm decoding your argument correctly, you're saying that there's no obviously good method to manage online debate?

Comment by BaconServ on Less Wrong’s political bias · 2013-10-26T02:31:26.748Z · LW · GW

I certainly hope not. If politics were less taboo on LessWrong, I would hope that mention of specific parties would still be taboo. Politics without tribes seems a lot more useful to me than politics with tribes.