Posts

Farewell Aaron Swartz (1986-2013) 2013-01-12T10:09:23.640Z
Implications of an infinite versus a finite universe 2012-12-21T17:12:59.979Z
Musk, Mars and x-risk 2012-11-23T20:32:53.247Z
[LINK] TEDx "When creative machines overtake man" 2012-11-06T18:03:01.837Z
[LINK] Different heuristics at different decision speeds 2012-10-04T06:34:33.842Z
[LINK] Cryonics - without even trying 2012-08-17T08:41:27.895Z

Comments

Comment by Kawoomba on New program can beat Alpha Go, didn't need input from human games · 2017-10-22T09:15:10.463Z · LW · GW

... and there is only one choice I'd expect them to make, in other words, no actual decision at all.

Comment by Kawoomba on How I'd Introduce LessWrong to an Outsider · 2017-05-03T20:42:41.939Z · LW · GW

If this were my introduction to LW, I'd snort and go away. Or maybe stop to troll for a bit -- this intro is soooo easy to make fun of.

Well, glad you didn't choose the first option, then.

Comment by Kawoomba on CFAR’s new focus, and AI Safety · 2016-12-09T17:05:40.673Z · LW · GW

The catch-22 I would expect with CFAR's efforts is that anyone buying their services is already demonstrating a willingness to actually improve his/her rationality/epistemology, and is looking for effective tools to do so.

The bottleneck, however, is probably not the unavailability of such tools, but rather the introspectivity (or lack thereof) that results in a desire to actually pursue change, rather than simply virtue-signal the typical "I always try to learn from my mistakes and improve my thinking".

The latter mindset is the one most urgently in need of actual improvement, but its bearers won't flock to CFAR unless it has gained acceptance as an institution with which you can virtue-signal (which can confer status). While some universities manage to walk that line (providing status affirmation while actually conferring knowledge), CFAR's mode of operation would optimally entail "virtue-signalling ML students in on one side", "rationality-improved ML students out on the other side", which is a hard sell, since signalling an improvement in rationality will always be cheaper than the real thing (and the difference is quite non-obvious for the uninitiated to tell).

What remains is helping those who have already taken that most important step of effective self-reflection and are looking for further improvement. A laudable service to the community, but probably far from changing general attitudes in the field.

Taking off the black hat, I don't have a solution to this perceived conundrum.

Comment by Kawoomba on Yudkowsky vs Trump: the nuclear showdown. · 2016-11-12T08:10:26.209Z · LW · GW

Climate change, while potentially catastrophic, is not an x-risk. Nuclear war is only an x-risk for a subset of scenarios.

Comment by Kawoomba on Gratitude Thread :-) · 2016-04-19T20:24:45.812Z · LW · GW

The scarier thought is how often we're manipulated that way when people don't bungle their jobs. The few heuristics we use to identify such mischief are trivially misled -- for example, establishing plausibility by posting on inconsequential other topics (at least on LW that incurs a measurable cognitive footprint, which is not the case on, say, Reddit) -- and then there's always Poe's law to consider. Shills man, shills everywhere!

As the dictum goes, just cuz you're paranoid ...

Reminds me of Ernest Hemingway's apparent paranoid delusions of being under FBI surveillance ... only eventually it turned out he actually was. Well, at least if my family keep playing their roles well enough, from a functional blackbox perspective the distinction may not matter that much anyways. I wonder how they got the children to be such good actors, though. Mind chip implants?

As an aside, it's kind of curious that Prof. Tsipursky does his, let's say "social engineering", under his real name.

Anyways, good entertainment. Though on this forum, it's more of a guilty pleasure (drama is but a weed in our garth of rationality).

Comment by Kawoomba on The Thyroid Madness : Core Argument, Evidence, Probabilities and Predictions · 2016-03-14T19:54:24.322Z · LW · GW

Disclaimer: Only spent 20 minutes on this, so it might be incomplete, or you may already have addressed some of the following points:

At first glance, John Lowe authored two PubMed-listed papers on the topic.

The first of these was published in an open journal with no peer review (Med. Hypotheses), which has also published material on e.g. AIDS denialism. From his paper: "We propose that molecular biological methods can provide confirmatory or contradictory evidence of a genetic basis of euthyroid FS [Fibromyalgia Syndrome]." That's it. Proposing a hypothesis, not providing experimental evidence, paper ends.

The second paper was published in a somewhat controversial low-impact journal (at least it was peer-reviewed). However, this apparently one-and-only peer-reviewed, published paper actually contradicts the expected results, so Lowe pulls off a somewhat convoluted move to save his hypothesis:

"TSH, FT3, or FT4 did not correlate with RMR [Resting Metabolic Rate] values. For two reasons, however, ITHR [Inadequate Thyroid Hormone Regulation] cannot be ruled out as the mechanism of FM [Fibromyalgia] patients’ lower RMRs: (1) TSH, FT3 , and FT4 levels have not been shown to reliably correlate with RMR values, and (2) these tests evaluate only pituitary-thyroid axis function and cannot rule out central HO and PRTH."

Yea ...

In addition, lots of crank signs: Lowe's review from 2008, along with his other writings, is "published" in a made-up "journal" which still lists him (from beyond the grave, apparently) as the editor-in-chief.

No peer review, pretending to be an actual journal, a plethora of commercial sites citing him and his research ... honi soit qui mal y pense!

Comment by Kawoomba on Open Thread March 7 - March 13, 2016 · 2016-03-09T10:09:06.307Z · LW · GW

I wonder if / how that win will affect estimates on the advent of AGI within the AI community.

Comment by Kawoomba on The Fable of the Burning Branch · 2016-02-11T14:57:45.428Z · LW · GW

You got me there!

Comment by Kawoomba on Rationality Merchandise - First Set · 2015-11-15T20:56:01.741Z · LW · GW

Please don't spam the same comment to different threads.

Comment by Kawoomba on How could one (and should one) convert someone from pseudoscience? · 2015-10-05T16:04:21.703Z · LW · GW

Hey! Hey. He. Careful there, a propos word inflation. It strikes with a force of no more than one thousand atom bombs.

Are you really arguing for keeping ideologically incorrect people barefoot and pregnant, lest they harm themselves with any tools they might acquire?

Sounds as good a reason as any!

maybe we should shut down LW

I'm not sure how much it counts, but I bet Chef Ramsay would've shut it down long ago. Betting is good, I've learned.

Comment by Kawoomba on Digital Immortality Map: How to collect enough information about yourself for future resurrection by AI · 2015-10-04T17:18:59.057Z · LW · GW

As seen in the first episode of the series Caprica, quoth Zoe Graystone:

"(...) the information being held in our heads is available in other databases. People leave more than footprints as they travel through life; medical scans, dna profiles, psych evaluations, school records, emails, recording, video, audio, cat scans, genetic typing, synaptic records, security cameras, test results, shopping records, talent shows, ball games, traffic tickets, restaurant bills, phone records, music lists, movie tickets, tv shows... even prescriptions for birth control."

I, for one, think that the meme-mix defining our identity could in itself capture (predict) our behavior in large part, forgoing biographical minutiae. Bonesaw in Worm didn't need precise memories to recreate the Slaughterhouse Nine clones.

Many think we can zoom out from atoms to a connectome, why not zoom out from a connectome to the memes it implements?

Comment by Kawoomba on Rationality Quotes Thread September 2015 · 2015-10-04T08:55:29.275Z · LW · GW

"Mind" is a high level concept, on a base level it is just a subset of specific physical structures. The precise arrangement of water molecules in a waterfall, over time, matches if not dwarves the KC of a mind.

That is, if you wanted to recreate precisely this or that waterfall as it precisely happened (with the orientation of each water molecule preserved with high fidelity), the strict computational complexity would be way higher than for a comparatively more ordered and static mind.

The data doesn't care what importance you ascribe to it. It's not as if, say, "power" automatically comes with "hard to describe computationally". On the contrary, allowing a function to make arbitrary code changes is easier to implement than defining precise power limitations (see constraining an AI's utility function).

Then there's the sheer number of mind-phenomena: are you suggesting that adding one necessarily increases complexity? In fact, removing one can increase it as well: if I were to describe a reality in which ceteris is paribus, with the exception of your mind not actually being a mind, then by removing a mind I would have increased overall complexity. That's not even taking into account that there are plenty of mind-templates around already (implicitly, since KC, even though uncomputable, is optimal), and that for complexity purposes adding another instance of a template doesn't necessarily add much (I'm aware that adding even a few bits already comes with a steep penalty; this comment isn't meant to be exhaustive). See also the alphabet example further on.

Then there's the illusion that our universe is somehow of low complexity just because the physical laws governing the transition between time-steps are simple. That is mistaken. If we just look at the laws, and start with a big bang that is not precisely informationally described, we get a multiverse, a whole host of possible universes, with ours not at the beginning of the output, which runs counter to what KC demands. You may say "I don't care, as long as our universe is somewhere in the output, that's fine". But then I propose an even simpler theory of everything: output a long enough sequence of the digits of Pi, and you eventually get our universe somewhere down the line as well. So our universe's actual complexity is enormous, down to the atoms in a stone on a hill on some moon somewhere in the next galaxy. There exists a clear trade-off between explanatory power and conciseness. I used to link an old Hutter lecture on that latter topic a few years ago; I can dig it out if you'd like. (ETA: See for example the paragraph labeled "A" on page 6 in this paper of his).

The old argument that |"universe + mind"| > |"universe"| is simplistic and ill-applied. Unlike with probabilities, the sequence ABCDABCDABCDABCD can be less complex than ABCDABCDABCDABC.
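To make that sequence point concrete, here is a toy sketch of my own (not from the original comment), using the length of a literal Python expression as a crude stand-in for description length. It is not a real Kolmogorov measure, but it shows how the longer, fully periodic string can have the shorter description:

```python
# Toy proxy for description length: the shortest obvious Python expression
# that generates each string (an assumption for illustration only, not an
# actual Kolmogorov complexity computation).

full = "ABCD" * 4          # "ABCDABCDABCDABCD", 16 characters
cut  = "ABCD" * 3 + "ABC"  # "ABCDABCDABCDABC",  15 characters

desc_full = '"ABCD" * 4'          # 10-character description
desc_cut  = '"ABCD" * 3 + "ABC"'  # 18-character description

# Each description really does generate its string.
assert eval(desc_full) == full and eval(desc_cut) == cut

print(len(full), len(desc_full))  # 16 10 -> longer string, shorter description
print(len(cut),  len(desc_cut))   # 15 18 -> shorter string, longer description
```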

The list goes on, if you want to focus on some aspect of it we can go into greater depth on that. Bottom line is, if there's a slam dunk case, I don't see it.

Comment by Kawoomba on Rationality Quotes Thread September 2015 · 2015-10-03T18:26:57.395Z · LW · GW

LessWrong has now descended to actually arguing over the Kolmogorov complexity of the Christian God, as if this was a serious question.

Well, there is a lot of motivated cognition on that topic (relevant disclaimer: I'm an atheist in the conventional sense of the word) and it seems deceptively straightforward to answer (mostly by KC-dabblers), but it is in fact anything but. The non-triviality arises from technical considerations, not some philosophical obscurantism.

This may be the wrong comment chain to get into it, and your grandstanding doesn't exactly signal an immediate willingness to engage in medias res, so I won't elaborate for the moment (unless you want me to).

Comment by Kawoomba on The Infinity Project · 2015-09-28T20:16:48.972Z · LW · GW

If you're looking for gullible recruits, you've come to the wrong place.

Don't lease the Ferrari just yet.

Comment by Kawoomba on Why Don't Rationalists Win? · 2015-09-12T20:22:30.098Z · LW · GW

What are you talking about?

Comment by Kawoomba on Open Thread August 31 - September 6 · 2015-09-11T20:58:46.785Z · LW · GW

History can be all things to all people; like the shape of a cloud, it's a canvas onto which one can project nearly any narrative one fancies.

Comment by Kawoomba on Why Don't Rationalists Win? · 2015-09-11T20:56:48.898Z · LW · GW

Their approach reduces to an anti-epistemic affect-heuristic, using the ugh-field they self-generate in a reverse affective death spiral (loosely based on our memeplex) as a semantic stopsign, when in fact the Kolmogorov distance to bridge the terminological inferential gap is but an epsilon.

Comment by Kawoomba on You Are A Brain - Intro to LW/Rationality Concepts [Video & Slides] · 2015-08-16T08:01:09.474Z · LW · GW

Good content, however I'd have preferred "You Are A Mind" or similar. You are an emergent system centered on the brain and influences upon it, or somesuch. It's just that "brain" has come to refer to 2 distinct entities -- the anatomical brain, and then the physical system generating your self. The two are not identical.

Comment by Kawoomba on Open Thread February 25 - March 3 · 2015-08-16T07:54:12.614Z · LW · GW

Well, I must say my comment's belligerence-to-subject-matter ratio is lower than yours. "Stamped out"? Such martial language, I can barely focus on the informational content.

The infantile nature of my name-calling actually makes it easier to take the holier-than-thou position (which my interlocutor did, incidentally). There's a counter-intuitive psychological layer to it which actually encourages dissent, and with it increases engagement on the subject matter (your own comment notwithstanding). With certain individuals at least, which I (correctly) deemed to be the case in the original instance.

In any case, comments on tone alone would be more welcome if accompanied with more remarks on the subject matter itself. Lastly, this was my first comment in over 2 months, so thanks for bringing me out of the woodwork!

I do wish that people were more immune to the allure of drama, lest we all end up like The Donald.

Comment by Kawoomba on Taking Effective Altruism Seriously · 2015-06-06T17:56:31.038Z · LW · GW

Certainly, within what's Good (tm) and Acceptable (tm), funding better education in the third world is the most effective method.

However, if you go far enough outside the Overton window, you don't need credibility, as long as the power asymmetry is big enough. You want food? It only comes with a chemical agent which sterilizes you, similar to Golden Rice. You don't need to accept it; you're free to starve. The failures of colonialism, as well as the most recent forays into the Middle East, stem from the constraints of also having to placate the court of public opinion.

Regardless of this one example, are you taking the position of "the most effective methods are those within the Overton window"? That would be typical, but the actual question would be: Is it because changing the Overton window to include more radical options is too hard, or is it because those more radical options wouldn't feel good?

Comment by Kawoomba on Taking Effective Altruism Seriously · 2015-06-06T16:54:30.072Z · LW · GW

I too have the impression that for the most part the scope of the "effective" in EA refers to "... within the Overton window". There's the occasional stray 'radical solution', but usually not much beyond "let's judge which of these existing charities (all of which are perfectly societally acceptable) are the most effective".

Now there are two broad categories to explain that:

a) Effective altruists want immediate or at least intermediate results / being associated with "crazy" initiatives could mean collateral damage to their efforts / changing the Overton window to accommodate actually effective methods would be too daunting a task / "let's be realistic", etc.

b) Effective altruists don't want to upset their own System 1 sensibilities, their altruistic efforts would lose some of the fuzzies driving them if they needed to justify "mass sterilisation of third world countries" to themselves.

Solutions to optimization problems tend to set to extreme values all those variables which aren't explicitly constrained. The question then is which ideals we're willing to sacrifice in order to achieve our primary goals.

As an example, would we really rather have people decide just how many children they want to create, only to see those children perish in the resulting population explosion? Will we influence those decisions based only on "provide better education, then hope for the best", in effect preferring starving families with the choice to procreate whenever to non-starving families without said choice?

I do believe it would be disastrous for EA as a movement to be associated with ideas too far outside the Overton window, and that is a tragedy, because it massively restricts EA's maximum effectiveness.

Comment by Kawoomba on Taking the reins at MIRI · 2015-06-01T20:58:58.459Z · LW · GW

MIRI continues to be in good hands!

Comment by Kawoomba on We Should Introduce Ourselves Differently · 2015-05-19T16:49:58.914Z · LW · GW

I'm not sure LW is a good entry point for people who are turned away by a few technical terms. Responding to unfamiliar scientific concepts with an immediate surge of curiosity is probably a trait I share with the majority of LW'ers. While it's not strictly a prerequisite for learning rationality, it certainly is for starting in medias res.

The current approach is a good selector for dividing the chaff (well educated because that's what was expected, but no true intellectual curiosity) from the wheat (whom Deleuze would call thinkers-qua-thinkers).

HPMOR instead, maybe?

Comment by Kawoomba on Open Thread, May 11 - May 17, 2015 · 2015-05-11T16:39:15.390Z · LW · GW

That's a good argument if you were to construct the world from first principles. You wouldn't get the current world order, certainly. But just as arguments against, say, nation-states, or multi-national corporations, or what have you, do little to dissuade believers, the same applies to let-the-natural-order-of-things-proceed advocates. Inertia is what it's all about. The normative power of the present state, if you will. Never mind that "natural" includes antibiotics, but not gene modification.

This may seem self-evident, but what I'm pointing out is that by saying "consider this world: would you still think the same way in that world?" you'd be skipping the actual step of difficulty: overcoming said inertia, leaving the cozy home of our local minimum.

Comment by Kawoomba on Is Scott Alexander bad at math? · 2015-05-06T22:24:46.661Z · LW · GW

Disclosing one's sexual orientation won't be (mis)construed as a status grab in the same way as disclosing one's (real or imagined) intellectual superiority. Perceived arguments from authority must be handled with supreme care; otherwise they invariably set the stage for a primate hierarchy contest. Minute details in phrasing can make all the difference: "I could engage with people much smarter than you, yet I choose to help you, since you probably need my help and my advice" versus "I've had the following experiences; hopefully someone [impersonal, not triggering status comparisons] can benefit from them". sigh, hoo-mans ... I could laugh at them all day if I wasn't one of them.

I'm happy to read your posts, but then I may be less picky about my cognitive diet than others. I mean, the alternative would be watching Hell's Kitchen. You do beat Gordon Ramsay on the relevant metrics, by a large amount.

Then again, maybe I'm just a bit jealous of your idealism.

Comment by Kawoomba on Is Scott Alexander bad at math? · 2015-05-05T18:17:10.341Z · LW · GW

I dislike the trend of cuddlifying everything, of making approving noises no matter what, and of framing criticisms as merely some avenue for potential further advances, or somesuch.

On the one hand, I do recognize that works better for the social animals that we are. On the other hand, aren't we (mostly) adults here -- do we really need our hand held constantly? It's similar to the constant stream of "I LOVE YOU SO MUCH" in everyday interactions: a race to the bottom in terms of deteriorating signal/noise ratios. How are we supposed to convey actual approval, shout it from the rooftops? Until that is the new de facto standard of neutral acknowledgment?

A Fisherian runaway, in which a simple truth is disregarded: When "You did a really good job with that, it was very well said, and I thank you for your interest" is a mandatory preamble to most any feedback, it loses all informational content. A neutral element of speech. I do wish for a reset towards more sensible (= information-driven) communication. Less social-affirmation posturing.

But, given the sensitive nature of topics here, this may be the wrong avenue to effect such a reset, invoking Crocker's Rules or no. Actually skipping the empty phraseology should be one of the later biases to overcome.

Comment by Kawoomba on Is Scott Alexander bad at math? · 2015-05-05T18:07:44.912Z · LW · GW

I don't think that it carves reality at its joints to call that "mathematical ability."

... and we're down to definitional quibbles, which are rarely worth the effort, other than simply stating "I define x as such and such, in contrast to your defining x as such and such". Reality has no intrinsic, objective dictionary with an entry for "mathematical ability", so such discussions can mostly be resolved by using some derivative terms x1 and x2, instead of an overloaded concept x.

Of course, the discussion often reduces to who has the primacy on the original wording of x, which is why I'd suggest that neither get it / taboo x.

I agree that a more complex, nuanced framework would better correspond to different aspects of cognitive processing, but then that's the case for most subject matters. Bonus for not being as generally demotivating as "you lack that general quality called math ability", malus points because of a complexity penalty.

Comment by Kawoomba on [link] The surprising downsides of being clever · 2015-04-20T23:20:37.445Z · LW · GW

Teaching happiness can be -- and often is -- at odds with teaching epistemic rationality.

Comment by Kawoomba on Open thread, Mar. 9 - Mar. 15, 2015 · 2015-03-10T14:23:14.446Z · LW · GW

Just being a good germ vector.

Comment by Kawoomba on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 113 · 2015-03-01T15:27:28.210Z · LW · GW

I amended the grandparent. Suppose for the sake of argument you agreed with my estimate of this being the proverbial "last, best hope". Then giving away the one potentially game-changing advantage to barter for a globally insignificant "victory" would be the epitome of an overly greedy algorithm. Losing sight of the actual goal because an authority figure told you so; in a way, not thinking for yourself beyond the task as stated.

Making that point sounds, on reflection, like exactly the type of thing I'd expect Eliezer to do. Do what I mean, not what I say.

as opposed to, say, Voldemort killing himself immediately and fighting me within the horcrux system

Ocupado. Assuming it was not, even Voldemort would have some sort of reaction latency to such an outside context problem. Even assuming he reacted instantly, that still sounds like better chances than buying a few days of unconsciousness.

Comment by Kawoomba on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 113 · 2015-03-01T15:00:02.765Z · LW · GW

Personally, I feel that case 1 ("doesn't work at all") is much more probable

I've come to the opposite conclusion. Should we drag out quotes to compare evidence? Is your estimate predicated on just one or two strong arguments, and if so, could I bother you to state them? Most of the probability mass in my estimate comes from Voldemort's former reluctance to test the horcrux system and his prior blind spots as a rationalist when designing the system, and from the oft-reinforced notion of Harry actually being a version of Tom Riddle, indistinguishable to a 'powerful' magical artifact (the Map), acting as an adult while being an 11-year-old, "Riddles and Answers", the FF.net title, etc.

Speaking up prolongs Harry's life until Voldemort does an experimental test.

The actual challenge may be to notice that the challenge isn't well-posed, that the binary variable to be optimized ("live, if only a little longer") is but a greedy solution, probably suboptimal for reaching the actual goal. Transcend the teacher's challenge, solve the actual problem, you know?

Speaking up gives up an easy win

Kind of important. Winning the test, losing the war.

3) Horcrux hijacking works, and there's no workaround. It doesn't matter if Harry speaks up or not.

I disagree, it matters: Voldemort goes back to the mirror, freezes Harry in time, and keeps him unconscious via his Death Eaters. He outclasses everyone else who's left by orders of magnitude more than he does Harry, from what we've seen. There are plenty of ways to simply cryonically freeze Harry, then keep him under Death Eater guard until he has made sure he's closed the loopholes. Consider that he only learned he could test the system without danger to himself -- by using others as proxy "test units" -- a few hours prior to current events.

PS: There's, incidentally, a zen-like beauty to the solution: in order to survive, all you need to do is die.

Comment by Kawoomba on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 113 · 2015-03-01T14:50:27.487Z · LW · GW

"No action" is an action, same as any other (for a grokkable reference, see consequentialists and the Trolley experiment). Also, obviously it wouldn't be "no action" it would be selling Voldemort the idea that there's nothing left, maybe revealing the secret deemed most insignificant and then begging for that to apply to both parents.

Comment by Kawoomba on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 113 · 2015-03-01T14:24:55.167Z · LW · GW

1) (Harry tells Voldemort his death could hijack the horcrux network) doesn't seem unlikely at all. Both hints from within the story (the Marauder's Map) and on the meta level ("Riddles and Answers") suggest an unprecedented congruence of identity, at least in the sense of magical artifacts (the map) being unable to tell the difference.

I did not post it since, strictly speaking, Harry should keep quiet about it. Losing the challenge of not dying (learned to lose), but increasing his chances of winning the war. Immediately, even: since the new horcrux system enables ghost travel, Harry could just try to overwrite / take possession of Voldemort's body. Either it works and he wins, or it doesn't and the magic resonance kills ... well, kills only Voldemort, since Harry at that point would be the undead spirit.

That solution occurred to me as I was reading the challenge, and I was puzzled that on my (admittedly cursory) reading of a bunch of solutions, I did not find any exactly resembling it. Either the approach is deeply flawed and I don't see it, or everyone else is taking this as literally as I did and holding off on proposing it (since it may not be precisely the teacher's password as worded in the challenge), or something else.

Comment by Kawoomba on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 113 · 2015-03-01T08:57:07.200Z · LW · GW

Skimming over (part of) the proposed solutions on FF.net has thoroughly destroyed any sense of kinship with the larger HPMoR readership. Darn, it was such a fuzzily warm illusion. In concordance with Yvain's latest blog post, there may be few if any general shortcuts to raising the sanity waterline.

Comment by Kawoomba on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapters 105-107 · 2015-02-17T16:34:03.264Z · LW · GW

Harry's commitment is quite weaksauce, and it was surprising that he wasn't called on it:

I sshall help you obtain the Sstone (...) sshall not do anything I think will annoy you to no good end. Sshall call no help if I expect them to be killed by you or for hosstagess to die.

So he's free to call help as long as he expects to win the ensuing encounter. After which he could hand the Stone to a subdued Quirrell for just a moment, technically fulfilling that clause as well. Also, the "to no good end" qualifier? "Winning against Voldemort" certainly would count as a good end, cancelling that part as well.

Comment by Kawoomba on An alarming fact about the anti-aging community · 2015-02-17T06:15:38.928Z · LW · GW

Well, depends on how much you discount the expected utility of cryonics due to Pascal's Wager concerns. The variance of the payoff for freezing tissue certainly is much smaller, and freezing tissue really isn't a big deal from a technical or even societal point of view, as evidenced by, say, female egg freezing for fertility preservation.

Comment by Kawoomba on The Truth About Mathematical Ability · 2015-02-14T12:35:32.122Z · LW · GW

The (?) proves you right about the philosophy part.

Comment by Kawoomba on Sayeth the Girl · 2015-02-01T11:43:40.921Z · LW · GW

Seems like there's some feminists or some 6'5 men with a superiority complex around.

Well, I am 6'7, without a superiority complex of course. That's not why I downvoted you, though, and since you asked for an explanation:

I'm reading the comments and looking for some new ammo for my next fight in the gender wars.

That's not the kind of approach (arguments as soldiers) we're looking for in a rationality forum. One of the prerequisites is a willingness to change your mind, which seems to be setting the bar too high for some people.

Comment by Kawoomba on Rationality Quotes January 2015 · 2015-01-29T10:59:20.027Z · LW · GW

I'd call it an a-rationality quote, in the sense that it's just an observation; one backed up by evidence but with no immediate relevancy to the topic of rationality.

On second thought, it does show a kind of bias, namely the "compete-for-limited-resources" evolutionary imperative which introduced the "bias" of treating most social phenomena as zero-sum games. Bias in quotes because there is no correct baseline to compare against, tendency would probably be a better term.

Comment by Kawoomba on Open thread, Jan. 26 - Feb. 1, 2015 · 2015-01-28T20:41:30.510Z · LW · GW

Strong statement from Bill Gates on machine superintelligence as an x-risk, on today's Reddit AMA:

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned.

Comment by Kawoomba on I tried my hardest to win in an AI box experiment, and I failed. Here are the logs. · 2015-01-28T08:10:34.418Z · LW · GW

The whole AI box experiment is a fun pastime, and educational insofar as it teaches us to take artificial intellects seriously, but as real-world long-term "solutions" go, it is utterly useless. Like trying to contain nuclear weapons indefinitely, except you can build one just by having the blueprints and a couple of leased hours on a supercomputer, no scarce natural elements necessary, and having one means you win at whatever you desire (or that's what you'd think). All the while under the increasing pressure of improving technology, ever lowering the threshold to catastrophe. When have humans abstained from playing with the biggest fire they can find?

The best case scenario for AI boxing would be that people aware of the risks (unlikely because of motivated cognition) are the first to create an AGI (not just stumbling upon one, either) and use their first-mover advantage to box the AI just long enough (just having a few months would be lucky) to poke and prod it until they're satisfied it's mostly safe ("mostly" because whatever predicates the code fulfills, there remains the fundamental epsilon of insecurity of whether the map actually reflects the territory).

There are so many state actors, so many irresponsible parties involved in our sociological ecosystem, with so many chances of taking a wrong step, so many biological imperatives counter to success*, that (coming full circle to my very first comment on LW years ago) the whole endeavor seems like a fool's hope, and that only works out in Lord of the Rings.

But, as the sentient goo transforms us into beautiful paperclips, it's nice to know that at least you tried. And just maybe we get lucky enough that the whole take-off is just slow enough, or wonky enough, for the safe design insights to matter in some meaningful sense, after all.

* E.g. one AGI researcher defecting with the design to another group (which is also claiming to have a secure AI box / some other solution) would be a billionaire for the rest of his life, that being measured in weeks most likely. Such an easy lie to tell yourself. And that's not even counting the case where a relevant government agency doesn't even have to ask to get your designs, because anyone of reputation tipped them off or they simply followed the relevant conferences (nooo, would they do that?).

Comment by Kawoomba on Rationality Quotes January 2015 · 2015-01-28T07:34:07.806Z · LW · GW

The quote was dashed by the poster.

Comment by Kawoomba on What are the resolution limits of medical imaging? · 2015-01-26T16:35:20.795Z · LW · GW

Contains a lot of guesstimates though, which it freely admits throughout the text (in the abstract, not so much). It's a bit like a very tentative Fermi estimate.

Comment by Kawoomba on Rationality Quotes January 2015 · 2015-01-24T18:33:14.454Z · LW · GW

I truly am torn on the matter. LW has caused a good amount of self-modification away from that position, not in the sense of diminishing the arguments' credence, but in the sense of "so what, that's not the belief I want to hold" (which, while generally quite dangerous, may be necessary with a few select "holy belief cows")*.

That personal information notwithstanding, I don't think we should only present arguments supporting positions we are convinced of. That -- given a somewhat homogeneous group composition -- would amount to an echo chamber, and in any case knock out Aumann's agreement theorem.

* Ironic, is it not? Analogous to "shut up and do the impossible" a case of instrumental versus epistemic rationality.

Comment by Kawoomba on Rationality Quotes January 2015 · 2015-01-23T19:59:31.805Z · LW · GW

There are analogues of the classic biases in our own utility functions; it is a blind spot to hold our preferences, as we perceive them, to be sacrosanct. Just as we can be mistaken about the correct solution to Monty Hall, so can we be mistaken about our own values. It's a treasure trove for rational self-analysis.

We have an easy enough time figuring out that a religious belief is blatantly ridiculous because we find some claim it makes that's contrary to the evidence. But say someone takes out all such obviously false claims; or take a patriot, someone who professes to just deeply care about his country, or her bat mitzvah, or her white wedding, or what have you. Even then, there is more cognitive exploration to be had there than just shrugging and saying "can't argue with his/her utility function".

The quote does some work in that direction. From a certain point of view, altruism is the last, most persistent bias. Far from "there is a light in the world, and we are it" -- rather the final glowing ember on the bonfire of irrationality. But that's a long post in and of itself. Shrug, if you don't see it as a rationality quote, just downvote it.

Comment by Kawoomba on Rationality Quotes January 2015 · 2015-01-23T19:26:13.981Z · LW · GW

Suppose now that there were two such magic [invisibility] rings [of Gyges], and the just put on one of them and the unjust the other; no man can be imagined to be of such an iron nature that he would stand fast in justice. No man would keep his hands off what was not his own when he could safely take what he liked out of the market, or go into houses and lie with any one at his pleasure, or kill or release from prison whom he would, and in all respects be like a god among men.

Then the actions of the just would be as the actions of the unjust; they would both come at last to the same point. And this we may truly affirm to be a great proof that a man is just, not willingly or because he thinks that justice is any good to him individually, but of necessity, for wherever any one thinks that he can safely be unjust, there he is unjust.

For all men believe in their hearts that injustice is far more profitable to the individual than justice, and he who argues as I have been supposing, will say that they are right. If you could imagine any one obtaining this power of becoming invisible, and never doing any wrong or touching what was another's, he would be thought by the lookers-on to be a most wretched idiot, although they would praise him to one another's faces, and keep up appearances with one another from a fear that they too might suffer injustice.

Glaucon, in Plato's Republic

Comment by Kawoomba on I'm the new moderator · 2015-01-15T07:33:22.797Z · LW · GW

Is it ok to call people poopy-heads, but in a mature and intelligent manner?

Signs and portents ...

Comment by Kawoomba on How subjective is attractiveness? · 2015-01-14T19:56:33.693Z · LW · GW

Don't knock it 'til you try it.

Comment by Kawoomba on Rationality Quotes January 2015 · 2015-01-12T15:17:54.404Z · LW · GW

Truth had never been a priority. If believing a lie kept the genes proliferating, the system would believe that lie with all its heart.

Peter Watts, Echopraxia, on altruism. Well ok, I admit, not on altruism per se.

Comment by Kawoomba on MIRI's technical research agenda · 2015-01-11T07:08:43.342Z · LW · GW

It is simply unfathomable to me how you come to the logical conclusion that an UFAI will automatically and instantly and undetectably work to bypass and subvert its operators. Maybe that’s true of a hypothetical unbounded universal inference engine, like AIXI. But real AIs behave in ways quite different from that extreme, alien hypothetical intelligence.

Well, it follows pretty straightforwardly from point 6 ("AIs will want to acquire resources and use them efficiently") of Omohundro's The Basic AI Drives, given that the AI would prefer to act in a way conducive to securing human cooperation. We'd probably agree that such goal-camouflage would be what an AI would attempt above a certain intelligence-threshold. The difference seems to be that you say that threshold is so high as to practically only apply to "hypothetical unbounded universal inference engines", not "real AIs". Of course, your "undetectably" requirement does a lot of work in raising the required threshold, though "likely not to be detected in practice" translates to something different than, say, "assured undetectability".

The softer the take-off (plus, the lower the initial starting point in terms of intelligence), the more likely your interpretation would pan out. The harder the take-off (plus, the higher the initial starting point in terms of intelligence), the more likely So8res' predicted AI behavior would be to occur. Take-off scenarios aren't mutually exclusive. On the contrary, the probable temporal precedence of the advent of slow-take-off AI with rather predictable behavior could lull us into a sense of security, not expecting its slightly more intelligent cousin, taking off just hard enough, and/or unsupervised enough, that it learns to lie to us (and since we'd group it with the reference class of CogSci-like docile AI, staying undetected may not be as hard as it would have been for the first AI).

So which is it?

Both, considering the task sure seems hard from a human vantage point, and by definition will seem easy from a sufficiently intelligent agent's.