Comment by tenlier on The Universal Medical Journal Article Error · 2013-04-08T19:36:00.450Z · LW · GW

Gwern, I should be able to say that I appreciate the time you took to respond (which is snarky enough), but I am not able to do so. You can't trust that your response to me is inappropriate, and I can't find any reason to invest myself in proving that it is. I'll agree my comment to you was somewhat inappropriate, and while turnabout is fair play (and first provocation warrants an added response), it is not helpful here (whether deliberate or not). Separate from that, I disagree with you (your response is, historically, how people have managed to be wrong a lot). I'll retire once more.

I believe it was suggested to me, when I first asked about the potential value of this place, that they could help me with my math.

Comment by tenlier on The Universal Medical Journal Article Error · 2013-04-08T16:14:10.674Z · LW · GW

Sorry, Gwern, I may be slandering you, but I thought I noticed it long before that (I've been reading, despite my silence). Another thing I have accused you of, in my head, is a failure to appropriately apply a multiple test correction when doing some data exploration for trends in the less wrong survey. Again, I may have you misidentified. Such behavior is striking, if true, since it seems to me one of the most basic complaints Less Wrong has about science (somewhat incorrectly).

Edited: Gwern is right (on my misremembering). Either I was skimming and didn't notice Gwern was quoting, or I just mixed up corrector with corrected. Sorry about that. In possible recompense, here is what I would recommend for data exploration: decide ahead of time whether you have some particularly interesting hypothesis or not. If not, and you're just going to check lots of stuff, then commit to that and to the appropriate multiple test correction at the end. That level of correction then also saves your 'noticing' something interesting and checking it specifically from being circular (because you were already checking 'everything' and correcting appropriately).
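What that commit-in-advance approach might look like in practice — a minimal sketch, not anything from the survey analysis being discussed; the function name, the example p-values, and the false-discovery-rate threshold are all illustrative. The Benjamini-Hochberg procedure is one standard choice of multiple-test correction when you've committed to checking 'everything':

```python
def benjamini_hochberg(p_values, fdr=0.05):
    """Return indices of tests judged significant at the given
    false discovery rate, chosen BEFORE looking at the data."""
    m = len(p_values)
    # Sort p-values ascending, remembering their original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k (1-indexed) with p_(k) <= (k/m) * fdr.
    cutoff = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= (rank / m) * fdr:
            cutoff = rank
    # Everything at or below that rank is declared significant.
    return sorted(order[:cutoff])

# 20 exploratory tests: one strong effect, the rest noise-like.
p_vals = [0.001] + [0.2 + 0.04 * i for i in range(19)]
print(benjamini_hochberg(p_vals, fdr=0.05))  # only the first test survives
```

Because the correction was fixed in advance over all m tests, following up on any one 'interesting' result adds no extra degrees of freedom — exactly the non-circularity point made above.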

Comment by tenlier on The Universal Medical Journal Article Error · 2013-04-08T15:56:57.257Z · LW · GW

This is going to be yet another horrible post. I just go meta and personal. Sorry.

I don't understand how this thread (and a few others like it) on stats can happen; in particular, your second point (re: the basic mistake). It is the single solitary thing any person who knows any stats at all knows. Am I wrong? Maybe 'knows' meaning 'understands'. I seem to recall the same error made by Gwern (and pointed out). I mean, the system works in the sense that these comments get upvoted, but it is like... people having strong technical opinions with very high confidence about Shakespeare without being able to write out a sentence. It is not inconceivable the opinions are good (stroke, language, etc.), but it says something very odd about the community that this happens regularly and is not widely noticed. My impression is that Less Wrong is insane on statistics, particularly, and some areas of physics (and social aspects of science and philosophy).

I didn't read the original post, paper, or anything other than some comment by Goetz which seemed to show he didn't know what a p-value was and had a gigantic mouth. It's possible I've missed something basic. Normally, before concluding a madness in the world, I'd be careful. For me to be right here means madness is very very likely (e.g., if I correctly guess it's -70 outside without checking any data, I know something unusual about where I live).

Comment by tenlier on [Link] RSA Animate: extremely entertaining LW-relevant cartoons · 2012-06-25T17:33:13.411Z · LW · GW

I'll backtrack from "last post" for 6 months to last conversation for 6 months. Viliam, you're a reasonably upvoted dude. You seem pretty normal for these parts. Exactly how annoyed do I get to be that your response to me is dumb? Isn't the commitment to some aspects of rationality exemplified by my complete inability to restrain my annoyance with your being an idiot of some value? Yes, yes, I could be better still.

Again, I think your response is very typically LessWrongian: wordy, vacant, stupid, irrational, over-confident with weaseliness to pretend otherwise, etc., etc. Do I get downvoted for telling you you're being an idiot in thinking you need an undergrad's knowledge in every field in order to know any part of the Sequences is outside that range? I didn't ask for an exhaustive list; I asked for one post exceeding an undergrad's knowledge in that field. Do I need to explain that in more detail? Do I lose points for being annoyed that you took the time to write all those words and not a second to think about them? Do I get to be insulted that your model of me is basically retarded, from my point of view? Maybe that's all you folks are capable of. Fine. What a shocking coincidence that your example comes from an area you never studied seriously, when I basically asked for just a single example of the opposite. The post you link to is fine but totally and completely uninformative to me.

Listen, of course you can defend your stupidity if you assume I'm a moron. You can say, well, if I don't know what they study in philosophy, I can't say blah isn't covered. Can we not have that idiotic conversation? Can we just acknowledge that if you have a good physics knowledge in some area, you know when the conversation is on physics in that area and when it is exceeded without knowing all other fields? Do I have to be as wordy as you? I clearly am being so; I didn't even bother reading the middle of your post. Just stupidity.

That's not very important, and certainly one of dozens of things wrong. However, it's something you'll see.

So, I pointed out an error you made. LessWrongians like when people point out errors they make. The only reason I pointed it out is that I was annoyed. Ideally you'd find some reason other than annoyance for me to talk to you (repeating the request: A post in the sequences that is informative). You can also conclude it is not worth talking to me when that is all that motivates me. Maybe you think you can modify my behavior, but not without a carrot. Perhaps upvotes and downvotes are supposed to serve in that way, but they don't for me.

Comment by tenlier on [Link] RSA Animate: extremely entertaining LW-relevant cartoons · 2012-06-24T21:10:45.280Z · LW · GW

I'm not surprised at being downvoted, and I don't mean that in the usual defensive way (i.e., "I have such a good model of you that it predicts your behavior, and your negative reaction is stupid; I'm superior, yadda yadda").

My behavior is worthy of being downvoted and some degree of annoyance with me is perfectly reasonable, appropriate, and likeable. Trying to extrapolate my annoyance from my behavior is misleading since I am not responding to what is irritating me and my (hidden specific) annoyance manifests as general irritability. I would say an appropriate criticism of me (which I have attempted to highlight when being critical) is the degree to which I am a collectivist in the way I think about LessWrong.

Let us try one more time; I have asked questions like this before. Let's make this my last post for a while (6 months); that way, if I return to make some sort of status ploy from this consideration, the promise to have left for that time will diminish the value of the plot. So, let us play a game where you answer this request as if it were genuine rather than a bid for status. Name a post in the Sequences I should read that I will find instructive. Is it really so difficult a request that you require a good model of me to answer? Just assume an undergrad's knowledge in every field. Is there any physics that surpasses what a good (but not exceptional) undergrad knows? Or biology, computer science, philosophy, etc.? I confess to finding the boxing experiment mildly interesting. In return, I will do something on my own time that would be useful to the Singularity Institute if they did it. If it works, I'll return, tell someone, and insult you all with greater justice.

Comment by tenlier on [Link] RSA Animate: extremely entertaining LW-relevant cartoons · 2012-06-24T14:53:52.940Z · LW · GW

There's a number of comments on this post where people wrongly think they know why someone is in disagreement with them:

Arguably others. The other material is either empty (minus humor) or simply correction of these sorts of trivial errors. I think this is very common on LessWrong.

One of my remaining interests in this place is discovering why I find you all pretty unlikeable. This is a change in my viewpoint since I started actually becoming familiar with the joint and is also pretty surprising since I overlap philosophically in ways that usually make me fond of people. I'd say my reaction to this place is roughly what I feel about anti-science types. Of course, a lot of the dialogue here is superficially anti-science, but I don't think that's what's setting me off. I think I really feel like this place is not just superficially anti-science. Something like your ideas about testing hypotheses and modifying beliefs are fine, but your hypotheses trend moronic (circling back to the opening point). Also, mainly concerned with superficialities (e.g., you will have an unduly strong reaction to my using the word "moronic"). Anyway, just some impressions. I think I'll test something (not in a Gwernian way).

Comment by tenlier on SIAI May report · 2012-06-17T23:14:16.532Z · LW · GW

Upvoted to +6 currently. Funny. I guess your answers are what humility and false humility map to if you're an idiot of the type LessWrongians appear to be. That is, when people are being humble (in not posting) you could say they're just afraid of not signalling high standards if they implied more people wanted to read, etc (shmushing your explanations into one). On one's own blog, onus is on the reader who knowingly visited. Wouldn't apply to true idiot LessWrongians; i.e., I expect if you did it, it would be about something simple like maintaining control.

Comment by tenlier on Open Thread, June 16-30, 2012 · 2012-06-17T16:25:12.543Z · LW · GW

Not just going meta for the sake of it: I assert you have not sufficiently thought through the implications of promoting that sort of non-openness publicly on the board. Perhaps you could PM jsalvatier.

I'm lying, of course. But interesting to register points of strongest divergence between LW and conventional morality (JenniferRM's post, I mean; jsalvatier's is fine and interesting).

Comment by tenlier on Brainstorming additional AI risk reduction ideas · 2012-06-16T15:15:16.975Z · LW · GW

"Indeed, I cannot think of any high school scholarship that is used primarily to collect information for the sponsoring organization (is this really the case?). However, there is good reason for this – no one else is interested in reaching the same group of high school students. SI is the only organization I know of who wants to reach high school students for their research group."

I find this place persistently surprising, which is nice. Try to imagine what you would think if a religious organization did this and how you would feel. It's alright to hold a scholarship to encourage kids to be interested in a topic; not so to garner information for your own purposes, unless that is incredibly clear upfront. Very Gwernian.

Comment by tenlier on Brief response to kalla724 on preserving personal identity with vitrification · 2012-06-16T15:09:38.641Z · LW · GW

It's interesting this post is being upvoted. It reads like jabber to me. I have little idea what it is trying to argue. Stuff like:

"It's likely that when the protein complex undergoes autophosphorylation, other changes occur in the cell as well. If this led to changes in the cell's epigenome, which is very common, and the structure of the epigenome is retained by the cryopreservation, then the cell's epigenome could allow reverse inference of the state of its ion channels. "

Is either meaningless or flawed, probably both. The whole post reminds me of the idea on LessWrong that one might as well just assume Omega will reconstruct you based on trace evidence in the physical world.

Comment by tenlier on Have you changed your mind lately? On what? · 2012-06-05T12:28:33.590Z · LW · GW

The day to day life bit is irrelevant. The volitional aspect is not at all. Take the exact sacrifice you described but make it non-volitional. "torturing yourself working at a startup" becomes slavery when non-volitional. Presumably you find that trade-off less acceptable.

The volitional aspect is the key difference. The fact that your life is rich with examples of volitional sacrifice and poor in examples of forced sacrifice of this type is not some magic result that has something to do with how we treat "real" examples in day to day life. It is entirely because "we" (humans) have tried to minimize the non-volitional sacrifices because they are what we find immoral!

Comment by tenlier on Have you changed your mind lately? On what? · 2012-06-05T06:18:51.568Z · LW · GW

Was there any reason to think I didn't understand exactly what you said the first time? You agree with me and then restate. Fine, but pointless. Additionally, unimaginative re: potential value of torture. Defending lack of imagination in that statement by claiming torture defined in part by primary intent would be inconsistent.

Comment by tenlier on Have you changed your mind lately? On what? · 2012-06-05T04:39:43.993Z · LW · GW

Manipulative phrasing. Of course it will always seem worth torturing yourself, yadda yadda, when framed as a volitional sacrifice. Does your intuition equally answer yes when asked if it is worth killing somebody to do etc., etc.? Doubt it (and not a deontological phrasing issue).

Comment by tenlier on Have you changed your mind lately? On what? · 2012-06-05T04:25:24.010Z · LW · GW

Do you really have that preference?

For example, if all but one of trillions of humans were being tortured and had dust specks, would you feel like trading the torture-free human's freedom from torture for the removal of specks from the tortured? If so, then you are just showing a fairly usual preference (inequality is bad!), which is probably fine as an approximation of stuff you could formalize consequentially.

But that's just an example. Often there's some context in which your moral intuition is reversed, which is a useful probe.

(usual caveat: haven't read the sequences)

Topic for discussion: Less Wrongians are frequentists to a greater extent than most folk who are intuitively Bayesian. The phrase "I must update on" is half code for (p < 0.05) and half signalling, since presumably you're "updating" a lot, just like regular humans.

Comment by tenlier on Holden Karnofsky's Singularity Institute critique: Is SI the kind of organization we want to bet on? · 2012-05-12T02:52:54.207Z · LW · GW

Thanks. That's very interesting to me, even as an anecdote. I've heard the opposite here too; that's why I made it a normative statement ("everyone already should know"). Between the missing money and the publication record, I can't imagine what would make SI look worth investing in to me. Yes, that would sometimes lead you astray. But even posts like, oh:

are pretty much the norm around here (I picked that since Luke helped write it). Basically, an insufficient attempt to engage with the conventional wisdom.

How much should you like this place just because they're hardliners on issues you believe in? (generic you). There are lots of compatibilists, materialists, consequentialists, MWIers, or whatever in the world. Less Wrong seems unusual in being rather hardline on these issues, but that's usually more a sign that people have turned it into a social issue than a matter of intellectual conviction (or better, competence). Anyway, I've probably become inappropriately off topic for this page; I'm just rambling. To say at least something on topic: a few months back there was an issue of Nature talking about philanthropy in science (a cover article and a few other pieces, as I recall); easily searchable, I'm sure, and it may have some relevance (both as SI tries to get money and as it tries to "commission" pieces).

Comment by tenlier on Holden Karnofsky's Singularity Institute critique: Is SI the kind of organization we want to bet on? · 2012-05-11T20:44:59.425Z · LW · GW

Sorry, I'm not quite understanding your first paragraph. The subsequent piece I agree with completely, and I think it applies to a lot of SI activities in principle (even if they're not looking for small donors). The same idea could roughly guide their outlook on "academic outreach", except it's a donation of time rather than money. For example, gaining credibility from a few big names is probably a bad idea, as is trying to play the game of seeking credibility.

On the first paragraph, apologies for repeating, but just clarifying: I'm assuming that everyone already should know that even if you're sympathetic to SI goals, it's a bad idea to donate to them. Maybe it was a useful article for the SI to better understand why people might feel that way. I'm just saying I don't think it was, strictly speaking, "persuasive" to anyone. Except, I was initially somewhat persuaded that Karnofsky is worth listening to in evaluating SI. I'm just claiming, I guess, that I was way more persuaded that it was worth listening to Karnofsky on this topic than I should have been, since I think everything he says is too obvious to imply shared values with me. So, in a few years, if he changes his mind on SI, I've now decided that I won't weight that as very important in my own evaluation. I don't mean that as a criticism of Karnofsky (his write-up was obviously fantastic). I'm just explicating my own thought process.

Comment by tenlier on Holden Karnofsky's Singularity Institute critique: Is SI the kind of organization we want to bet on? · 2012-05-11T19:20:28.500Z · LW · GW

That was half my initial reaction as well; the other half:

The critique mostly consists of points that are pretty persistently bubbling beneath the surface around here, and get brought up quite a bit. Don't most people regard this as a great summary of their current views, rather than persuasive in any way? In fact, the only effect I suspect this had on most people's thinking was to increase their willingness to listen to Karnofsky in the future if he should change his mind. Since the post is basically directed at LessWrongians as an audience, I find all of that a bit suspicious (not in the sense that he's doing this deliberately).

Also, the only part of the post that interested me was this one (about the SI as an organization); the other stuff seemed kinda minor - praising with faint damns, relative to true outsiders, and so perhaps slightly misleading to LessWrongians.

Reading this (at least a year old, I believe) makes me devalue current protestations:

I just assume people are pretty good at manipulating my opinion, and honestly, that often seems more the focus in the "academic outreach". People who think about signalling (outside of economics, evolution, etc.) are usually signalling bad stuff. Paying 20K or whatever to have someone write a review of your theory is also really, really interesting, as apparently SI is doing (it's on the balance sheet somewhere for that "commissioned" review; I forget the exact amount). Working on a dozen papers on which you might only have 5% involvement (again: or whatever) is also really, really interesting. I can't evaluate SI, but they smell totally unlike scientists and quite like philosophers. Which is probably true and only problematic inasmuch as EY thinks other philosophy is mostly bunk. The closest thing to actually performed science on LW I've seen was that bit about rates of evolution, which was rather scatterbrained. If anyone can point me to some science, I'd be grateful. The old joke about Comp Sci (neither about Comp nor Sci) need not apply.

Comment by tenlier on The ethics of breaking belief · 2012-05-09T20:54:45.549Z · LW · GW

I'm not sure I agree. I think my behavior, even if treated favorably by the community, will likely not weaken the norm against multi-voting. Karma seems a much less useful signal here than in communities where the prohibitions against "near" behavior are less strict. That's just from observation, although I think an argument could be made that if a signal really is easy to counterfeit, it's probably less counterfeited when that fact is generally known (no easy opinion arbitrage). But certainly not worth arguing.

Comment by tenlier on The ethics of breaking belief · 2012-05-09T17:41:48.598Z · LW · GW

It's funny to refer to something as a "power" when it's an extra 10 seconds of work which anybody could have already engaged in without advertising as blatantly as I have. My advertising has also been false.