Comment by jbash on Who owns OpenAI's new language model? · 2019-02-15T13:08:20.578Z · score: 3 (2 votes) · LW · GW

The US has criminal copyright law. I thought it was recent, but Wikipedia says it's actually been around since 1897.

The probability of the government trying to USE it in this kind of case is epsilon over ten, though. And as you say, they'd probably lose if they did, because the neural network isn't really derivative of the Web pages, and even if it is it's probably fair use.

Comment by jbash on Who owns OpenAI's new language model? · 2019-02-15T13:05:09.424Z · score: 12 (4 votes) · LW · GW

So, there are several things that might be "property" here.

The method is probably patentable. The trained network is definitely NOT copyrightable by the clear intent of the copyright law, because it's obvious to any honest interpreter that it's nothing like a "creative work". However, based on their track record, if you took it to the Federal Circuit, they'd probably be willing to pervert the meaning of "creative work" to let somebody enforce a copyright on it based on curation of the training data or something equally specious. They may already have done that in some analogous case.

Property rights in patents or copyrights are separate from property rights in actual devices, copies of networks, or whatever. I can own a book without owning the copyright in the book. And if you own the copyright, that does NOT allow you to demand that I give you my copy of the book, even if you don't have a copy yourself.

The nuclear bomb case would involve a "patent secrecy order"... a power which was in fact created exactly for nuclear bombs. I don't think there's such a thing as a "copyright secrecy order".

They could also probably forcibly buy any patent (yes, under eminent domain). Eminent domain is NOT a "requisition", because eminent domain in the US requires compensation as a constitutional matter. I also don't know if they have any processes in place for exercising eminent domain in the case under discussion, and I doubt they do. Some particular agency has to be authorized and funded to exercise a power like that in any given case.

Even if the government forcibly bought a patent or copyright, that by itself would not entitle the government to be given a copy of the subject matter. I don't know if bits, as opposed to the media they were on, would even be "property".

... but if you REALLY want to go there, well, obviously the US Government, taken as a whole, could pass a law giving itself the power to force OpenAI to hand over copies, delete its own copies, relinquish any patent or copyright rights (possibly with a requirement for money compensation for those last two), stay out of Ireland, and whatever else.

What I'm really puzzled by is the extreme counterfactuality of the question. It just doesn't seem to have any connection at all with how people or institutions actually behave. A neural network that can sound like somebody isn't a nuclear bomb, and the political dynamics around it are completely different.

The upper echelons of the US Government won't notice it at all.

If some researcher working for the US Government (or any government) wants a copy of the network for some reason, that person will just send a polite email request to OpenAI, and OpenAI will probably hand it over without worrying about it. If OpenAI doesn't, the question will probably die there. From a practical point of view, that researcher won't be able to make it enough of a priority for the government to even stir itself to figure out which powers might apply.

If some agency of the government suggests to OpenAI that it never release the network to anybody, and gives any kind of meaningful reason, then OpenAI will probably take that into account and comply. That's extremely unlikely, though.

Some government agency trying to actually force OpenAI not to release is farfetched enough not to be worth worrying about, but it would probably come down to timing; OpenAI might be able to release before the government could create any binding order preventing it.

Comment by jbash on Who owns OpenAI's new language model? · 2019-02-15T01:43:06.555Z · score: 11 (4 votes) · LW · GW

The "requisition" question isn't well formed. The US Government has various powers to demand various specific information from various specific people via various specific processes in various specific circumstances for various specific purposes, mostly but not all to do with law enforcement. I guess one or more of those could somehow apply, although the only one I can think of is a general Congressional fact-finding power.

The US Government has no general power to "requisition" anything from anybody. That's just not a thing at all. "Requisition" doesn't mean anything here.

However, if the US Government asked for it, I suspect OpenAI would be happy to hand it over voluntarily. They'd probably also give it to anybody else they thought of as "reputable". What would make you think that they'd want to resist such a request to begin with?

Comment by jbash on The "Post-Singularity Social Contract" and Bostrom's "Vulnerable World Hypothesis" · 2018-11-26T17:09:23.093Z · score: 4 (4 votes) · LW · GW

I don't believe that present-day synthetic biology is anywhere close to being able to create "total destruction" or "almost certain annihilation"... and in fact it may never get there without more-than-human AI.

If you made super-nasty smallpox and spread it all over the place, it would suck, for sure, but it wouldn't kill everybody and it wouldn't destroy "technical civilization", either. Human institutions have survived that sort of thing. The human species has survived much worse. Humans have recovered from really serious population bottlenecks.

Even if it were easy to create any genome you wanted and put it into a functioning organism, nobody knows how to design it. Biology is monstrously complicated. It's not even clear that a human can hold enough of that complexity in mind to ever design a weapon of total destruction. Such a weapon might not even be possible; there are always going to be oddball cases where it doesn't work.

For that matter, you're not even going to be creating super smallpox in your garage, even if you get the synthesis tools. An expert could maybe identify some changes that might make a pathogen worse, but they'd have to test it to be sure. On human subjects. Many of them. Which is conspicuous and expensive and beyond the reach of the garage operator.

I actually can't think of anything already built or specifically projected that you could use to reliably kill everybody or even destroy civilization... except maybe for the AI. Nanotech without AI wouldn't do it. And even the AI involves a lot of unknowns.

Comment by jbash on The Vulnerable World Hypothesis (by Bostrom) · 2018-11-13T15:33:19.131Z · score: 5 (6 votes) · LW · GW

I'm pretty sure that the semi-anarchic default condition is a stable equilibrium. As soon as any power structure started to coalesce, everybody who wasn't a part of it would feel threatened by it and attack it. Once having neutralized the threat, any coalitions that had formed against it would themselves self-destruct in internal mistrust. If it's even possible to leave an equilibrium like that, you definitely can't do it slowly.

On the other hand, the post-semi-anarchic regime is probably fairly unstable... anybody who gets out from under it a little bit can use that to get out from under it more. And many actors have incentives to do so. Maybe you could stay in it, but only if you spent a lot of its enforcement power on the meta-problem of keeping it going.

My views on this may be colored by the fact that Bostrom's vision for the post-semi-anarchic condition in itself sounds like a catastrophic outcome to me, not least because it seems obvious to me that it would immediately be used way, way beyond any kind of catastrophic risk management, to absolutely enforce and entrench any and every social norm that could get 51 percent support, and to absolutely suppress all dissent. YMMV on that part, but anyway I don't think my view of whether it's possible is that strongly determined by my view that it's undesirable.

Comment by jbash on The Vulnerable World Hypothesis (by Bostrom) · 2018-11-07T16:12:32.699Z · score: 6 (5 votes) · LW · GW

It seems to me that this is the crux:

A key concern in the present context is whether the consequences of civilization continuing in the current semi-anarchic default condition are catastrophic enough to outweigh reasonable objections to the drastic developments that would be required to exit this condition. [Emphasis in original]

That only matters if you're in a position to enact the "drastic developments" (and to do so without incurring some equally bad catastrophe in the process). If you're not in a position to make something happen, then it doesn't matter whether it's the right thing to do or not.

Where's there any sign that any person or group has or ever will have the slightest chance of being able to cause the world to exit the "semi-anarchic default condition", or the slightest idea of how to go about doing so? I've never seen any. So what's the point in talking about it?

Comment by jbash on Implementations of immortality · 2018-11-01T22:04:23.810Z · score: -4 (4 votes) · LW · GW

How, other than by outright mind control, would you expect to call a "mythos" into being?

You can't make other people like what you like. You can't remake the pattern of everybody else's life for your personal comfort, or for the comfort of whatever minority happens to think enough like you do. If you try, you will engender violent resistance more or less in direct proportion to your actual chance of succeeding.

There's not going to be just one "other side", either. You can't negotiate with anybody and come up with a compromise proposal. There are 7.6 billion views of what utopia is, and the number is rising.

So how about if we stick with a cultural norm against trying to force them all into a mold? Total warfare isn't very longevity-promoting.

Comment by jbash on Policy Approval · 2018-07-01T20:25:11.346Z · score: 1 (1 votes) · LW · GW

"Ignoring issues of irrationality or bounded rationality, what an agent wants out of a helper agent is that the helper agent does preferred things."

I don't want a "helper agent" to do what I think I'd prefer it to do. I mean, I REALLY don't want that or anything like that.

If I wanted that, I could just set it up to follow orders to the best of its understanding, and then order it around. The whole point is to make use of the fact that it's smarter than I am and can achieve outcomes I can't foresee in ways I can't think up.

What I intuitively want it to do is what makes me happiest with the state of the world after it's done it. That particular formulation may get hairy with cases where its actions alter my preferences, but just abandoning every possible improvement in favor of my pre-existing guesses about desirable actions isn't a satisfactory answer.

Comment by jbash on The end of public transportation. The future of public transportation. · 2018-02-11T01:08:54.483Z · score: 2 (1 votes) · LW · GW
The only remaining case for awful advertising that I can see just collapses to a case for arbitrary extortion... which is just... okay you don't believe there will be open code.

So, if the advertising is there by default, that means that the advertiser is already "extorting" my attention, and has already shown a willingness to extort money from me to make the advertising go away.

More correctly, the advertiser already seems to see my attention as their property, rather than mine. If that's how they view it, the price of selling it back to me isn't going to be determined by what they make off the ads. It's going to be determined by how much they think I will pay to be left alone, at least unless I have some other leverage. If you want to call that extortion, then, fine, I believe there'll be extortion. I don't believe they'll think of themselves as engaging in extortion, though.

How would you expect to "fight it"?

Comment by jbash on The end of public transportation. The future of public transportation. · 2018-02-11T01:02:51.367Z · score: 2 (1 votes) · LW · GW
All that's needed is a good, low-friction payment platform. We don't have one, right now, so we still see ad-funding everywhere. If BAT takes off, it'll end.

I don't know what BAT is, but I do know that we all wanted micropayments instead of an advertising-supported Internet in 1990.

Even if you have a good micropayment protocol it can be hard to get everybody enrolled. Remember, you have to enroll everybody you'd see on a city bus. That means the 12 year old kid, the homeless guy, the 85-year-old who already has trouble every time they change the coin till, and even the crazy drunk. They all have to be able to figure it out, they all have to be able to get an account, they all have to be able to fund stuff, etc.

Comment by jbash on The end of public transportation. The future of public transportation. · 2018-02-11T00:58:31.349Z · score: 3 (2 votes) · LW · GW
I can't work with this resignation to the code not being public. That would be an awful awful outcome. The cars wouldn't be able to coordinate, they'd just end up having to drive mostly like humans.

Sure they could coordinate. They'd use the ISO 27B-6 Car Coordination Protocol, which would be negotiated in a mind bogglingly boring and bureaucratic process by the representatives of the various car companies. Those companies would have big bakeoffs where they tested against each other's implementations. They would probably even hire auditors to check one another's implementations.

You could buy a copy of 27B-6 for 250 dollars or so.

The IP network we're talking over uses public protocols. Some specs are free, but you have to pay for others; you couldn't build a smart phone (legally, and including building the chips that go into it) without spending thousands of dollars for copies of standards. And a ton of the products involved have private code.

Comment by jbash on The end of public transportation. The future of public transportation. · 2018-02-11T00:53:18.123Z · score: 6 (2 votes) · LW · GW
Are people biologically capable of sitting in a theatre in front of an image of an oncoming train?

The whole reason you'd put an image of an oncoming train in a movie would be that it does stress the audience. A little stress can be fun.

I'm not so sure that people would be very comfortable with cows if cows were in the habit of running nearly silently out of nowhere and passing them 2 meters away at 40 km/h. I think after one cow did that in front of me and another one did it behind me a second or two later, while a stream of cows whooshed by on the cross street, I'd start to get pretty nervous about cows. I guess maybe I'd get used to it if I'd had years of experience to show me that cows unerringly avoided me. But I wouldn't bet too much on it.

But that's neither here nor there; I don't think the vehicles could reliably miss the pedestrians to begin with, and you seem to agree.

Comment by jbash on The end of public transportation. The future of public transportation. · 2018-02-11T00:45:59.464Z · score: 7 (3 votes) · LW · GW
Since the record tells us how long they were with the car, we know it wasn't them who applied the shit

Right. That's why I only said the shit-smearing would happen if the record-making were somehow avoided. Assuming you can actually keep track of who's using it, you can deter vandalism most of the time.

You might have trouble with out-of-towners or people with nothing to lose, though. And let's not make it too simple; it's a BIG DEAL to ban somebody from the only available form of transportation... that's something you wouldn't want to see done without due process.

Comment by jbash on The end of public transportation. The future of public transportation. · 2018-02-10T15:50:41.559Z · score: 4 (2 votes) · LW · GW

I see. These vehicles can "weave through each other". Great. Can they weave through pedestrians? If they can, are pedestrians biologically capable of avoiding massive stress from being "woven through" by large fast-moving objects?

If it did work, or for any other fleet system, here are some further predictions:

The code won't be public. People are routinely thrown in prison right now based on output from non-public code.

It will be basically impossible to go anywhere without creating a record. These records will be kept basically forever and will be accessible to more or less any powerful institution... but not to YOU, Citizen. This will most likely be used to profile you for various purposes, many of which you probably wouldn't see as in your interests. If this is somehow avoided, then the interiors of the shared vehicles will literally be smeared with shit.

Those interiors will be utilitarian (perhaps able to survive being hosed off...) and not especially comfortable.

While you're whizzing along on low friction bearings, advertising will be blaring at you. If it's possible to shut it up at all, it will cost you.

Comment by jbash on Claim: Scenario planning is preferable to quantitative forecasting for understanding and coping with AI progress · 2014-07-25T12:26:41.308Z · score: 5 (1 votes) · LW · GW

That seems like a false choice. Why wouldn't you do both?

I think you're more convincing in objecting to the quantitative approach than in defending the scenario approach. Maybe neither one is any good. So another alternative would be to do neither, and avoid the risk of convincing yourself you understand what's going to happen when you really don't.

You can still assume the things you're nearly certain of, if that's useful.

... and how do you plan to use this understanding, assuming you get any? Is it actually going to affect your actions?

Comment by jbash on What attracts smart and curious young people to physics? Should this be encouraged? · 2014-03-13T18:00:12.017Z · score: 14 (16 votes) · LW · GW

I personally find physics attractive because it's as close as you can get to a fundamental, first-principles understanding of how the Universe works. That feels like a terminal goal. Maybe it's secretly a social prestige goal, but it doesn't feel that way.

It seems kind of dark-artsish to try to change people's terminal goals for your own reasons. I'm not saying that's never right, just that it seems like something to perk up and be suspicious of.

And you haven't even said what your reasons actually are. Do you want better allocation of human resources for social goals, or something like that? Then who gets to pick the social goals? Why should anybody have to "justify the attraction" to anybody else?

Or is it individual? Do you think that studying physics will make people less happy than studying economics because they'll get more chances to apply economics?

... and what's this "real world" you're talking about? I get a nervous feeling like there's something coiled up inside that concept waiting to strike.

Comment by jbash on Tell Culture · 2014-01-19T02:38:17.945Z · score: 16 (16 votes) · LW · GW

This may be getting into private-message territory. I haven't paid enough attention to the norms to be sure. But it's easy to not read these...

your comment makes me think that avoiding ambiguity and not appropriating is not enough and perhaps even using it among ourselves is to be avoided, e.g. for the benefit of those 'looking in from the outside' who might be preemptively alienated.

I am, perhaps, "looking in from the outside". I have a lot of history and context with the ideas here, and with the canonical texts, and even with a few of the people, but I'm an extreme "non-joiner". In fact, I tend toward suspicion and distaste for the whole idea of investing my identity in a community, especially one with a label and a relatively clear boundary. I have only a partial model of where that attitude comes from, but I do know that I seem to retain an "outsider" reaction for a lot longer than other people might.

I may be hypersensitive. But I think it's more likely that I'm a not-horrible model of how a completely naive outsider might react to some of these things, even though I can express it in a Less-Wrongish vocabulary.

And of course these posts are indeed visible to people who are only vaguely exploring, or only thinking about "joining", for whatever value of "joining". This is still outreach, right?

Perhaps more accurate would've been for me to say that your original argument could have been applied to the LW-rationality approach generally, or to the bias-correcting approach based on the heuristics and biases literature.

I agree that there are a ton of things that people do all the time that don't seem very useful. If I'm not going to accept all of them, I'd better have a good reason to think this particular social-interaction issue is different.

My reason is that I don't think that epistemic rationality, or even extreme instrumental rationality, has been a critical survival skill for people until very recently (and maybe it still isn't). It's useful, but it doesn't overwhelm everything else, and indeed it seems very likely that the heuristics and biases themselves have clear advantages in many historical contexts.

On the other hand, social cooperation, and especially avoiding constant overt conflict with members of one's own society, are pretty crucial if you want to survive as a human. So I tend to expect institutions and adaptations in that area to be pretty fine-tuned and effective. I don't like a lot of the ways people behave socially, but they seem to work.

Not that strong, I know, but then I haven't seen anything that strong on any side of this.

I reserve some fair probability that there were clear differences in type between the obnoxious attempts and the successful ones, such that your experiences would not be very strong reference class evidence for e.g. Telling.

I don't think I can provide detailed descriptions, but it is definitely true that there are meaningful differences, even major differences, between most of the experiences I've had and the example approach.

The thing is that, if presented with the example approach in real life, I don't think I'd notice those differences. I think I would react heuristically to the unexpected disclosure of internal state, and provisionally put the person into the "annoying/broken" bucket before I got that far.

Then, if I weren't being very, very careful (which I can't necessarily be in all circumstances), the promise that "everything will be OK if you say no" wouldn't be believed, and might even be interpreted as confirmation that the person was going into passive-aggressive mode, and was indeed annoying/broken.

And in the particular example given, I'm being asked to have this presumptively-broken person stay in my house overnight, which is going to make me more wary.

If I were in perfect form and not distracted, I might catch other cues and escape the heuristic, but I think it would be my likely reaction most of the time.

YMMV if, for example, I have prior information that the person is an honest Teller, rather than somebody who incorrectly believes themselves to be a Teller or is just outright dishonest.

I don't have as much discipline in not applying heuristics, or in turning them off at will, as many people here. On the other hand, I have more such discipline than a lot of people... probably including some people here, and definitely including people I suspect one might wish to avoid putting off of the community, should they come exploring.

I also retain the possibility that your reaction to the approaches you disliked was overblown, though my credence for that is far lower now than it was, based on your comment and your claim to be less fazed than average by nonconventional approaches.

I could also be wrong about being less fazed. I know that many nonconventional approaches don't bother me even though they seem to bother others. That doesn't mean that I'm not unknowingly hypersensitive to these nonconventional approaches. I haven't calibrated myself systematically or overtly on them, and they do tickle personal boundary issues where I'm especially likely to be more sensitive than normal.

Have you also accounted for the potential for the negative communication approaches to stick in your mind more than ones you accepted or adopted?

Sure. That's one reason I believe I'd react negatively to the example approach. I haven't been talking about the right way to react. I've been predicting how I likely would react (and saying that I think others might react the same way).

(1) What's your general take on the picture painted by

It rings true to me in a lot of ways. I usually say that I miss the Bay Area's "geekosphere". I miss what is cheesily called the "sense of possibility". I miss the easy availability of tools and resources. I miss the critical mass of people who really want to do cool, new things, whether they want to change the world, or make something beautiful, or even just make a bunch of money they're not sure how to spend. I miss the number of people who really are willing to look hard at how things work, and then change them... in the large if need be. Now that I have a kid, I really miss the wide availability of approaches to education that don't feel so much like "shove 'em in the box and make 'em like it".

On the other hand, that description sounds a little starry-eyed. I've had a bit too much contact with the "hippies" to think they're really always about peace and love, too much contact with the programmers to believe they're nearly as smart as they think they are, and too much contact with the entrepreneurs for "competent" to be the first description that comes to mind. I've also seen some people use "abandoning hangups", or "social efficiency", or whatever, as an excuse to treat others callously. You get a lot of that in the poly community, for example.

I might have missed those issues, or ignored them, 20 or 30 years ago. I might have said things about "wacky leftism" back then, too, things I wouldn't say nearly so strongly now that I know a bit more about how all the parts fit together. It's not that the leftism isn't wacky, it's that the capitalism is wacky, too.

I have not had direct contact with the "cooked" LW-rationalist community, so I can't speak to that. I was in only-somewhat-related circles, I was never very, very social, and I left the area almost 7 years ago after largely "disappearing" from those circles a year or two before that. So I can't confirm or deny what it says about that particular community.

(2) Why do you no longer spend much time in some of the communities you used to? And if you moved away from California, why?

The usual stuff: life intervened. I got busy with other stuff. I went back to work... in the Bay Area or in tech, that can be pretty consuming, and it turns out that it's harder to take the "changing the world" jobs when you're supporting other people. I got divorced. I got depressed. I had personal and romantic ties in Montreal, so I moved... and then I built a life here, with its own rewards and its own obligations and its own web of connections to people who also have reasons to be here. Moving back would be hard now.

But I do still miss it a lot.

Comment by jbash on Tell Culture · 2014-01-18T22:59:15.503Z · score: 9 (5 votes) · LW · GW

I don't have sociological statistics on that, and will have to retract "almost every culture" as a statement of fact.

My general impression is that the US and Western Europe are about as "Ask" as it gets, and in a lot of other cultures you're pretty unlikely to find any "Ask families" at all. I do know that "Offer" exists.

Comment by jbash on Tell Culture · 2014-01-18T22:30:16.891Z · score: 28 (26 votes) · LW · GW

So, as long as we're Telling, I'm going to talk about my own internal state. I think at least some aspects of my reactions may be shared by other people, including people whom readers of this thread may be interested in influencing or interacting with. Anybody who's not interested in this should definitely stop reading. I promise I won't be offended. :-)

Although I still think I had a point, if I look back at why I really wrote my response, I think that point was mostly "cover" for a less acceptable motivation. I think I really wrote it mostly out of irritation with the way the word "rationalist" was used in the original posting. And I find myself feeling the same way in response to some of your reply.

My first reaction is to see it as an ugly form of appropriation to take the word "rationalist" to mean "person identified with the Less Wrong community or associated communities, especially if said member uses jargon A, B, and C, and subscribes to only-tangentially-rational norms X, Y, and Z". Especially when it's coupled with signals of group superiority like "don't try this with Muggles" (used to be "mundanes"). It provokes an immediate "screw you" reaction.

I expressed my irritation only as hopefully-veiled but still obnoxious snark (for which I am sorry), but it was there.

The Bay Area, and presumably New York and the world, contain people who are committed to rationality by almost any definition, yet who've never read the Sequences, probably wouldn't want to, and probably have no great interest in the community I think you mean. Some of them have pretty high profiles, too. Making a land grab for the word "rationalist" probably doesn't make most of those people want into the club, and neither does name calling. Both seem more likely to make them think the club is composed of jerks.

On another, but perhaps related, front...

By my last paragraph's description of my reaction, I didn't mean to write off the "Tell" suggestion completely as a suggestion about what social norms should be, whether in a subculture or in The Wider Culture(TM). I'm pretty skeptical about the idea, but I wasn't trying to be completely dismissive there.

In that part, I was, perhaps amid more snark, trying to warn about a possibly non-obvious reaction. What I was trying to describe was how I, as an individual, actually envision myself reacting to the stated tactic for introducing the "Tell" approach.

I used to spend a fair amount of time, in the Bay Area and elsewhere, with communities that overlap with, and/or could be seen as antecedents of, the Less Wrong/CFAR/MIRI "rationalists". In those communities, I met a lot of people who had unconventional approaches to interacting with others. I often found some of those people annoying and aversive. That's true even though I'm no grandmaster of "normal" social approaches myself, and even though I suspect that I am far less sensitive to deviations from them than the average bear.

What I would truly expect to go through my mind would be something like "Oh, no, yet another one of those people who think removing all filters will improve society, and want me to be part of the grand experiment"... or possibly "Oh, no, yet another one of those people who don't realize that filters are expected at all", or, worse "Oh, no, one of those people who think they can use some kind of philosophical gobbledygook to justify inconsiderate passive-aggressive pushiness". Because I've met all of those more than once.

That would cause discomfort, and in the future I'd tend to avoid the source of that discomfort. What I was trying to point out was that the strategy might appear to work, but still backfire, because the immediate feedback from the interlocutor wouldn't necessarily be honest.

Maybe I'd get over it, but maybe I wouldn't, too.

For the record on your first paragraph, I'm really, really skeptical of Crocker's rules working over the long term, but I admit I've never tried them. I don't think the rest of the things you mention are similar.

I don't know of any common social norm against, say, tabooing words, or asking about anticipated experiences. I think you can use those sorts of methods with more or less anybody. You may run into resistance or anger if somebody thinks you're trying to pull a nasty rhetorical trick, but you can defuse that if you take the time to cross the inferential distance gently, and start on the project before you're in the middle of a heated conflict where the other person will reject absolutely anything you suggest.

For that matter, you can often just quietly stop using a word without saying anything at all about "tabooing" it.

Likewise, I don't think most people mind "I'm confused"... unless it's obviously dishonest and meant to provide plausibly deniable cover to some following snark.

On the other hand, I do see lots of social norms around what tactics are and are not OK for getting somebody else to do something for you, and also around how much of your internal state you share at what stages of intimacy. So I think this is different in kind.

And of course I may also have completely misread your comment...

[On edit, cleaned up a couple of proofreading errors]

Comment by jbash on Tell Culture · 2014-01-18T14:16:21.056Z · score: 28 (34 votes) · LW · GW

Ya know, after thousands of years of trying it out in all kinds of environments, it seems as though almost every culture on Earth settles on "Guess", with maybe a touch of "Ask" in the more overbearing ones. A common modification to "Guess" is "Offer", where the mere mention of a possible opportunity to help out is treated as creating almost a positive obligation to notice the need and make a spontaneous offer.

From where I sit, that's pretty strong evidence that "Guess" or maybe "Offer" is more suited to collective human nature. There's a pretty heavy burden of proof on any "rationalist" who wants to change it.

It's also not so obvious that you can effectively change conventions like these by just starting in and asking others to change. If you tried your "developing trust" tactic with me, I'd probably play along to avoid conflict on one occasion, and avoid YOU after that.

Comment by jbash on 2013 Less Wrong Census/Survey · 2013-11-22T15:54:24.555Z · score: 13 (13 votes) · LW · GW

You're right; my error. Sorry.

Comment by jbash on 2013 Less Wrong Census/Survey · 2013-11-22T14:36:17.296Z · score: 8 (14 votes) · LW · GW

Not taken, and will not be taken as long as it demands that I log in with Google (or Facebook, or anything else other than maybe a local Less Wrong account).

Comment by jbash on New report: Intelligence Explosion Microeconomics · 2013-04-29T20:58:22.814Z · score: 8 (16 votes) · LW · GW

The first four or five paragraphs were just bloviation, and I stopped there.

I know you think you can get away with it in "popular education", but if you want to be taken seriously in technical discourse, then you need to rein in the pontification.

Comment by jbash on Ritual 2012: A Moment of Darkness · 2012-12-28T14:51:39.430Z · score: 2 (2 votes) · LW · GW

Does anybody in your group have children? It doesn't seem to me that what you have in your ritual book would serve them very well. Even ignoring any possible desire to "recruit" the children themselves, that means that adults who have kids will have an incentive to leave the community.

Maybe it's just that I personally was raised with zero attendance at anything remotely that structured, but it's hard for me to imagine kids sitting through all those highly abstract stories, many of which rely on lots of background concepts, and being anything but bored stiff (and probably annoyed). Am I wrong?

Even if they could sit through it happily, there's the question of whether having them chant things they don't understand respects their agency or promotes their own growth toward reasoned examination of the world and their beliefs about it. Especially when, as somebody else has mentioned, the ritual includes stuff that's not just "rationalism". Could there be more to help them understand how to get to the concepts, so that they could have a reasonable claim not to just be repeating "scripture"?

Or am I just worrying about something unreal?

Comment by jbash on The Useful Idea of Truth · 2012-10-02T20:42:35.978Z · score: 7 (9 votes) · LW · GW

Actually, "relativist" isn't a lot better, because it's still pretty clear who's meant, and it's a very charged term in some political discussions.

I think it's a bad rhetorical strategy to mock the cognitive style of a particular academic discipline, or of a particular school within a discipline, even if you know all about that discipline. That's not because you'll convert people who are steeped in the way of thinking you're trying to counter, but because you can end up pushing the "undecided" to their side.

Let's say we have a bright young student who is, to oversimplify, on the cusp of going down either the path of Good ("parsimony counts", "there's an objective way to determine what hypothesis is simpler", "it looks like there's an exterior, shared reality", "we can improve our maps"...) or the path of Evil ("all concepts start out equal", "we can make arbitrary maps", "truth is determined by politics" ...). Well, that bright young student isn't a perfectly rational being. If the advocates for Good look like they're being jerks and mocking the advocates for Evil, that may be enough to push that person down the path of Evil.

Wulky Wilkinson is the mind killer. Or so it seems to me.