Posts

Comments

Comment by jbash on What should I teach to my future daughter? · 2020-06-19T16:25:33.782Z · score: 12 (5 votes) · LW · GW

I suggest that you relax a bit. She's not going to be learning programming or anything like it for years, regardless. Newborns spend months to years just learning how to use their own limbs and process the information from their own senses.

And I've never heard any evidence at all that, say, programming, is particularly important to learn all that early in life. Manual/mental skills like musical performance seem to turn out best if started early (but not necessarily as a toddler!). Languages, too. I could even imagine that critical logical thinking would benefit from early exposure. But programming? That's something you can figure out.

In the long run, meta-skills are important... things that let you decide for yourself which skills to learn and learn them on your own. And things that let you evaluate both the truth and the usefulness of all the stuff that everybody else is trying to teach you. Beyond that, the more flexible and generalizable the better.

But the biggest thing is this: she's going to be her own person. By the time she's old enough to be taught the kinds of hands-on skills you're talking about, she's going to have her own ideas about what she wants to learn. "This civilization" isn't some kind of apocalyptic dystopia, and you don't know "what is coming". In all probability, it will all add up to normality. In all probability, she will muddle through. ... and in all probability, neither you nor anybody here can guess what very specific skills she's going to need. Assuming, that is, that human skills are even relevant at all when she grows up.

Please don't drive her insane by pushing "needed practical skills". Let her enjoy life, and let her learn by doing things that engage her. While you're unlikely to make a monster impact by predicting what she'll need in the future, you will definitely have an impact on her present, and maybe on how she sees learning in general.

Comment by jbash on On “COVID-19 Superspreader Events in 28 Countries: Critical Patterns and Lessons” · 2020-04-29T20:32:19.140Z · score: 3 (6 votes) · LW · GW

Um, direction of airflow, by definition, doesn't affect the ballistic transmission of anything. On the other hand, the longer something hangs in the air, the more it's affected by the direction of airflow, and that applies all the way down to gas molecules.

Singing or breathing hard seems likely to increase droplets of all sizes right down to submicron.

Comment by jbash on How credible is the theory that COVID19 escaped from a Wuhan Lab? · 2020-04-04T15:12:15.165Z · score: 0 (4 votes) · LW · GW

Because it is likely to:

  1. Damage international relations and cooperation in the middle of a pandemic. You have US Senators out there pushing this thing. That's going to offend the Chinese government. At the absolute least, it will distract people from cooperating.
  2. Cause another wave of anti-Asian, and specifically anti-Chinese, racist attacks. Such attacks happened even when everybody thought the whole thing was an accident. If you make them believe it was deliberate (on edit: they will believe this even if the rumor is that it was an accident, and there's still a big problem if they only believe it was careless), they will definitely do it more.

In short, providing oxygen to rumors like this makes them more credible and more available to idiots. Idiots are predictable elements of the world, and you can reasonably anticipate their responses to the conditions you create.

Comment by jbash on How credible is the theory that COVID19 escaped from a Wuhan Lab? · 2020-04-04T02:58:07.445Z · score: -13 (7 votes) · LW · GW
  1. This is not particularly credible.

  2. It's also not particularly important.

  3. Even if it were 100 percent true, it would be what I believe Less Wrong likes to call an "infohazard". Unless you want to literally get people killed, you don't want to spread this stuff.

Comment by jbash on Bogus Exam Questions · 2020-03-28T14:16:48.225Z · score: 14 (5 votes) · LW · GW

Erm, the students are not expected to understand the math, and are not being tested on their understanding of the math. The professor doesn't understand the math either. I mean that there is epsilon chance that any given psychology professor, especially an educational psychology professor, has ever heard the phrase "sigma algebra". If they have, it's because they're math hobbyists, not because it's ever come up in their professional work.

In a psychology course, "runs a multiple regression" means "follows a specific procedure analogous to a computer program". The whole thing is a black box. The decision about when it's valid to use that procedure is made based on various rules of thumb, which are passed around mostly as folklore, and are themselves understood and followed with varying degrees of conscientiousness. The same applies to the question of what the results mean.
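
For what it's worth, here's a minimal sketch of what "running a multiple regression" looks like from that black-box perspective. It's my own illustration in Python using the statsmodels library; the data file and column names are hypothetical.

```python
# A hypothetical "black box" multiple regression, as a psychology student might run it.
import pandas as pd
import statsmodels.api as sm

data = pd.read_csv("study_data.csv")           # one row per participant (hypothetical file)
y = data["test_score"]                         # outcome variable
X = data[["hours_studied", "sleep_hours"]]     # predictors
X = sm.add_constant(X)                         # add the intercept term

model = sm.OLS(y, X).fit()                     # the procedure itself: a black box
print(model.summary())                         # coefficients, p-values, R-squared
```

The rules of thumb about when this is valid (sample size, multicollinearity, residual checks) live entirely outside that procedure, which is exactly the point.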

It's absolutely a valid criticism that people in fields like psychology tend to misapply statistical methods and misunderstand statistical results. They do need an intuitive understanding of what they're doing, good enough to know when they can apply the method and what the results actually show. And it's true that most of them probably don't have that understanding.

On the other hand, it's also true that you don't need to understand the math at a very deep level to use the techniques practically. They don't need to be able to derive the method from first principles, nor to be able to rigorously prove that everything works, nor to recognize pathological corner cases that will literally never be encountered in real applications. Those are unreasonable things to ask. Remember that their goal is to understand psychology, not to understand mathematics.

Students in all kinds of science and engineering are allowed to use instruments that they couldn't build themselves, and practitioners still more so. They're not expected to understand every possible corner-case limitation of those instruments, either. At most they're given some rules for when they can or can't use and rely on an instrument.

It's still a really lame question, though, and the fact that it's asked does show a problem. Nobody seems to be looking for even an intuitive grasp of all the stuff that's lurking in that word "expect".

Comment by jbash on [HPMOR] Harry - example or anti-example? · 2020-02-25T13:27:16.061Z · score: 4 (3 votes) · LW · GW

Chapter 122, paragraph beginning with "And right now, Harry James Potter-Evans-Verres was still a walking catastrophe"... and the stuff immediately preceding and following it. Seems like a pretty direct answer to your question.

Comment by jbash on [Review] On the Chatham House Rule (Ben Pace, Dec 2019) · 2019-12-12T18:47:48.201Z · score: 8 (3 votes) · LW · GW

I don't think Apple is a useful model here at all.

I'm pretty sure secrecy has been key for Apple's ability to control its brand,

Well, Apple thinks so anyway. They may or may not be right, and "control of the brand" may or may not be important anyway. But it is true that Apple can keep secrets to some degree.

and it's not just slowed itself down,

Apple is a unitary organization, though. It has a boundary. It's small enough that you can find the person whose job it is to care about any given issue, and you are unlikely to miss anybody who needs to know. It has well-defined procedures and effective enforcement. Its secrets have a relatively short lifetime of maybe as much as 2 or 3 years.

Anybody who is spying on Apple is likely to be either a lot smaller, or heavily constrained in how they can safely use any secret they get. If I'm at Google and I steal something from Apple, I can't publicize it internally, and in fact I run a very large risk of getting fired or turned in to law enforcement if I tell it to the wrong person internally.

Apple has no adversary with a disproportionate internal communication advantage, at least not with respect to any secrets that come from Apple.

The color of the next iPhone is never going to be as interesting to any adversary as an X-risk-level AI secret. And if, say, MIRI actually has a secret that is X-risk-level, then anybody who steals it, and who's in a position to actually use it, is not likely to feel the least bit constrained by fear of MIRI's retaliation in using it to do whatever X-risky thing they may be doing.

Comment by jbash on [Review] On the Chatham House Rule (Ben Pace, Dec 2019) · 2019-12-12T18:34:39.081Z · score: 38 (7 votes) · LW · GW

MIRI's written about going non-disclosed by default. I expect you to think this is fine and probably good and not too relevant, because it's not (as far as the writeup suggests) an attempt to keep secrets from the US government, and you expect they'd fail at that. Is that right?

No, I think it's probably very counterproductive, depending on what it really means in practice. I wasn't quite sure what the balance was between "We are going to actively try to keep this secret" and "It's taking too much of our time to write all of this up".

On the secrecy side of that, the problem isn't whether or not MIRI's secrecy works (although it probably won't)[1]. The problem is with the cost and impact on their own community from their trying to do it. I'm going to go into that further down this tome.

And OpenAI is attempting to push more careful release practises into the overton window of discussion in the ML communities (my summary is here). [...] For example, there are lots of great researchers in the world that aren't paid by governments, and those people cannot get the ideas [...]

That whole GPT thing was just strange.

OpenAI didn't conceal any of the ideas at all. They held back the full version of the actual trained network, but as I recall they published all of the methods they used to create it. Although a big data blob like the network is relatively easy to keep secret, if your goal is to slow down other research, controlling the network isn't going to be effective at all.

... and I don't think that slowing down follow-on research was their goal. If I remember right, they seemed to be worried that people would abuse the actual network they'd trained. That was indeed unrealistic. I've seen the text from the full network, and played with giving it prompts and seeing what comes out. Frankly, the thing is useless for fooling anybody and wouldn't be worth anybody's time. You could do better by driving a manually created grammar with random numbers, and people already do that.

Treating it like a Big Deal just made OpenAI look grossly out of touch. I wonder how long it took them to get the cherry-picked examples they published when they made their announcement...

So, yes, I thought OpenAI was being unrealistic, although it's not the kind of "romanticization" I had in mind. I just can't figure out what they could have stood to gain by that particular move.

All that said, I don't think I object to "more careful release practices", in the sense of giving a little thought to what you hand out. My objections are more to things like--

  1. Secrecy-by-default, or treating it as cost-free to make something secret. It's impractical to have too many secrets, and tends to dilute your protection for any secrets you actually do truly need. In the specific case of AI risk, I think it also changes the balance of speed between you and your adversaries... for the worse. I'll explain more about that below when I talk about MIRI.

  2. The idea that you can just "not release things", without very strict formal controls and institutional boundaries, and have that actually work in any meaningful way. There seems to be a lot of "illusion of control" thinking going on. Real secrecy is hard, and it gets harder fast if it has to last a long time.

To set the frame for the rest, I'm going to bloviate a bit about how I've seen secrecy work in general.

One of the "secrets of secrecy" is that, at any scale beyond two or three people, it's more about controlling diffusion rates than about creating absolute barriers. Information interesting enough to care about will leak eventually.

You have some amount of control over the diffusion rate within some specific domains, and at their boundaries. Once information breaks out into a domain you do not control, it will spread according to the conditions in that new domain regardless of what you do. When information hits a new community, there's a step change in how fast it propagates.

Which brings up the next not-very-secret secret: I'm wrong to talk about a "diffusion rate". The numbers aren't big enough to smooth out random fluctuations the way they are for molecules. Information tends to move in jumps for lots of reasons. Something may stay "secret" for a really long time just because nobody notices it... and then become big news when it gets to somebody who actively propagates it, or to somebody who sees an implication others didn't. A big part of propagation is the framing and setting; if you pair some information with an explanation of why it matters, and release it into a community with a lot of members who care, it will move much, much faster than if you don't.[2]
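
To make the "jumps" picture slightly more concrete, here's a toy sketch (purely illustrative, with made-up communities and probabilities) of diffusion with a step change when a secret crosses a community boundary:

```python
# Toy illustration of secrecy as leaky diffusion plus step changes.
# All numbers and community names are made up for illustration.
import random

def simulate(days=365, seed=0):
    random.seed(seed)
    # (community size, chance per day that each informed member tells someone new)
    communities = {"originators": (10, 0.01), "wider_field": (1000, 0.10)}
    informed = {"originators": 2, "wider_field": 0}
    crossed = False
    for day in range(days):
        for name, (size, p_tell) in communities.items():
            new = sum(1 for _ in range(informed[name]) if random.random() < p_tell)
            informed[name] = min(size, informed[name] + new)
        # Small daily chance that an informed originator leaks across the boundary.
        if not crossed and any(random.random() < 0.002 for _ in range(informed["originators"])):
            informed["wider_field"] = 1
            crossed = True
            print(f"day {day}: the secret crosses into the wider field")
    return informed

print(simulate())
```

Inside the small community the count barely moves; once a single copy lands in the larger, chattier community, saturation follows quickly.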

So, now, MIRI's approach...

The problem with what MIRI seems to be doing is that it disproportionately slows the movement of information within their own community and among their allies. In most cases, they will probably hurt themselves more than they hurt their "adversaries".

Ideas will still spread among the "good guys", but unreliably, slowly, through an unpredictable rumor mill, with much negotiation and everybody worrying at every turn about what to tell everybody else [3]. That keeps problems from getting solved. It can't be fixed by telling the people who "need to know", because MIRI (or whoever) won't know who those people are, especially-but-not-only if they're also being secretive.

Meanwhile, MIRI can't rely on keeping absolute secrets from anybody for any meaningful amount of time. And they'll probably have a relatively small effect on institutions that could actually do dangerous development. Assuming it's actually interesting, once one of MIRI's secrets gets to somebody who happens to be part of some "adversary" institution, it will be propagated throughout that institution, possibly very quickly. It may even get formally announced in the internal newsletter. It even has a chance of moving on from there into that first institution's own institutional adversaries, because they spy on each other.

But the "adversaries" are still relatively good at secrecy, especially from non-peers, so any follow-on ideas they produce will be slower to propagate back out into the public where MIRI et al can benefit from them.

The advantage the AI risk and X-risk communities have is, if you will, flexibility: they can get their heads around new ideas relatively quickly, adapt, act on implications, build one idea on another, and change their course relatively rapidly. The corresponding, closely related disadvantage is weakness in coordinating work on a large scale toward specific, agreed-upon goals (like say big scary AI development projects).

Worrying too much about secrecy throws away the advantage, but doesn't cure the disadvantage. Curing the disadvantage requires a culture and a set of material resources that I don't believe MIRI and friends can ever develop... and that would probably torpedo their effectiveness if they did develop them.

By their nature, they are going to be the people who are arguing against some development program that everybody else is for. Maybe against programs that have already got a lot of investment behind them before some problem becomes clear. That makes them intrinsically less acceptable as "team players". And they can't easily focus on doing a single project; they have to worry about any possible way of doing it wrong. The structures that are good at building dangerous projects aren't necessarily the same as the structures that are good at stopping them.

If the AI safety community loses its agility advantage, it's not gonna have much left.

MIRI will probably also lose some donors and collaborators, and have more trouble recruiting new ones as time goes on. People will forget they exist because they're not talking, and there's a certain reluctance to give people money or attention in exchange for "pigs in pokes"... or even to spend the effort to engage and find out what's in the poke.

A couple of other notes:

Sometimes people talk about spreading defensive ideas without spreading the corresponding offensive ideas. In AI, that comes out as wanting to talk about safety measures without saying anything about how to increase capability.

In computer security, it comes out as cryptic announcements to "protect this port from this type of traffic until you apply this patch"... and it almost never works for long. The mere fact that you're talking about some specific subject is enough to get people interested and make them figure out the offensive side. It can work for a couple of weeks for a security bug announcement, but beyond that it will almost always just backfire by drawing attention. And it's very rare to be able to improve a defense without understanding the actual threat.

Edited the next day in an attempt to fix the footnotes... paragraphs after the first in each footnote were being left in the main flow.


  1. As for keeping secrets from any major government...

    First, I still prefer to talk about the Chinese government. The US government seems less likely to be a player here. Probably the most important reason is that most parts of the US government apparatus see things like AI development as a job for "industry", which they tend to believe should be a very clearly separate sphere from "government". That's kind of different from the Chinese attitude, and it matters. Another reason is that the US government tends to have certain legal constraints and certain scruples that limit their effectiveness in penetrating secrecy.

    I threw the US in as a reminder that China is far from the only issue, and I chose them because they used to be more interesting back during the Cold War, and perhaps could be again if they got worried enough about "national security".

    But if any government, including the US, decides that MIRI has a lot of important "national security" information, and decides to look hard at them, then, yes, MIRI will largely fail to keep secrets. They may not fail completely. They may be able to keep some things off the radar, for a while. But that's less likely for the most important things, and it will get harder the more people they convince that they may have information that's worth looking at. Which they need to do.

    They'll probably even have information leaking into institutions that aren't actively spying on them, and aren't governments, either.

    But all that just leaves them where they started anyway. If there were no cost to it, it wouldn't be a problem.

  2. You can also get independent discoveries creating new, unpredictable starting points for diffusion. Often independent discoveries get easier as time goes on and the general "background" information improves. If you thought of something, even something really new, that can be a signal that conditions are making it easier for the next person to think of the same thing. I've seen security bugs with many independent discoveries.

    Not to mention pathologies like one community thinking something is a big secret, and then seeing it break out from some other, sometimes much larger community that has treated it as common knowledge for ages.

  3. If you ever get to the point where mostly-unaffiliated individuals are having to make complicated decisions about what should be shared, or having to think hard about what they have and have not committed themselves not to share, you are 95 percent of the way to fully hosed.

    That sort of thing kind of works for industrial NDAs, but the reason it works is that, regardless of what people have convinced themselves to believe, most industrial "secret sauce" is pretty boring, and the rest tends to be so specific and detailed that it's obviously covered by any NDA. AND you usually only care about relatively few competitors, most of whose employees don't get paid enough to get sued. That's very different from some really inobvious world-shaking insight that makes the difference between low-power "safe" AI and high-power "unsafe" AI.

Comment by jbash on [Review] On the Chatham House Rule (Ben Pace, Dec 2019) · 2019-12-10T14:01:51.113Z · score: 19 (10 votes) · LW · GW

I guess this is sort of an agreement with the post... but I don't think the post goes far enough.

Whoever "you guys" are, all you'll do by adopting a lot of secrecy is slow yourselves down radically, while making sure that people who are better than you are at secrecy, who are better than you are at penetrating secrecy, who have more resources than you do, and who are better at coordinated action than you are, will know nearly everything you do, and will also know many things that you don't know.

They will "scoop" you at every important point. And you have approximately zero chance of ever catching up with them on any of their advantages.

The best case long term outcome of an emphasis on keeping dangerous ideas secret would be that particular elements within the Chinese government (or maybe the US government, not that the corresponding elements would necessarily be much better) would get it right when they consolidated their current worldview's permanent, unchallengeable control over all human affairs. That control could very well include making it impossible for anyone to even want to change the values being enforced. The sorts of people most likely to be ahead throughout any race, and most likely to win if there's a hard "end", would be completely comfortable with re-educating you to cure your disharmonious counter-revolutionary attitudes. If they couldn't do that, they'd definitely arrange things so that you couldn't ever communicate those attitudes or coordinate around them.

The worst case outcome is that somebody outright destroys the world in a way you might have been able to talk them out of.

Secrecy destroys your influence over people who might otherwise take warnings from you. Nobody is going to change any actions without a clear and detailed explanation of the reasons. And you can't necessarily know who needs to be given such an explanation. In fact, people you might consider members of "your community" could end up making nasty mistakes because they don't know something you do.

I've spent a lot of my career on the sorts of things where people try to keep secrets, and my overall impression of the AI risk and X-risk communities (including Nick Bostrom) is that they have a profoundly unrealistic, sometimes outright romanticized, view of what secrecy is and what it can do for them (and an unduly rosy view of their prospects for unanimous action in general).

Comment by jbash on Who owns OpenAI's new language model? · 2019-02-15T13:08:20.578Z · score: 3 (2 votes) · LW · GW

The US has criminal copyright law. I thought it was recent, but Wikipedia says it's actually been around since 1897.

The probability of the government trying to USE it in this kind of case is epsilon over ten, though. And as you say, they'd probably lose if they did, because the neural network isn't really derivative of the Web pages, and even if it is it's probably fair use.

Comment by jbash on Who owns OpenAI's new language model? · 2019-02-15T13:05:09.424Z · score: 12 (4 votes) · LW · GW

So, there are several things that might be "property" here.

The method is probably patentable. The trained network is definitely NOT copyrightable by the clear intent of the copyright law, because it's obvious to any honest interpreter that it's nothing like a "creative work". However, based on their track record, if you took it to the Federal Circuit, they'd probably be willing to pervert the meaning of "creative work" to let somebody enforce a copyright on it based on curation of the training data or something equally specious. They may already have done that in some analogous case.

Property rights in patents or copyrights are separate from property rights in actual devices, copies of networks, or whatever. I can own a book without owning the copyright in the book. And if you own the copyright, that does NOT allow you to demand that I give you my copy of the book, even if you don't have a copy yourself.

The nuclear bomb case would involve a "patent secrecy order"... a power which was in fact created exactly for nuclear bombs. I don't think there's such a thing as a "copyright secrecy order".

They could also probably forcibly buy any patent (yes, under eminent domain). Eminent domain is NOT a "requisition", because eminent domain in the US requires compensation as a constitutional matter. I also don't know if they have any processes in place for exercising eminent domain in the case under discussion, and I doubt they do. Some particular agency has to be authorized and funded to exercise a power like that in any given case.

Even if the government forcibly bought a patent or copyright, that by itself would not entitle the government to be given a copy of the subject matter. I don't know if bits, as opposed to the media they were on, would even be "property".

... but if you REALLY want to go there, well, the US Government, taken as a whole, could obviously pass a law giving itself the power to force OpenAI to hand over copies, delete its own copies, relinquish any patent or copyright rights (possibly with a requirement for money compensation for those last two), stay out of Ireland, and whatever else.

What I'm really puzzled by is the extreme counterfactuality of the question. It just doesn't seem to have any connection at all with how people or institutions actually behave. A neural network that can sound like somebody isn't a nuclear bomb, and the political dynamics around it are completely different.

The upper echelons of the US Government won't notice it at all.

If some researcher working for the US Government (or any government) wants a copy of the network for some reason, that person will just send a polite email request to OpenAI, and OpenAI will probably hand it over without worrying about it. If OpenAI doesn't, the question will probably die there. From a practical point of view, that researcher won't be able to make it enough of a priority for the government to even stir itself to figure out which powers might apply.

If some agency of the government suggests to OpenAI that it never release the network to anybody, and gives any kind of meaningful reason, then OpenAI will probably take that into account and comply. That's extremely unlikely, though.

Some government agency trying to actually force OpenAI not to release is farfetched enough not to be worth worrying about, but it would probably come down to timing; OpenAI might be able to release before the government could create any binding order preventing it.

Comment by jbash on Who owns OpenAI's new language model? · 2019-02-15T01:43:06.555Z · score: 11 (4 votes) · LW · GW

The "requisition" question isn't well formed. The US Government has various powers to demand various specific information from various specific people via various specific processes in various specific circumstances for various specific purposes, mostly but not all to do with law enforcement. I guess one or more of those could somehow apply, although the only one I can think of is a general Congressional fact-finding power.

The US Government has no general power to "requisition" anything from anybody. That's just not a thing at all. "Requisition" doesn't mean anything here.

However, if the US Government asked for it, I suspect OpenAI would be happy to hand it over voluntarily. They'd probably also give it to anybody else they thought of as "reputable". What would make you think that they'd want to resist such a request to begin with?

Comment by jbash on The "Post-Singularity Social Contract" and Bostrom's "Vulnerable World Hypothesis" · 2018-11-26T17:09:23.093Z · score: 4 (4 votes) · LW · GW

I don't believe that present-day synthetic biology is anywhere close to being able to create "total destruction" or "almost certain annihilation"... and in fact it may never get there without more-than-human AI.

If you made super-nasty smallpox and spread it all over the place, it would suck, for sure, but it wouldn't kill everybody and it wouldn't destroy "technical civilization", either. Human institutions have survived that sort of thing. The human species has survived much worse. Humans have recovered from really serious population bottlenecks.

Even if it were easy to create any genome you wanted and put it into a functioning organism, nobody knows how to design it. Biology is monstrously complicated. It's not even clear that a human can hold enough of that complexity in mind to ever design a weapon of total destruction. Such a weapon might not even be possible; there are always going to be oddball cases where it doesn't work.

For that matter, you're not even going to be creating super smallpox in your garage, even if you get the synthesis tools. An expert could maybe identify some changes that might make a pathogen worse, but they'd have to test it to be sure. On human subjects. Many of them. Which is conspicuous and expensive and beyond the reach of the garage operator.

I actually can't think of anything already built or specifically projected that you could use to reliably kill everybody or even destroy civilization... except maybe for the AI. Nanotech without AI wouldn't do it. And even the AI involves a lot of unknowns.

Comment by jbash on The Vulnerable World Hypothesis (by Bostrom) · 2018-11-13T15:33:19.131Z · score: 5 (6 votes) · LW · GW

I'm pretty sure that the semi-anarchic default condition is a stable equilibrium. As soon as any power structure started to coalesce, everybody who wasn't a part of it would feel threatened by it and attack it. Once having neutralized the threat, any coalitions that had formed against it would themselves self-destruct in internal mistrust. If it's even possible to leave an equilibrium like that, you definitely can't do it slowly.

On the other hand, the post-semi-anarchic regime is probably fairly unstable... anybody who gets out from under it a little bit can use that to get out from under it more. And many actors have incentives to do so. Maybe you could stay in it, but only if you spent a lot of its enforcement power on the meta-problem of keeping it going.

My views on this may be colored by the fact that Bostrom's vision for the post-semi-anarchic condition in itself sounds like a catastrophic outcome to me, not least because it seems obvious to me that it would immediately be used way, way beyond any kind of catastrophic risk management, to absolutely enforce and entrench any and every social norm that could get 51 percent support, and to absolutely suppress all dissent. YMMV on that part, but anyway I don't think my view of whether it's possible is that strongly determined by my view that it's undesirable.

Comment by jbash on The Vulnerable World Hypothesis (by Bostrom) · 2018-11-07T16:12:32.699Z · score: 6 (5 votes) · LW · GW

It seems to me that this is the crux:

A key concern in the present context is whether the consequences of civilization continuing in the current semi-anarchic default condition are catastrophic enough to outweigh reasonable objections to the drastic developments that would be required to exit this condition. [Emphasis in original]

That only matters if you're in a position to enact the "drastic developments" (and to do so without incurring some equally bad catastrophe in the process). If you're not in a position to make something happen, then it doesn't matter whether it's the right thing to do or not.

Where's there any sign that any person or group has or ever will have the slightest chance of being able to cause the world to exit the "semi-anarchic default condition", or the slightest idea of how to go about doing so? I've never seen any. So what's the point in talking about it?

Comment by jbash on Implementations of immortality · 2018-11-01T22:04:23.810Z · score: -4 (4 votes) · LW · GW

How, other than by outright mind control, would you expect to call a "mythos" into being?

You can't make other people like what you like. You can't remake the pattern of everybody else's life for your personal comfort, or for the comfort of whatever minority happens to think "enough like you do". If you try, you will engender violent resistance more or less in direct proportion to your actual chance of succeeding.

There's not going to be just one "other side", either. You can't negotiate with anybody and come up with a compromise proposal. There are 7.6 billion views of what utopia is, and the number is rising.

So how about if we stick with a cultural norm against trying to force them all into a mold? Total warfare isn't very longevity-promoting.

Comment by jbash on Policy Approval · 2018-07-01T20:25:11.346Z · score: 1 (1 votes) · LW · GW

"Ignoring issues of irrationality or bounded rationality, what an agent wants out of a helper agent is that the helper agent does preferred things."

I don't want a "helper agent" to do what I think I'd prefer it to do. I mean, I REALLY don't want that or anything like that.

If I wanted that, I could just set it up to follow orders to the best of its understanding, and then order it around. The whole point is to make use of the fact that it's smarter than I am and can achieve outcomes I can't foresee in ways I can't think up.

What I intuitively want it to do is what makes me happiest with the state of the world after it's done it. That particular formulation may get hairy with cases where its actions alter my preferences, but just abandoning every possible improvement in favor of my pre-existing guesses about desirable actions isn't a satisfactory answer.

Comment by jbash on The end of public transportation. The future of public transportation. · 2018-02-11T01:08:54.483Z · score: 2 (1 votes) · LW · GW
The only remaining case for awful advertising that I can see just collapses to a case for arbitrary extortion... which is just... okay you don't believe there will be open code.

So, if the advertising is there by default, that means that the advertiser is already "extorting" my attention, and has already shown a willingness to extort money from me to make the advertising go away.

More correctly, the advertiser already seems to see my attention as their property, rather than mine. If that's how they view it, the price of selling it back to me isn't going to be determined by what they make off the ads. It's going to be determined by how much they think I will pay to be left alone, at least unless I have some other leverage. If you want to call that extortion, then, fine, I believe there'll be extortion. I don't believe they'll think of themselves as engaging in extortion, though.

How would you expect to "fight it"?

Comment by jbash on The end of public transportation. The future of public transportation. · 2018-02-11T01:02:51.367Z · score: 2 (1 votes) · LW · GW
All that's needed is a good, low-friction payment platform. We don't have one, right now, so we still see ad-funding everywhere. If BAT takes off, it'll end.

I don't know what BAT is, but I do know that we all wanted micropayments instead of an advertising-supported Internet in 1990.

Even if you have a good micropayment protocol it can be hard to get everybody enrolled. Remember, you have to enroll everybody you'd see on a city bus. That means the 12 year old kid, the homeless guy, the 85-year-old who already has trouble every time they change the coin till, and even the crazy drunk. They all have to be able to figure it out, they all have to be able to get an account, they all have to be able to fund stuff, etc.

Comment by jbash on The end of public transportation. The future of public transportation. · 2018-02-11T00:58:31.349Z · score: 3 (2 votes) · LW · GW
I can't work with this resignation to the code not being public. That would be an awful awful outcome. The cars wouldn't be able to coordinate, they'd just end up having to drive mostly like humans.

Sure they could coordinate. They'd use the ISO 27B-6 Car Coordination Protocol, which would be negotiated in a mind bogglingly boring and bureaucratic process by the representatives of the various car companies. Those companies would have big bakeoffs where they tested against each other's implementations. They would probably even hire auditors to check one another's implementations.

You could buy a copy of 27B-6 for 250 dollars or so.

The IP network we're talking over uses public protocols. Some specs are free, but you have to pay for others; you couldn't build a smart phone (legally, and including building the chips that go into it) without spending thousands of dollars for copies of standards. And a ton of the products involved have private code.

Comment by jbash on The end of public transportation. The future of public transportation. · 2018-02-11T00:53:18.123Z · score: 6 (2 votes) · LW · GW
Are people biologically capable of sitting in a theatre in front of an image of an oncoming train?

The whole reason you'd put an image of an oncoming train in a movie would be that it does stress the audience. A little stress can be fun.

I'm not so sure that people would be very comfortable with cows if cows were in the habit of running nearly silently out of nowhere and passing them 2 meters away at 40 km/h. I think after one cow did that in front of me and another one did it behind me a second or two later, while a stream of cows whooshed by on the cross street, I'd start to get pretty nervous about cows. I guess maybe I'd get used to it if I'd had years of experience to show me that cows unerringly avoided me. But I wouldn't bet too much on it.

But that's neither here nor there; I don't think the vehicles could reliably miss the pedestrians to begin with, and you seem to agree.

Comment by jbash on The end of public transportation. The future of public transportation. · 2018-02-11T00:45:59.464Z · score: 7 (3 votes) · LW · GW
Since the record tells us how long they were with the car, we know it wasn't them who applied the shit

Right. That's why I only said the shit-smearing would happen if the record-making were somehow avoided. Assuming you can actually keep track of who's using it, you can deter vandalism most of the time.

You might have trouble with out-of-towners or people with nothing to lose, though. And let's not make it too simple; it's a BIG DEAL to ban somebody from the only available form of transportation... that's something you wouldn't want to see done without due process.

Comment by jbash on The end of public transportation. The future of public transportation. · 2018-02-10T15:50:41.559Z · score: 4 (2 votes) · LW · GW

I see. These vehicles can "weave through each other". Great. Can they weave through pedestrians? If they can, are pedestrians biologically capable of avoiding massive stress from being "woven through" by large fast-moving objects?

If it did work, or for any other fleet system, here are some further predictions:

The code won't be public. People are routinely thrown in prison right now based on output from non-public code.

It will be basically impossible to go anywhere without creating a record. These records will be kept basically forever and will be accessible to more or less any powerful institution... but not to YOU, Citizen. This will most likely be used to profile you for various purposes, many of which you probably wouldn't see as in your interests. If this is somehow avoided, then the interiors of the shared vehicles will literally be smeared with shit.

Those interiors will be utilitarian (perhaps able to survive being hosed off...) and not especially comfortable.

While you're whizzing along on low friction bearings, advertising will be blaring at you. If it's possible to shut it up at all, it will cost you.

Comment by jbash on Claim: Scenario planning is preferable to quantitative forecasting for understanding and coping with AI progress · 2014-07-25T12:26:41.308Z · score: 5 (1 votes) · LW · GW

That seems like a false choice. Why wouldn't you do both?

I think you're more convincing in objecting to the quantitative approach than in defending the scenario approach. Maybe neither one is any good. So another alternative would be to do neither, and avoid the risk of convincing yourself you understand what's going to happen when you really don't.

You can still assume the things you're nearly certain of, if that's useful.

... and how do you plan to use this understanding, assuming you get any? Is it actually going to affect your actions?

Comment by jbash on What attracts smart and curious young people to physics? Should this be encouraged? · 2014-03-13T18:00:12.017Z · score: 14 (16 votes) · LW · GW

I personally find physics attractive because it's as close as you can get to a fundamental, first-principles understanding of how the Universe works. That feels like a terminal goal. Maybe it's secretly a social prestige goal, but it doesn't feel that way.

It seems kind of dark-artsish to try to change people's terminal goals for your own reasons. I'm not saying that's never right, just that it seems like something to perk up and be suspicious of.

And you haven't even said what your reasons actually are. Do you want better allocation of human resources for social goals, or something like that? Then who gets to pick the social goals? Why should anybody have to "justify the attraction" to anybody else?

Or is it individual? Do you think that studying physics will make people less happy than studying economics because they'll get more chances to apply economics?

... and what's this "real world" you're talking about? I get a nervous feeling like there's something coiled up inside that concept waiting to strike.

Comment by jbash on Tell Culture · 2014-01-19T02:38:17.945Z · score: 16 (16 votes) · LW · GW

This may be getting into private-message territory. I haven't paid enough attention to the norms to be sure. But it's easy to not read these...

your comment makes me think that avoiding ambiguity and not appropriating is not enough and perhaps even using it among ourselves is to be avoided, e.g. for the benefit of those 'looking in from the outside' who might be preemptively alienated.

I am, perhaps, "looking in from the outside". I have a lot of history and context with the ideas here, and with the canonical texts, and even with a few of the people, but I'm an extreme "non-joiner". In fact, I tend toward suspicion and distaste for the whole idea of investing my identity in a community, especially one with a label and a relatively clear boundary. I have only a partial model of where that attitude comes from, but I do know that I seem to retain an "outsider" reaction for a lot longer than other people might.

I may be hypersensitive. But I think it's more likely that I'm a not-horrible model of how a completely naive outsider might react to some of these things, even though I can express it in a Less-Wrongish vocabulary.

And of course these posts are indeed visible to people who are only vaguely exploring, or only thinking about "joining", for whatever value of "joining". This is still outreach, right?

Perhaps more accurate would've been for me to say that your original argument could have been applied to the LW-rationality approach generally, or to the bias-correcting approach based on the heuristics and biases literature.

I agree that there are a ton of things that people do all the time that don't seem very useful. If I'm not going to accept all of them, I'd better have a good reason to think this particular social-interaction issue is different.

My reason is that I don't think that epistemic rationality, or even extreme instrumental rationality, has been a critical survival skill for people until very recently (and maybe it still isn't). It's useful, but it doesn't overwhelm everything else, and indeed it seems very likely that the heuristics and biases themselves have clear advantages in many historical contexts.

On the other hand, social cooperation, and especially avoiding constant overt conflict with members of one's own society, are pretty crucial if you want to survive as a human. So I tend to expect institutions and adaptations in that area to be pretty fine-tuned and effective. I don't like a lot of the ways people behave socially, but they seem to work.

Not that strong, I know, but then I haven't seen anything that strong on any side of this.

I reserve some fair probability that there were clear differences in type between the obnoxious attempts and the successful ones, such that your experiences would not be very strong reference class evidence for e.g. Telling.

I don't think I can provide detailed descriptions, but it is definitely true that there are meaningful differences, even major differences, between most of the experiences I've had and the example approach.

The thing is that, if presented with the example approach in real life, I don't think I'd notice those differences. I think I would react heuristically to the unexpected disclosure of internal state, and provisionally put the person into the "annoying/broken" bucket before I got that far.

Then, if I weren't being very, very careful (which I can't necessarily be in all circumstances), the promise that "everything will be OK if you say no" wouldn't be believed, and might even be interpreted as confirmation that the person was going into passive-aggressive mode, and was indeed annoying/broken.

And in the particular example given, I'm being asked to have this presumptively-broken person stay in my house overnight, which is going to make me more wary.

If I were in perfect form and not distracted, I might catch other cues and escape the heuristic, but I think it would be my likely reaction most of the time.

YMMV if, for example, I have prior information that the person is an honest Teller, rather than somebody who incorrectly believes themselves to be a Teller or is just outright dishonest.

I don't have as much discipline in not applying heuristics, or in turning them off at will, as many people here. On the other hand, I have more such discipline than a lot of people... probably including some people here, and definitely including people I suspect one might wish to avoid putting off of the community, should they come exploring.

I also retain the possibility that your reaction to the approaches you disliked was overblown, though my credence for that is far lower now than it was, based on your comment and your claim to be less fazed than average by nonconventional approaches.

I could also be wrong about being less fazed. I know that many nonconventional approaches don't bother me even though they seem to bother others. That doesn't mean that I'm not unknowingly hypersensitive to these nonconventional approaches. I haven't calibrated myself systematically or overtly on them, and they do tickle personal boundary issues where I'm especially likely to be more sensitive than normal.

Have you also accounted for the potential for the negative communication approaches to stick in your mind more than ones you accepted or adopted?

Sure. That's one reason I believe I'd react negatively to the example approach. I haven't been talking about the right way to react. I've been predicting how I likely would react (and saying that I think others might react the same way).

(1) What's your general take on the picture painted by http://slatestarcodex.com/2013/05/24/going-from-california-with-an-aching-in-my-heart/

It rings true to me in a lot of ways. I usually say that I miss the Bay Area's "geekosphere". I miss what is cheesily called the "sense of possibility". I miss the easy availability of tools and resources. I miss the critical mass of people who really want to do cool, new things, whether they want to change the world, or make something beautiful, or even just make a bunch of money they're not sure how to spend. I miss the number of people who really are willing to look hard at how things work, and then change them... in the large if need be. Now that I have a kid, I really miss the wide availability of approaches to education that don't feel so much like "shove 'em in the box and make 'em like it".

On the other hand, that description sounds a little starry-eyed. I've had a bit too much contact with the "hippies" to think they're really always about peace and love, too much contact with the programmers to believe they're nearly as smart as they think they are, and too much contact with the entrepreneurs for "competent" to be the first description that comes to mind. I've also seen some people use "abandoning hangups", or "social efficiency", or whatever, as an excuse to treat others callously. You get a lot of that in the poly community, for example.

I might have missed those issues, or ignored them, 20 or 30 years ago. I might have said things about "wacky leftism" back then, too, things I wouldn't say nearly so strongly now that I know a bit more about how all the parts fit together. It's not that the leftism isn't wacky, it's that the capitalism is wacky, too.

I have not had direct contact with the "cooked" LW-rationalist community, so I can't speak to that. I was in only-somewhat-related circles, I was never very, very social, and I left the area almost 7 years ago after largely "disappearing" from those circles a year or two before that. So I can't confirm or deny what it says about that particular community.

(2) Why do you no longer spend much time in some of the communities you used to? And if you moved away from California, why?

The usual stuff: life intervened. I got busy with other stuff. I went back to work... in the Bay Area or in tech, that can be pretty consuming, and it turns out that it's harder to take the "changing the world" jobs when you're supporting other people. I got divorced. I got depressed. I had personal and romantic ties in Montreal, so I moved... and then I built a life here, with its own rewards and its own obligations and its own web of connections to people who also have reasons to be here. Moving back would be hard now.

But I do still miss it a lot.

Comment by jbash on Tell Culture · 2014-01-18T22:59:15.503Z · score: 9 (5 votes) · LW · GW

I don't have sociological statistics on that, and will have to retract "almost every culture" as a statement of fact.

My general impression is that the US and Western Europe are about as "Ask" as it gets, and in a lot of other cultures you're pretty unlikely to find any "Ask families" at all. I do know that "Offer" exists.

Comment by jbash on Tell Culture · 2014-01-18T22:30:16.891Z · score: 28 (26 votes) · LW · GW

So, as long as we're Telling, I'm going to talk about my own internal state. I think at least some aspects of my reactions may be shared by other people, including people whom readers of this thread may be interested in influencing or interacting with. Anybody who's not interested in this should definitely stop reading. I promise I won't be offended. :-)

Although I still think I had a point, if I look back at why I really wrote my response, I think that point was mostly "cover" for a less acceptable motivation. I think I really wrote it mostly out of irritation with the way the word "rationalist" was used in the original posting. And I find myself feeling the same way in response to some of your reply.

My first reaction is to see it as an ugly form of appropriation to take the word "rationalist" to mean "person identified with the Less Wrong community or associated communities, especially if said member uses jargon A, B, and C, and subscribes to only-tangentially-rational norms X, Y, and Z". Especially when it's coupled with signals of group superiority like "don't try this with Muggles" (used to be "mundanes"). It provokes an immediate "screw you" reaction.

I expressed my irritation only as hopefully-veiled but still obnoxious snark (for which I am sorry), but it was there.

The Bay Area, and presumably New York and the world, contain people who are committed to rationality by almost any definition, yet who've never read the Sequences, probably wouldn't want to, and probably have no great interest in the community I think you mean. Some of them have pretty high profiles, too. Making a land grab for the word "rationalist" probably doesn't make most of those people want into the club, and neither does name calling. Both seem more likely to make them think the club is composed of jerks.

On another, but perhaps related, front...

By my last paragraph's description of my reaction, I didn't mean to write off the "Tell" suggestion completely as a suggestion about what social norms should be, whether in a subculture or in The Wider Culture(TM). I'm pretty skeptical about the idea, but I wasn't trying to be completely dismissive there.

In that part, I was, perhaps amid more snark, trying to warn about a possibly inobvious reaction. What I was trying to describe was how I, as an individual, actually envision myself reacting to the stated tactic for introducing the "Tell" approach.

I used to spend a fair amount of time, in the Bay Area and elsewhere, with communities that overlap with, and/or could be seen as antecedents of, the Less Wrong/CFAR/MIRI "rationalists". In those communities, I met a lot of people who had unconventional approaches to interacting with others. I often found some of those people annoying and aversive. That's true even though I'm no grandmaster of "normal" social approaches myself, and even though I suspect that I am far less sensitive to deviations from them than the average bear.

What I would truly expect to go through my mind would be something like "Oh, no, yet another one of those people who think removing all filters will improve society, and want me to be part of the grand experiment"... or possibly "Oh, no, yet another one of those people who don't realize that filters are expected at all", or, worse "Oh, no, one of those people who think they can use some kind of philosophical gobbledygook to justify inconsiderate passive-aggressive pushiness". Because I've met all of those more than once.

That would cause discomfort, and in the future I'd tend to avoid the source of that discomfort. I was trying to point out that the strategy might appear to work, but still backfire, because the immediate feedback from the interlocutor wouldn't necessarily be honest.

Maybe I'd get over it, but maybe I wouldn't, too.

For the record on your first paragraph, I'm really, really skeptical of Crocker's rules working over the long term, but I admit I've never tried them. I don't think the rest of the things you mention are similar.

I don't know of any common social norm against, say, tabooing words, or asking about anticipated experiences. I think you can use those sorts of methods with more or less anybody. You may run into resistance or anger if somebody thinks you're trying to pull a nasty rhetorical trick, but you can defuse that if you take the time to cross the inferential distance gently, and start on the project before you're in the middle of a heated conflict where the other person will reject absolutely anything you suggest.

For that matter, you can often just quietly stop using a word without saying anything at all about "tabooing" it.

Likewise, I don't think most people mind "I'm confused"... unless it's obviously dishonest and meant to provide plausibly deniable cover to some following snark.

On the other hand, I do see lots of social norms around what tactics are and are not OK for getting somebody else to do something for you, and also around how much of your internal state you share at what stages of intimacy. So I think this is different in kind.

And of course I may also have completely misread your comment...

[On edit, cleaned up a couple of proofreading errors]

Comment by jbash on Tell Culture · 2014-01-18T14:16:21.056Z · score: 28 (34 votes) · LW · GW

Ya know, after thousands of years of trying it out in all kinds of environments, it seems as though almost every culture on Earth settles on "Guess", with maybe a touch of "Ask" in the more overbearing ones. A common modification to "Guess" is "Offer", where the mere mention of a possible opportunity to help out is treated as creating almost a positive obligation to notice the need and make a spontaneous offer.

From where I sit, that's pretty strong evidence that "Guess" or maybe "Offer" is more suited to collective human nature. There's a pretty heavy burden of proof on any "rationalist" who wants to change it.

It's also not so obvious that you can effectively change conventions like these by just starting in and asking others to change. If you tried your "developing trust" tactic with me, I'd probably play along to avoid conflict on one occasion, and avoid YOU after that.

Comment by jbash on 2013 Less Wrong Census/Survey · 2013-11-22T15:54:24.555Z · score: 13 (13 votes) · LW · GW

You're right; my error. Sorry.

Comment by jbash on 2013 Less Wrong Census/Survey · 2013-11-22T14:36:17.296Z · score: 8 (14 votes) · LW · GW

Not taken, and will not be taken as long as it demands that I log in with Google (or Facebook, or anything else other than maybe a local Less Wrong account).

Comment by jbash on New report: Intelligence Explosion Microeconomics · 2013-04-29T20:58:22.814Z · score: 8 (16 votes) · LW · GW

TL;DR.

The first four or five paragraphs were just bloviation, and I stopped there.

I know you think you can get away with it in "popular education", but if you want to be taken seriously in technical discourse, then you need to rein in the pontification.

Comment by jbash on Ritual 2012: A Moment of Darkness · 2012-12-28T14:51:39.430Z · score: 2 (2 votes) · LW · GW

Does anybody in your group have children? It doesn't seem to me that what you have in your ritual book would serve them very well. Even ignoring any possible desire to "recruit" the children themselves, that means that adults who have kids will have an incentive to leave the community.

Maybe it's just that I personally was raised with zero attendance at anything remotely that structured, but it's hard for me to imagine kids sitting through all those highly abstract stories, many of which rely on lots of background concepts, and being anything but bored stiff (and probably annoyed). Am I wrong?

Even if they could sit through it happily, there's the question of whether having them chant things they don't understand respects their agency or promotes their own growth toward reasoned examination of the world and their beliefs about it. Especially when, as somebody else has mentioned, the ritual includes stuff that's not just "rationalism". Could there be more to help them understand how to get to the concepts, so that they could have a reasonable claim not to just be repeating "scripture"?

Or am I just worrying about something unreal?

Comment by jbash on The Useful Idea of Truth · 2012-10-02T20:42:35.978Z · score: 8 (10 votes) · LW · GW

Actually, "relativist" isn't a lot better, because it's still pretty clear who's meant, and it's a very charged term in some political discussions.

I think it's a bad rhetorical strategy to mock the cognitive style of a particular academic discipline, or of a particular school within a discipline, even if you know all about that discipline. That's not because you'll convert people who are steeped in the way of thinking you're trying to counter, but because you can end up pushing the "undecided" to their side.

Let's say we have a bright young student who is, to oversimplify, on the cusp of going down either the path of Good ("parsimony counts", "there's an objective way to determine what hypothesis is simpler", "it looks like there's an exterior, shared reality", "we can improve our maps"...) or the path of Evil ("all concepts start out equal", "we can make arbitrary maps", "truth is determined by politics" ...). Well, that bright young student isn't a perfectly rational being. If the advocates for Good look like they're being jerks and mocking the advocates for Evil, that may be enough to push that person down the path of Evil.

Wulky Wilkinson is the mind killer. Or so it seems to me.