Comments

Comment by jbash on 'Good enough' way to clean an O2 Curve respirator? · 2021-09-16T16:12:27.524Z · LW · GW

It's probably silicone with maybe a little pigment in it. If you just want to clean it, you can clean it with anything that won't degrade the silicone. And not very many things will degrade silicone. You will know if you've managed to damage it because it will stiffen, crack, weaken, glaze over, or otherwise show signs of not being OK. All you care about is that it's airtight.

Regular soap. Dish soap. White vinegar.

Isopropyl alcohol might swell it a bit. It shouldn't be a problem if you don't soak it in the stuff, don't leave it on there for ages, and give it a few minutes to evaporate before you use it or stick it in a sealed container. It's pretty effective as a disinfectant.

The bleach might do some damage, although if you just did a quick wipe with a low concentration of it, it would probably take a lot of cleanings to noticeably degrade the seal or reduce the flexibility.

Dilute hydrogen peroxide is a slow and selective disinfectant, and not much of a cleaner at all except for things it chemically reacts with. It could theoretically attack the silicone, although I bet it would take a long time at 0.5 percent, and what you quote about their tests seems to match that.

Wipe it, rather than washing it (and wipe it one or two more times with water if you need to "rinse" off whatever you're cleaning with). Make sure it's dry before you put it away or put it back on. Don't get water or cleaners on the filters. The straps might be less tolerant of some cleaners than the body.

If you wanted to thoroughly disinfect it or sterilize it, you would have more things to worry about. Notably getting into the cracks and crevices. If I had to sterilize something like that at home, I'd probably wipe it down with IPA, try to pick any crud out of any crevices I could get at, and then pressure cook it on a rack. But I wouldn't do it very often. And I don't think there'd really be a reason to do it at all.

HOWEVER, filters, especially high-efficiency non-woven filters that let you breathe easily, have limited lifetimes. Without valves, you're exhaling through them and getting them damp, which will degrade them faster. The O2 Web page you linked to says to replace the filters every couple of weeks "for air pollution" (which I suspect means with working valves), and daily "in clinical settings".

So if you can't get new filters, cleaning the respirator is probably not your big problem.

Comment by jbash on Kids Learn by Copying · 2021-09-06T12:28:57.790Z · LW · GW

A person who is making a significant contribution to your business deserves to be paid appropriately for that contribution, even if she is too naive or unsure of herself to ask for it. ESPECIALLY if you owe her some familial duty.

Comment by jbash on Why the technological singularity by AGI may never happen · 2021-09-03T16:57:55.290Z · LW · GW

Assuming you had a respectable metric for it, I wouldn't expect general intelligence to improve exponentially forever, but that doesn't mean it can't follow something like a logistic curve, with us near the bottom and some omega point at the top. That's still a singularity from where we're sitting. If something is a million times smarter than I am, I'm not sure it matters to me that it's not a billion times smarter.
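
For concreteness, by "logistic curve" I mean the standard S-shaped form (the symbols here are just illustrative labels, not anything from the post):

$$f(t) = \frac{L}{1 + e^{-k(t - t_0)}}$$

where L is the ceiling (the "omega point" in this framing), k the growth rate, and t_0 the midpoint; "us near the bottom" just means t well below t_0.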

Comment by jbash on Obesity Epidemic Explained in 0.9 Subway Cookies · 2021-08-12T17:18:34.713Z · LW · GW

You have not explained why a vast population should mystically choose to eat 0.9 more cookies per day or to replace dancing with Netflix. Yet we see that something equivalent in effect has in fact happened.

And you are assuming a deeply unrealistic model in which each person exercises unlimited long-term control not only over specific behaviors (which are in fact hard to control), but over the total ensemble of all their actions. Not only are you assuming that a person can feasibly cause themselves to eat 0.9 fewer cookies per day for many years, but that, given that they have done so, they will also somehow prevent themselves from switching from dancing to Netflix without even noticing that they have made that switch.

Comment by jbash on The Myth of the Myth of the Lone Genius · 2021-08-02T23:31:21.228Z · LW · GW

I agree that people do often make major discoveries alone. I also agree that "committees" truly could not have made many of those discoveries. But the other thing I think is true is that they still only do it when the supporting ideas become available to them. Not just the observations, but the right ways of thinking and asking questions about those observations. Newton talked about "the shoulders of giants" and all that.

Once the conditions exist, you'll get your genius reasonably soon. There are enough geniuses out there to make things happen when the time is right.

If Einstein hadn't come up with, say, relativity, somebody else probably would have within 10 or 20 years. Maybe even a few people, who indeed might have been doing things more like "working alone and occasionally communicating", than "collaborating closely". On the other hand, I very much doubt that Einstein himself would have arrived at even Special Relativity if he'd been working 50 or 100 years earlier.

Thiel seems to be arguing against that by suggesting that the proof of Fermat's Last Theorem just lay there as a "secret" for 358 years, until Wiles Heroically Challenged The Orthodoxy and refused to accept that it Could Not Be Done. I think that misstates the matter very badly, and that all of the Thiel text is really unconvincing.

At least as I understand the history, Wiles was indeed living in a mathematical community that was pretty discouraged about proving Fermat's Last Theorem... but nonetheless he was using a huge apparatus of number theory that had been built up over those 358-or-whatever years. Wiles didn't prove the theorem using tools that would have been available 350 years before (and nobody believes that Fermat himself ever had a correct proof). The bit Wiles filled in was the proof of (the semistable case of) the Taniyama-Shimura-Weil conjecture. To even state that conjecture, let alone prove it, you have to use a bunch of concepts to which Fermat's era had no access.

So Wiles' proof wasn't simply unnoticed for 350 years until he mystically "discovered a secret". Thiel's presentation reads as sloppy, clueless, or even dishonest, on that matter. It also seems kind of clueless on the true value of what Wiles did. Although I'm sure Wiles was very much motivated by wanting to nail Fermat's Last Theorem, the framework he developed to do that also advanced mathematics in general, and that's more important in the grand scheme of things.

As for Wiles keeping a secret, a 6-year secret is a very different matter from a 358-year secret. The field may have been demoralized enough, or maybe the solution was just truly inobvious enough, to give Wiles 6 years or more... but it wouldn't have taken another 350 years if Wiles hadn't done it. I suspect it wouldn't have taken 50 or even 20.

Also, when Wiles went public in 1993, what he had was wrong (and the theorem had a long history of false proofs at that point). It took Wiles another year to fix the problems other people found in his proof.

As for Mullis, PCR is a laboratory technique, not a sweeping framework. I don't think it puts Mullis in Wiles' league, let alone Einstein's or Newton's. And Mullis really does seem to have just mostly lucked into noticing it. I'm thinking it would more likely have been under 5 years than over 10 before somebody else came up with PCR. And I'm not entirely sure that a committee couldn't have come up with PCR given a driving application, so I think Mullis is actually a poor example.

Comment by jbash on In-group loyalty is social cement · 2021-07-06T12:54:33.592Z · LW · GW

You do realize that most salespeople who do half-million-dollar deals get the vast majority of their compensation from commissions, and would be fired outright if they ever got to the point of only drawing their salaries, right?

Comment by jbash on MIRIx Part I: Insufficient Values · 2021-06-17T11:38:41.194Z · LW · GW

I don't have an alternative, and no I'm not very happy about that. I definitely don't know how to build a friendly AI. But, on the other hand, I don't see how "corrigibility" could work either, so in that sense they're on an equal footing. Nobody seems to have any real idea how to achieve either one, so why would you want to emphasize the one that seems less likely to lead to a non-sucky world?

Anyway, what I'm reacting to is this sense I get that some people assume that keeping humans in charge is good, and that humans not being in charge is in itself an unacceptable outcome, or at least weighs very heavily against the desirability of an outcome. I don't know if I've seen very many people say that, but I see lots of things that seem to assume it. Things people write seem to start out with "If we want to make sure humans are still in charge, then...", like that's the primary goal. And I do not think it should be a primary goal. Not even a goal at all, actually.

Comment by jbash on MIRIx Part I: Insufficient Values · 2021-06-16T19:24:52.670Z · LW · GW

I don't understand why anybody would want anything that involved leaving humans in control, unless there were absolutely no alternative whatsoever.

I'm not joking or being hyperbolic; I genuinely don't get it. A lot of people seem to think that humans being in control is obviously good, but it seems really, really obvious to me that it's a likely path to horrible outcomes.

Humans haven't had access to all that much power for all that long, and we've already managed to create a number of conditions that look unstable and likely to go bad in catastrophic ways.

We're on a climate slide to who-knows-where. The rest of the environment isn't looking that good either. We've managed to avoid large-scale nuclear war for like 75 whole years after developing the capability, but that's not remotely long enough to call "stable". Those same 75 years have seen some reduction in war in general, but that looks like it's turning around as the political system evolves. Most human governments (and other institutions) are distinctly suboptimal on a bunch of axes, including willingness to take crazy risks, and, although you can argue that they've gotten better in maybe the last 100 to 150 years, a large number of them now seem to have stopped getting better and started getting worse. Humans in general are systematically rotten to each other, and most of the advancement we've gotten against that seems to come from probably unsustainable institutional tricks that limit anybody's ability to get the decisive upper hand.

If you gave humans control over more power, then why wouldn't you expect all of that to get even worse? And even if you could find a way to make such a situation stably not-so-bad, how would you manage the transition, where some humans would have more power than others, and all humans, including the currently advantaged ones, would feel threatened?

It seems to me that the obvious assumption is that humans being in control is bad. And trying to think out the mechanics of actual scenarios hasn't done anything to change that belief. How can anybody believe otherwise?

Comment by jbash on MIRIx Part I: Insufficient Values · 2021-06-16T19:05:58.585Z · LW · GW

Brute forcing extended high-fidelity simulations of all the humans that have ever lived in an attempt to formulate CEV will probably be too expensive for any first-generation AGI.

Prediction 1: that will never be a possibility, period, not just for a "first-generation" anything, but all the way out to the omega point. Not if you want the simulation to have enough fidelity to be useful for any practical purpose, or even interesting.

It probably won't even be possible, let alone cost effective, to do that for one person, since you'd have to faithfully simulate how the environment would interact with that person. Any set of ideas that relies on simulations like that is going to end up being useless.

Comment by jbash on Social behavior curves, equilibria, and radicalism · 2021-06-05T03:50:28.221Z · LW · GW

A radical is someone who, for many different values of X, is on the far-left or far-right of the social behavior curve for X.

“Select how radical you’ll be at random”.

I don't see why being stubborn about one value of X should have to be correlated with being stubborn about any other value of X, so I'm confused about why there would have to be capital-R "Radicals" who are stubborn about everything, as opposed to having a relatively even division where everybody is radical about some issues and not about others. Being radical can be pretty exhausting, and it seems like a good idea to distribute that workload. I mean, I'm sure that people do tend to have natural styles, but you're also talking about which style a person should consciously adopt.

Why not either randomly choose how radical you're going to be on each specific issue independent of all others, or even try to be more radical about issues where you are most sure your view of the ideal condition is correct?

How does all of this hold up when there's a lot of hysteresis in how people behave? I can think of lots of cases where I'd expect that to happen. Maybe some people just never change the random initial state of their video...

Comment by jbash on Power dynamics as a blind spot or blurry spot in our collective world-modeling, especially around AI · 2021-06-01T20:21:00.930Z · LW · GW

perhaps the competitive dynamic was bound to emerge anyway and it was hubristic to think ourselves so important.

Um, I think that understates the case rather vastly, and lets "you" off the hook too easily.

It was always obvious that innumerable kinds of competition would probably be decisive factors in the way things actually played out, and that any attempt to hide from that would lead to dangerous mistakes. It wasn't just obvious "pre-Musk". It was blindingly obvious in 1995. Trying to ignore it was not an innocuous error.

I'm not sure that a person who would try to do that even qualifies as a "wise" fool.

Comment by jbash on We should probably buy ADA? · 2021-05-25T01:45:28.482Z · LW · GW

I can see currencies, and I can even see identity to some degree... but what are some really practical applications of smart contracts? I always get stuck on the question of how they're useful beyond about the "multisig escrow" level. Is there some obvious "killer app" that existing institutions don't solve well, and/or that would really be accessible and meaningfully useful to people who don't have access to those institutions?

I've heard some handwaving about tracking titles in off-chain assets like land or whatever, but none that actually got down and addressed the off-chain practicalities or the many complications... and tracking titles in any way that does not bite off the complications doesn't really seem to require programmable smart contracts at all. I know there's gambling and CryptoKitties, but those don't seem like very important uses. I'm skeptical about the real utility of prediction markets, and none of them seem to have done anything very earth-shaking so far.

On "social recovery", it seems obvious to me that while people who don't participate will get screwed by losing their keys, some people who do participate will get screwed by picking the wrong "trustworthy" recovery agents. I don't think the problem of managing very powerful high-value keys has been solved at all for most users, and I'm not so sure it can be solved.

Comment by jbash on Bayeswatch 4: Mousetrap · 2021-05-19T14:05:59.995Z · LW · GW

The world of these is such a dystopia. They use AIs for such trivial purposes, and work so hard to stop anything from really changing. And it seems like they have to, because in what's apparently a very long time, they've learned nothing about how to get AI to do anything really desirable, or even not to do things that are trivially undesirable... and if they keep this up, they're going to lose in the end...

Comment by jbash on Sympathy for the ferryman of Hades, or why we should keep Trump off Twitter · 2021-05-09T23:12:02.167Z · LW · GW

I am suggesting applying things like that globally, to all users, not just users who have done something to get noticed.

Does 4chan have reply delays?

By the way, I think I'd like to amend that "response delay" thing to be "after the responding user first sees the material", rather than "after the material goes up".

Comment by jbash on Sympathy for the ferryman of Hades, or why we should keep Trump off Twitter · 2021-05-09T17:20:35.561Z · LW · GW

Well, yes, basically. Here are some suggestions for exploration. I am not saying all of these are good ideas, and some of them conflict, but they're things you could look at. (A rough code sketch of how a few of them might combine appears at the end of this comment.)

  • Don't allow responses, by which I mean replying, retweeting, liking, forwarding, or whatever, until the base material has been up for something like a couple of hours. That includes responses to responses.

  • Adjust that delay so that a response that will be seen by few users can go through faster than a response that can be seen by many users... but there should always be at least a few minutes of delay for any response that is public, goes to a very large audience, or could in any way be forwarded to become public or go to a very large audience.

  • Limit the number of responses a user can post per hour. Put heavier limits on responders who don't generate a lot of original posts. Put still heavier limits on people with large followings.

  • Combine the above so that the delay or audience limits applied to you depend partly on how many posts or responses you generate in general.

  • Downrank prolific posters.

  • Downrank clusters of posters who frequently amplify one another, especially if nobody outside the clique seems to amplify them to the same degree.

  • Aggregate reposts of the same link or substantially the same text, and treat them as a single object that is shown to each user at most once.

  • When you're ranking material to display to a user, uprank material from accounts that user follows that have fewer other followers (like family and friends) over material from accounts that have more other followers (like politicians and media). On edit: heavily uprank posts from people who reciprocally follow the reader.

  • Provide downvotes, and have them actually sink material rather than upranking it as "controversial".

  • Uprank long posts and maybe posts with multiple links... especially links to sources that do not usually appear together in other posts on the platform.

  • Uprank material that's grammatically correct.

  • Eliminate avatars and emoji.

  • Consider eliminating user identity and "following" in favor of an anonymous rumor mill.

You can provide overrides for those, but they should be things that have to be selected every time you visit the site. Better yet, provide access to all the content using an API, and allow users to use clients other than the official ones... including clients that aggregate different services.
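
Here is a rough sketch of how a few of those could combine into a single re-ranking pass. Everything in it (field names, weights, thresholds) is invented purely for illustration; it's a sketch of the shape of the idea, not a claim about how any real platform scores posts.

# Rough sketch only: a hypothetical scoring pass combining a few of the ideas
# above (downrank prolific posters, uprank small and reciprocal accounts, let
# downvotes actually sink material). All field names and weights are made up.
from dataclasses import dataclass

@dataclass
class Post:
    author_follower_count: int   # how many people follow the author
    author_posts_last_hour: int  # the author's recent posting volume
    reciprocal_follow: bool      # does the author follow the reader back?
    upvotes: int
    downvotes: int
    base_score: float            # whatever the existing ranker produced

def rerank(post: Post) -> float:
    score = post.base_score
    # Downrank prolific posters: the more they post per hour, the less weight.
    score /= 1 + 0.5 * post.author_posts_last_hour
    # Uprank small accounts (family and friends) over huge ones (politicians, media).
    score *= 1000 / (1000 + post.author_follower_count)
    # Heavily uprank posts from people who reciprocally follow the reader.
    if post.reciprocal_follow:
        score *= 3.0
    # Let downvotes sink material instead of boosting it as "controversial".
    score *= (1 + post.upvotes) / (1 + post.upvotes + 2 * post.downvotes)
    return score

# A viral post from a huge, prolific, heavily-downvoted account ends up well
# below a quiet post from a mutual follow.
print(rerank(Post(500_000, 12, False, 900, 400, 10.0)))
print(rerank(Post(40, 0, True, 3, 0, 10.0)))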

Comment by jbash on Sympathy for the ferryman of Hades, or why we should keep Trump off Twitter · 2021-05-09T12:19:21.779Z · LW · GW

Seems to me that the question of whether Trump should be banned from Twitter is a distraction.

The real question is whether Twitter should ever use the means it uses to boost "engagement", or indeed should be allowed to use those means. If you solve the Trump problem, you still haven't solved the K-pop problem.

Also, I don't get the raisin analogy. Raisins in cinnamon rolls and cookies are nasty abominations that reduce addiction. As far as I can see, that's the only reason you'd put them in there...

Comment by jbash on Less Realistic Tales of Doom · 2021-05-07T11:55:00.944Z · LW · GW

Well, it's intentionally a riff on that one. I wanted one that illustrated that these "shriek" situations, where some value system takes over and gets locked in forever, don't necessarily involve "defectors". I felt that the last scenario was missing something by concentrating entirely on the "sneaky defector takes over" aspect, and I didn't see any that brought out the "shared human values aren't necessarily all that" aspect.

Comment by jbash on Less Realistic Tales of Doom · 2021-05-07T02:46:42.929Z · LW · GW

I like these. Can I add one?

Democratic Lock-In

Once upon a time, enough humans cooperated to make sure that AI would behave according to (something encoding a generally acceptable approximation to) the coherent extrapolated volition of the majority of humans. Unfortunately, it turned out that most humans have really lousy volition. The entire universe ended up devoted to sports and religion. The minority whose volition lay outside of that attractor were gently reprogrammed to like it.

Moral: You, personally, may not be "aligned".

Comment by jbash on The Fall of Rome: Why It's Relevant, And Why We're Mistaken · 2021-04-23T13:03:42.341Z · LW · GW

Where do you get your idea of the "standard view"? I've always heard the view that internal decline, late in the process, made Rome vulnerable to invaders it would easily have repulsed in the past. In fact, I have never heard anybody claim that random "barbarians" just waltzed up and posed any threat to Rome anywhere near its peak.

Comment by jbash on Problems of evil · 2021-04-19T14:27:09.915Z · LW · GW

My personal definition for "religion" has always been roughly "the belief that values (including ethics, aesthetics, or mores, and I guess including "numinous mysterious holiness") are somehow embedded or expressed in the ontologically fundamental ordering principles of the Universe (maybe, but not necessarily, because some conscious being used such values to choose those principles)". If you believe that, you have some kind of relationship with the problem of evil. If you don't, you don't...

I'm not so sure about the "love" stuff. Personally, I've never thought about whether I was "unconditionally committed" to reality, or about whether I "loved" it in any way. I'm stuck with reality, so why would it matter? Does anybody get to that sort of thought if they don't have something like the problem of evil driving them to it?

Comment by jbash on Ranked Choice Voting is Arbitrarily Bad · 2021-04-05T15:35:21.407Z · LW · GW

You're already being tactical when you decide that Carol isn't a threat and (falsely) uprank her. What changes if you go a step further to decide that she is a threat?

In fact, I think that the standard formalism for defining "tactical voting" is in terms of submitting a vote that doesn't faithfully reflect your true preferences. Under that formalism, falsely upranking Carol is tactical, but switching back to your true preferences because of what you expect others to do actually isn't tactical.

... and it's odd to talk about tactical voting as a "downside" of one system or another, since there's a theorem that says tactical voting opportunities will exist in any voting system choosing between more than two alternatives: https://en.wikipedia.org/wiki/Gibbard–Satterthwaite_theorem . At best you can argue about which system has the worst case of the disease.

And, if you're comparing the two, plurality has a pretty bad case of tactical vulnerability, probably worse than IRV/RCV. That's why people want to change it: because tactical voting under plurality entrenches two-party systems.
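
For anyone who hasn't seen the mechanics spelled out, here is a minimal sketch of an IRV/RCV tally: the surviving candidate with the fewest first-place votes is eliminated each round, and ballots transfer to their next surviving choice. The ballots and names are made up for illustration; they are not the scenario from the post.

# Minimal IRV/RCV tally sketch (illustrative only).
# Each ballot is a list of candidates in preference order.
from collections import Counter

def irv_winner(ballots):
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot for its highest-ranked surviving candidate.
        counts = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice in candidates:
                    counts[choice] += 1
                    break
        total = sum(counts.values())
        leader, leader_votes = counts.most_common(1)[0]
        if leader_votes * 2 > total or len(candidates) == 1:
            return leader
        # Eliminate the surviving candidate with the fewest first-place votes.
        loser = min(candidates, key=lambda c: counts.get(c, 0))
        candidates.discard(loser)

# Made-up ballots: a broadly acceptable second choice can win on transfers.
ballots = (
    [["Alice", "Carol", "Bob"]] * 40
    + [["Bob", "Carol", "Alice"]] * 27
    + [["Carol", "Bob", "Alice"]] * 33
)
print(irv_winner(ballots))  # Carol, after Bob is eliminated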

Comment by jbash on Ranked Choice Voting is Arbitrarily Bad · 2021-04-05T12:22:58.332Z · LW · GW

Each cohort knows that Carol is not a realistic threat to their preferred candidate, and will thus rank her second,

... except that you have her winning the election, which means that she obviously is a realistic threat, which means you don't want to vote for her. Why wouldn't the voters all assume that everybody else was going to do the same thing they were, thus making Carol a danger?

With a single vote per person, simple plurality feels like a fair result.

I don't see why you'd say that. People are always complaining about it, and strategies for it are well known and constantly discussed every time an election comes around.

Personally I like range voting, though.

Comment by jbash on Bureaucracy is a world of magic · 2021-03-30T14:37:53.870Z · LW · GW

This could easily turn into a book on the different security models. But I don't think I have time for that, and you probably don't either, so I'll try to just respond to what you said...

Faking an ID depends only on your skills and resources.

I don't think that's a productive way to look at it at all. The skills and resources I will need depend on the countermeasures taken by the people who create the IDs... who, although they may be under cost constraints, are professionals, unlike most of the users. They are the ones who invest resources, and they do have all kinds of choices about how much to invest.

There's also the question of verifier resource investment. A store clerk deciding whether to sell you beer may just glance at the card. A cop who stops you in traffic will nearly always check everything on the ID against the issuer's database, at least if it's a "local" ID... with the definition of "local" being expanded constantly. It's a three-way verification between the card, your face, and the database. I suspect notaries in many places do the same now, and I would expect the number of such places to increase. An ID card is no longer just a piece of plastic.

So, for transactions big enough for your counterparty to bother investing in serious verification, faking the ID really becomes a matter of either faking the biometrics it uses (not so easy in person even if the biometric is just a facial photograph), or subverting the issuing system.

It's true that subverting the issuing system is a class break against all of the IDs it issues, but it's also true that finding a zero-day in code that protects keys is a class break against all of the keys protected by that code.

Also, IDs are verified by people, who can make different mistakes.

... but keys are also held by people, who can make different mistakes. And they use different ways of storing the keys.

In any case, for any particular transaction, I as an attacker don't usually get my pick of verifiers. If I want to divert the payment for your house, I have to fool the particular person who handles that payment (and then I have to move very fast to get the money out of reach before they claw back the transaction). I can't get $500,000 from an escrow agent by fooling the clerk down at 7-11.

Whereas a key is either leaked or it isn't.

Well, no, actually. I said "steal your key", but the real issue is "use your key".

Suppose you're using some kind of "hardware" key storage device (they're really hardware plus quite a bit of software). The problem for me isn't necessarily to get a copy of your key out of that device. It's enough for me to induce that device to sign the wrong transaction... which can be done by tricking either it or you. I may be in a position to do that in some circumstances, but not in others.

You don't just have one thing to defend against, either. I have a pretty broad choice of approaches to tricking you and/or the device, and my options multiply if I manage to own the general-purpose computer you've plugged the device into, let alone the device itself. You have to defend against all of my options.

If you step back further, take a timeless point of view, and look at the overall history of transactions controlling a block chain's idea of a durable asset's ownership, there are going to be a lot of keys and key holders in that history. Only one of them has to go wrong to permanently divert the asset. So there are still lots of different people to trick if I want to establish a new dynasty in the manor.

You're not necessarily the only person affected if you screw up with your key, either. Arguments based on self-reliance only go so far in deciding what kind of system everybody should be using.

What I feel like I see from "blockchain people" is this sense that keys are axiomatically safe, to the point where it's always sensible to use them for large, irrevocable transactions with no escape hatch. Even people who have personally made (or failed to make) diving catches to keep, say, Ethereum contract bugs from screwing people over, still somehow seem to maintain their gut-level faith in code as law and total trustlessness.

Frankly it feels like "just world" thinking: "Key compromise (or whatever) only happens to the clumsy and lazy (who deserve what they get). I'm not clumsy or lazy, so I'll be fine". Even if that were true, there are enough clumsy and lazy people out there to cause total system collapse in a lot of applications if you don't design around them.

I actually think that block chains are a useful tool, that they can reduce the need for trust in many applications, and that that's a very good feature. Nonetheless, the idea that they can make everything completely automatic and trustless is just not reasonable.

If we're talking about real estate titles, you might be able to use a block chain to record everything, but somebody is always going to have to be able to override the system and transfer title against the will of the listed holder, or when the listed holder has lost the relevant key. There is going to have to be a "bureaucratic" system for managing those overrides, including trust in certain authorities.

By the way, I am not saying that the sort of magical thinking mentioned in the original post doesn't exist. "Send in a scan of your ID card" is stupid 99 percent of the time. "You must make the signature using a pen" is stupid and usually based on ignorance of the law. It's just that nothing else is a magic fix either.

Comment by jbash on Bureaucracy is a world of magic · 2021-03-29T13:21:22.536Z · LW · GW

Notaries serve an extremely practical purpose: they make it harder for somebody to deny that they signed a document. They were never intended to verify the content of anybody's statement of anything.

The assurance they provide is real. It is MUCH HARDER, and more importantly MUCH RISKIER, for somebody to walk in and effectively impersonate another person than it is for them to forge a document in isolation... face mask or no face mask.

Block chains, on the other hand, can be very much about magical thinking... for example the built-in assumption that it's somehow harder for me to steal your private key than to fake your ID, or the idea that an on-chain assignment of a physical asset can somehow "enforce itself" out in the real world.

Comment by jbash on Weirdly Long DNS Switchover? · 2021-02-17T15:27:26.132Z · LW · GW

Your A records are fine, but you seem to have changed name servers. Your old NS records are probably cached all over the place; the TTL on those seems to be 48 hours. It looks like the old server (at my-tss.com) is serving the correct data now, but it was probably still serving stale data when you saw the problem. Possibly it took it a while to realize that it wasn't authoritative for the zone, or possibly there was an update problem.

Generally, it's better to do the server change and the data change separately. And you have to make sure that the new and old servers are serving the same thing through the full TTL of the old NS records, or at least have the old server definitively reconfigured not to see itself as authoritative so that it can avoid misleading other systems when it gets a query.
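
As a rough sketch of that check (assuming the dig command-line tool is installed; the server names are the ones that show up in the dig output below), you can simply compare what the old and new servers return for the records you care about:

# Sketch: confirm the old and new name servers are serving the same data, so
# resolvers still holding the stale NS records get correct answers while those
# records age out of caches. Record types and server names are illustrative.
import subprocess

ZONE = "billingadvantage.com"
OLD_SERVER = "dns1.my-tss.com"
NEW_SERVER = "ns1.zerolag.com"

def query(server: str, rrtype: str) -> set[str]:
    """Return the answer records for ZONE/rrtype as seen by one server."""
    out = subprocess.run(
        ["dig", "+short", "-t", rrtype, ZONE, f"@{server}"],
        capture_output=True, text=True, check=True,
    )
    return {line.strip() for line in out.stdout.splitlines() if line.strip()}

for rrtype in ("NS", "A", "MX"):  # adjust to whatever the zone actually uses
    old, new = query(OLD_SERVER, rrtype), query(NEW_SERVER, rrtype)
    status = "OK" if old == new else "MISMATCH"
    print(f"{rrtype}: {status}  old={sorted(old)}  new={sorted(new)}")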

Evidence: If I let my system use my ISP's servers and do dig -t ns billingadvantage.com., I get the wrong cached data:

; <<>> DiG 9.11.27-RedHat-9.11.27-1.fc33 <<>> -t ns billingadvantage.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 38501
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: b250042072aca8b7ccde8a21602d31062c20fb70fa6f7d57 (good)
;; QUESTION SECTION:
;billingadvantage.com.          IN      NS

;; ANSWER SECTION:
billingadvantage.com.   3600    IN      NS      dns2.my-tss.com.
billingadvantage.com.   3600    IN      NS      dns1.my-tss.com.

;; Query time: 168 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Wed Feb 17 10:06:46 EST 2021
;; MSG SIZE  rcvd: 122

... but if I go and query the actual GTLD servers, say with dig -t ns @a.gtld-servers.net billingadvantage.com, I get the right data:

; <<>> DiG 9.11.27-RedHat-9.11.27-1.fc33 <<>> -t ns @a.gtld-servers.net billingadvantage.com
; (2 servers found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 27939
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 2, ADDITIONAL: 3
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;billingadvantage.com.          IN      NS

;; AUTHORITY SECTION:
billingadvantage.com.   172800  IN      NS      ns1.zerolag.com.
billingadvantage.com.   172800  IN      NS      ns2.zerolag.com.

Comment by jbash on New Empty Units · 2021-01-27T19:04:41.331Z · LW · GW

"Make more" space? I'm pretty sure that violates general relativity or something like that.

You can make more steel and concrete, of course, and I don't think we'll run out of the raw materials for those, but we might find ourselves under water sooner if we keep cranking them out at too high a rate.

I suppose you could pay parents to make more labor for you. It's been done in the past, but I think the approach has gone out of favor these days. The lead time is pretty long, too, and the labor you get will itself need housing, so you need to make even more of it.

Also, presumably the prices of all of the above have to go up before there's a market signal to make more of them, so the price of anything else that relies on those inputs also has to rise, distorting market feedback on demand for those other items.

Seriously, money doesn't actually create anything, and there are actually finite physical resources in the world. Any political or economic system that causes physical resources to be allocated to producing physical things that don't physically benefit anybody seems like a system with a serious problem.

All you can extract from speculators is the money, not the physical stuff. The money launderers, if they exist in significant numbers, are going to be like super-speculators who provide even more money... but still no physical stuff. So they increase the misallocation. And how do you get the money from them, anyway? I mean, aren't they already naturally subsidizing construction by buying up units? Obviously they're still not solving the problem by themselves, because there are still people without housing. What's the intervention that would put more of their money toward housing, and who would be in a position to make that intervention?

One of the big knocks people always used to give against communism was that central planning would produce too many boots and not enough chewing gum, or set unreasonable quotas that forced farmers to damage the long-term productivity of the land, or whatever. Maybe that's not absolutely intrinsic to central planning, but it seems to apply to any plan to use this speculative phenomenon, no matter how you do it.

It seems to me that you really, truly only want to build housing (or anything else) if it's going to actually get used. Obviously if you truly don't have enough units for everybody to be housed even at 100 percent occupancy, then you need to build more. But you don't want either existing or newly built units sitting vacant at any higher rate than you can avoid, no matter how much money moves around as a result.

Without having researched it in enough detail to be sure, the vacancy tax idea seems like a good first step, certainly better than anything that would create more vacant units.

Comment by jbash on New Empty Units · 2021-01-27T11:39:35.753Z · LW · GW

Why do you assume that the space, materials and labor to build unlimited housing will be available? Those physical resources have to be pulled out of some other use, if they exist at all. And if they are available, that means you will burn a bunch of those resources, and cause a bunch of environmental damage, to create housing capacity that will never be used.

The significance of the launderers is that they may greatly increase the amount of such unused capacity, because they may be even less sane about prices than regular speculators.

I'm not sure that a system that burns resources because people "want" empty units (which they don't actually want for their own sake at all) is a good system. And I'm not sure what surplus you mean. Surplus money, sure, but where's the surplus wealth? Seems to me that the actual wealth is being poured down a rathole.

Comment by jbash on New Empty Units · 2021-01-26T20:19:33.321Z · LW · GW

I'm not sure that follows. If your goal is to sell the unit on to a different money launderer, that may make it "a good investment" independent of its value as a place of residence.

I mean, we have people out there buying and selling block chain attestations to the notional "ownership" of hashes of digital images, with no limitation at all on access to the actual image content. As long as something is scarce, you can apparently use it as a store or conduit of value, regardless of whether it's useful.

Comment by jbash on New Empty Units · 2021-01-26T15:44:30.756Z · LW · GW

I don't know if it's important to your argument or not, but you're not necessarily dealing with ordinary speculators.

Rumor has it that a lot of sales of vacant urban units, especially luxury units, are to (and/or among) money launderers. That means that (1) much of the value of a unit doesn't come from its utility as a place to live, but from its acting as a cover story and/or a quasi-currency-value-conduit, and (2) the buyers actually expect to take some losses. In fact, in some cases overpaying may be the whole point, because it lets the buyer transfer extra money to the seller in exchange for something outside of the visible real estate transaction. There don't have to be that many launderers out there for them to represent a big proportion of the liquidity in the market, because they tend to turn properties over a lot, and they either don't care so much about overpaying or they actively want to overpay (which then distorts the price signal for everybody else).

Even the launderers can only absorb so many units, but I imagine they may contribute to the markets staying more irrational than you'd expect for longer than you expect. Supposedly this also applies to fine art.

Comment by jbash on Technological stagnation: Why I came around · 2021-01-24T04:33:52.520Z · LW · GW

Frankly, I think the biggest cause of the "stagnation" you're seeing is unwillingness to burn resources as the world population climbs toward 8 billion. We could build the 1970s idea of a flying car right now; it just wouldn't be permitted to fly because (a) it would (noisily) waste so much fuel and (b) it turns out that most people really aren't up to being pilots, especially if you have as many little aircraft flying around from random point A to random point B as you have cars. A lot of those old SciFi ideas simply weren't practical to begin with.

... and I think that the other cause is that of course it's easier to pick low hanging fruit.

It may not be possible to build a space elevator with any material, ever, period, especially if it has to actually stay in service under real environmental conditions. You're not seeing radically new engine types because it's very likely that we've already explored all the broad types of engines that are physically possible. The laws of physics aren't under any obligation to support infinite expansion, or to let anybody realize every pipe dream.

In fact, the trick of getting rapid improvement seems to be finding a new direction in which to expand, so that you can start at the bottom of the logistic curve. You got recent improvement in electronics and computing because microelectronics were a new direction. You didn't get more improvement in engines because they were an old direction.

Your six categories are now all old directions (except maybe manufacturing, because that can mean anything at all). In 1970, you might not have included "information"... because it wasn't so prominent in people's minds until a bunch of new stuff showed up to give it salience.

At the turn of the last century, you had much more of a "green field" in all of the areas you list. You're going to have to settle for less in those areas.

And there's no guarantee that there are any truly new directions left to go in, either. Eventually you reach the omega point.

That said, I think you're underestimating the progress in some of those areas.

Manufacturing

The real cost of basically everything is way down from 1970. Any given thing is made with less raw material, less energy, and less environmental impact.

I build stuff for fun, and the parts and materials available to me are very, very noticeably better than what I could have gotten in the 1970s.

Materials are much more specialized and they are universally better. Plastic in 1970 was pretty much synonymous with "cheap crap that falls apart easily". In 2021, plastics are often better than any other material you can find. 2021 permanent magnets are in 1970s science fiction territory (and more useful than flying cars). Lubricants and sealants are vastly better. There's a much wider variety of better controlled, more consistent metal alloys in far wider use, and they are conditioned to perform better using a much wider variety of heat treatments, mechanical processing, surface treatments, etc. Things that would have been "advanced aerospace materials" in 1970 are commonplace in 2021.

Mechanisms in general are much more reliable and durable, and require much less maintenance and adjustment.

I don't believe 1970 had significant deployment of laser cutting, waterjet cutting, EDM, or probably a bunch of other processes I'm forgetting about. They existed, but they were rare then, and they are everywhere now. 1970 had no additive manufacturing unless you count pottery.

It's true that there's no real change in how major bulk inputs are handled... because that stuff is really old (and was really old before 1970). There's not much dramatic improvement still available, and not even that many "tweaks".

Yeah, you don't have MNT. Although there's a lot of "invisible" improvement in the understanding of chemistry and the ability to manipulate things at small scales... and MNT was always supposed to be something that would suddenly pop up when those things got good enough. It might qualify as a "new direction", but there are no guarantees about exactly when such a direction will open up.

Construction

Construction has always been conservative and has never moved fast. Given a comparable budget, 1970 construction wasn't all that different than 1870 construction, the big exceptions being framed structure instead of post-and-beam and prefab gypsum board instead of in-place plastering.

As for 1970 to 2021, in 1970 you would have used much more wood to frame a house. Nobody used roof or floor trusses in residential construction. There was also a lot more lead and asbestos floating around... and they needed lead and asbestos, because without them their paints and insulation would not have remotely approached 2021 performance. For the most part they weren't as good even with them. There's also much wider deployment of plastic in construction (because plastic doesn't suck any more). Fasteners are better, too, or at least it's better understood which fastener to apply when and where.

I can tell at a glance that I'm not in a 1970 living room because the plugs are grounded. Also, unless it's a rich person's living room, the furniture is prefab flat pack particle board with veneer finishes instead of stick-built wood.

Agriculture

When I was a kid in the 1970s, the fresh food available in your average supermarket was dramatically less varied than it is now, and at the same time dramatically less palatable. Even the preserved food was more degraded. We actually ate canned vegetables at a significant rate.

If you didn't live through maybe the 1980s to the 1990s or early 2000s, you can't really have an idea how much better the food available to the average urban consumer has gotten.

A big part of that was better crop varieties, and I think another very big part was better management and logistics.

Energy

Energy is doing quite well, thanks, with several major, qualitative changes.

We have working renewables. Solar cells in 1970 were just plain unusable for any real purpose. Wind was a pain in the ass because of the mechanical unreliability of the generators (and was less efficient because of significantly worse turbine geometries). We're also better at not wasting so much energy.

Batteries in 1970 were absolute garbage in terms of capacity, energy density, energy per unit weight, cycle count, you name it. Primary cells were horrible, and rechargeables were worse. You simply did not use a ton of little battery-powered gadgets of any kind. That's partly because all the electrical devices we have now are much less power hungry, but it's also because batteries have actually started not to suck. People in 1970 would have looked at you like you were crazy if you suggested a cordless drill, and that has nothing to do with the efficiency of the motor. By the way, that progress in batteries is based on a crapton of major materials science advances.

Yeah, nuclear didn't happen, but that was for political reasons. One notable political issue was that fission plants are easy to use to make material for nuclear bombs. Nobody quite caught on to the whole CO2 issue until it was too late. And "nuclear homes, cars and batteries" were never a very practical idea, so it's not surprising that they haven't happened. You don't want every bozo handling fissionables... and controlled fusion for power is probably impossible at a small scale, even assuming it's possible at a large scale.

Transportation

The limits on transportation technology are energy and the Pauli exclusion principle. These are not things that you can easily change. You can't expect new transport modes because the physical environment doesn't change. You can't expect a bunch of new engine types because there are a limited number of physically possible engine types.

For actual deployed infrastructure, you have to add political limits (which are probably the main reasons you don't have much more efficiency by now)... and limits on what people want.

Doing a lot of space flight is a massive energy sink, and there is no urgent reason to waste that energy at the moment. Yes, I have heard the X-risk arguments, and no, they do not move me at all. Neither does asteroid mining. And the manifest destiny space colonization stuff sure doesn't. Maybe the people who want all that space flight are simply a minority?

Supersonic transport is also not worth it. Speeds have gone down because nobody wants to waste that much energy or deal with that much noise (or move the material to dig huge systems of evacuated tunnels).

Medicine

Yeah, it's a hard problem, see, because you have to hack on this really badly engineered system, which you're not allowed to shut down or modify.

That said, cancer isn't a single disease, and "the cure for cancer" was never going to be a thing. I think that actual medical people understood that even in 1970. There've actually been very significant advances against specific kinds of cancer. There are also improvements in prevention; screenings, HPV vaccine, whatever.

"Heart disease" isn't really a single disease, either. But there's a lot less of it around, with less impact, and not just because people stopped smoking. Even if you eat all the time and never exercise (which we're worse about than in 1970), ya got yer statins, yer much better blood pressure meds, yer thrombolytics, yer better surgery, yer better implantable devices...

Oh, and they turned around a vaccine against a relatively novel pandemic virus in under a year. They identified that virus, sequenced its genome, and did a ton of other characterization on its structure and action, in time that would definitely have sounded like science fiction in 1970. They actually know a lot about how it works... detailed chemical explanations for stuff that would, in 1970, have been handwaved at a level just about one step above vitalism.

Comment by jbash on Discovery fiction for the Pythagorean theorem · 2021-01-19T04:54:09.857Z · LW · GW

I don't think all that algebra (or symbolic arithmetic, or whatever you want to call it) is as intuitive to a high school student as it is to you. Frankly I find the "behold" proof really uncompelling, because it just leads me into, well, algebra. You're trying to prove what's fundamentally a nice geometric theorem about areas, and dragging in the question of "what we can calculate" seems like an unnatural complication. When you want to apply the theorem to get distances in the cartesian plane, then you can start calculating.
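
(For reference, the algebra I mean is the step a "behold"-style proof leans on. In one common arrangement, four copies of the right triangle sit inside a square of side a+b, leaving a tilted square of side c, so:

$$(a+b)^2 = c^2 + 4\cdot\tfrac{1}{2}ab \;\Longrightarrow\; a^2 + 2ab + b^2 = c^2 + 2ab \;\Longrightarrow\; a^2 + b^2 = c^2.$$

That's exactly the kind of symbol-pushing the rearrangement proof avoids.)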

I also very much doubt that that's how the theorem was historically discovered; we're mostly talking about people whose notation for writing numbers was often really cumbersome, who totally lacked any notation at all for doing algebra, and who didn't necessarily identify areas with numbers as readily as we do.

The rearrangement proof lets you engage with the figures pretty directly, whereas all the others require you use a lot of extra concepts. Not only can you get to the Pythagorean theorem without doing algebra, and without engaging with cartesian coordinates, but you can get there without engaging with the concept of similar figures. That is a good thing; at the high school level you can't expect people to be able to manipulate all of those concepts with facility at the same time.

I think you will definitely lose them if you bring in the idea of generalizing it to other shapes. At their level, the concept of "proof" is shaky at best, and the instinct for abstraction hasn't taken hold. The idea of generalized or specialized versions of a theorem is going to be hard to explain all by itself.

Comment by jbash on Genetic Engineering: How would you change your DNA? · 2021-01-14T15:47:24.044Z · LW · GW

I don't think the question is really well formed. An ability to change my DNA sequence (with or without any of the accompanying epigenetic stuff) doesn't imply any ability to predict the effect. Somebody already pointed out that an awful lot of stuff has to be done at specific points in development. I'd like to add that we have zero idea what 95 percent of the genome is doing, and probably no complete idea of what any of it is doing.

In principle, you could come up with alterations that would cause you to "regrow" into any new form you wanted. In practice, you might have to be a god to figure out what changes to make.

If I were given the ability to modify any part of any sequence at will, but stuck with the present state of knowledge of the actual effects, I'd go do a ton of research to see if there were any clearly known predispositions to disease that I could fix without too much risk of nasty side effects.

If I answer the question in the spirit of "how would you modify yourself given the ability to make any change you wanted", I'd start by giving myself the built-in ability to make further modifications at will, without outside equipment or help.

Then I'd start throwing in "superman" stuff at whatever rate my sense of identity seemed to be comfortable with. Probably starting with better resistance to disease and damage, and better healing. Then smarter-stronger-faster, better senses, etc. Lather, rinse, and repeat until thoroughly posthuman.

Comment by jbash on Thoughts on Iason Gabriel’s Artificial Intelligence, Values, and Alignment · 2021-01-14T15:23:55.626Z · LW · GW

What would it look like to have powerful intelligent systems that increased rather than decreased the extent to which humans have agency over the future?

Um, bad?

Humans aren't fit to run the world, and there's no reason to think humans can ever be fit to run the world. Not unless you deliberately modify them to the point where the word "human" becomes unreasonable.

The upside of AI depends on restricting human agency just as much as the downside does.

You seem to be relying on the idea that someday nobody will need to protect that child from a piece of glass, because the child's agency will have been perfected. Someday the adult will be able to take off all the restraints, stop trying to restrict the child's actions at all, and treat the child as what we might call "sovereign".

... but the example of the child is inapt. A child will grow up. The average child will become as capable of making good decisions as the average adult. In time, any particular child will probably get better than any particular adult, because the adult will be first to age to the point of real impairment.

The idea that a child will grow up is not a hope or a wish; it's a factual prediction based on a great deal of experience. There's a well-supported model of why a child is the way a child is and what will happen next.

On the other hand, the idea that adult humans can be made "better agents", whether in the minimum, the maximum, or the mean, is a lot more like a wish. There's just no reason to believe that. Humans have been talking about the need to get wiser for as long as there are records, and have little to show for it. What changes there have been in individual human action are arguably more due to better material conditions than to any improved ability to act correctly.

Humans may have improved their collective action. You might have a case to claim that governments, institutions, and "societies" take better actions than they did in the past. I'm not saying they actually do, but maybe you could make an argument for it. It still wouldn't matter. Governments, institutions and "societies" are not humans. They're instrumental constructs, just like you might hope an AI would be. A government has no more personality or value than a machine.

Actual, individual humans still have not improved. And even if they can improve, there's no reason to think that they could ever improve so much that an AI, or even an institution, could properly take all restraints off of them. At least not if you take radical mind surgery off the table as a path to "improvement".

Adult humans aren't truly sovereign right now. You have comparatively wide freedom of action as an adult, but there are things that you won't be allowed to do. There are even processes for deciding that you're defective in your ability to exercise your agency properly, and taking you back to childlike status.

The collective institutions spend a huge amount of time actively reducing and restricting the agency of real humans, and a bunch more time trying to modify the motivations and decision processes underlying that agency. They've always done that, and they don't show any signs of stopping. In fact, they seem to be doing it more than they did in the past.

Institutions may have fine-tuned how they restrict individual agency. They may have managed to do it more when it helps and less when it hurts. But they haven't given it up. Institutions don't make individual adults sovereign, not even over themselves and definitely not in any matter that affects others.

It doesn't seem plausible that institutions could keep improving outcomes if they did make individuals completely sovereign. So if you've seen any collective gains in the past, those gains have relied on constructed, non-human entities taking agency away from actual humans.

In fact, if your actions look threatening enough, even other individuals will try to restrain you, regardless of the institutions. None of us is willing to tolerate just anything that another human might decide to do, especially not if the effects extend beyond that person.

If you change the agent with the "upper hand" from an institution to an AI, there's no clear reason to think that the basic rules change. An AI might have enough information, or enough raw power, to make it safe to allow humans more individual leeway than they have under existing institutions... but an AI can't get away with making you totally sovereign any more than an institution can, or any more than another individual can. Not unless "making you sovereign" is itself the AI's absolute, overriding goal... in which case it shouldn't be waiting around to "improve" you before doing so.

There's no point at which an AI with a practical goal system can tell anything recognizably human, "OK, you've grown up, so I won't interfere if you want to destroy the world, make life miserable for your peers, or whatever".

As for giving control to humans collectively, I don't think it's believable that institutions could improve to the point where a really powerful and intelligent AI could believe that those institutions would achieve better outcomes for actual humans than the AI could achieve itself. Not on any metric, including the amount of personal agency that could be granted to each individual. The AI is likely to expect to outperform the institutions, because the AI likely would outperform the institutions. Ceding control to humans collectively would just mean humans individually losing more agency... and more of other good stuff, too.

So if you're the AI, and you want to do right by humans, then I think you're going to have to stay in the saddle. Maybe you can back way, way off if some human self-modifies to become your peer, or your superior... but I don't think that critter you back off from is going to be "human" any more.

Comment by jbash on Should we postpone AGI until we reach safety? · 2020-11-21T16:45:05.048Z · LW · GW

You say 'I've been in or near this debate since the 1990s'. That suggests there are many people with my opinion. Who?

Nick Bostrom comes to mind as at least having a similar approach. And it's not like he's without allies, even in places like Less Wrong.

... and, Jeez, back when I was paying more attention, it seemed like some kind of regulation, or at least some kind of organized restriction, was the first thing a lot of people would suggest when they learned about the risks. Especially people who weren't "into" the technology itself.

I was hanging around the Foresight Institute. People in that orbit were split about 50-50 between worrying most about AI and worrying most about nanotech... but the two issues weren't all that different when it came to broad precautionary strategies. The prevailing theory was roughly that the two came as a package anyway; if you got hardcore AI, it would invent nanotech, and if you got nanotech, it would give you enough computing power to brute-force AI. Sometimes "nanotech" was even taken as shorthand for "AI, nanotech, and anything else that could get really hairy"... vaguely what people would now follow Bostrom and call "X-risk". So you might find some kindred spirits by looking in old "nanotech" discussions.

There always seemed to be plenty of people who'd take various regulate-and-delay positions in bull sessions like this one, both online and offline, with differing degrees of consistency or commitment. I can't remember names; it's been ages.

The whole "outside" world also seemed very pro-regulation. It felt like about every 15 minutes, you'd see an op-ed in the "outside world" press, or even a book, advocating for a "simple precautionary approach", where "we" would hold off either as you propose, until some safety criteria were met, or even permanently. There were, and I think still are, people who think you can just permanently outlaw something like AGI ,and that will somehow actually make it never happen. This really scared me.

I think the word "relinquishment" came from Bill McKibben, who I as I recall was, and for all I know may still be, a permanent relinquishist, at least for nanotech. Somebody else had a small organization and phrased things in terms of the "precautionary principle". I don't remember who that was. I do remember that their particular formulation of the precautionary principle was really sloppy and would have amounted to nobody ever being allowed to do anything at all under any circumstances.

There were, of course, plenty of serious risk-ignorers and risk-glosser-overs in that Foresight community. They probably dominated in many ways, even though Foresight itself definitely had a major anti-risk mission component. For example, an early, less debugged version of Eliezer Yudkowsky was around. I think, at least when I first showed up, he still held just-blast-ahead opinions that he has, shall we say, repudiated rather strongly nowadays. Even then, though, he was cautious and level-headed compared to a lot of the people you'd run into. I don't want to make it sound like everybody was trying to stomp on the brakes or even touch the brakes.

The most precautionary types in that community probably felt pretty beleaguered, and the most "regulatory" types even more so. But you could definitely still find regulation proponents, even among the formal leadership.

However, it still seems to me that ideas vaguely like yours, while not uncommon, were often "loyal opposition", or brought in by newcomers... or they were things you might hear from the "barbarians at the gates". A lot of them seemed to come from environmentalist discourse. On the bull-session level, I remember spending days arguing about it on some Greenpeace forum.

So maybe your problem is that your personal "bubble" is more anti-regulation than you are? I mean, you're hanging out on Less Wrong, and people on Less Wrong, like the people around Foresight, definitely tend to have certain viewpoints... including a general pro-technology bias, an urge to shake up the world, and often extremely, even dogmatically anti-regulation political views. If you looked outside, you might find more people who think the way you do. You could look at environmentalism generally, or even at "mainstream" politics.

Comment by jbash on Should we postpone AGI until we reach safety? · 2020-11-18T22:39:20.545Z · LW · GW

Not to speak for Dagon, but I think point 2 as you write it is way, way too narrow and optimistic. Saying "it would be rather difficult to get useful regulation" is sort of like saying "it would be rather difficult to invent time travel".

I mean, yes, it would be incredibly hard, way beyond "rather difficult", and maybe into "flat-out impossible", to get any given government to put useful regulations in place... assuming anybody could present a workable approach to begin with.

It's not a matter of going to a government and making an argument. For one thing, a government isn't really a unitary thing. You go to some *part* of a government and fight to even get your issue noticed. Then you compete with all the other people who have opinions. Some of them will directly oppose your objectives. Others will suggest different approaches, leading to delays in hashing out those differences, and possibly to compromises that are far less effective than any of the sides' "pure" proposals.

Then you get to take whatever you hashed out with the part of the government you've started dealing with, and sell it in all of the other parts of that government and the people who've been lobbying them. In the process, you find out about a lot of oxen you propose to gore that you didn't even imagine existed.

In big countries, people often spend whole careers in politics, arguing, fighting, building relationships, doing deals... to get even compromised, watered-down versions of the policies they came in looking for.

But that's just the start. You have to get many governments, possibly almost all governments, to put in similar or at least compatible regulations... bearing in mind that they don't trust each other, and are often trying either to compete with each other, or to advantage their citizens in competing with each other. Even that formulation is badly oversimplified, because governments aren't the only players.

You also have to get them to apply those regulations to themselves, which is hard because they will basically all believe that the other governments are cheating, and probably that the private sector is also cheating... and they will probably be right about that. And of course it's very easy for any kind of leader to kid themselves that their experts are too smart to blow it, whereas the other guys will probably destroy the world if they get there first.

Which brings you to compliance, whether voluntary or coerced, inside and outside of governments. People break laws and regulations all the time. It's relatively easy to enforce compliance if what you're trying to stamp out is necessarily large-scale and conspicuous... but not all dangerous AI activity necessarily has to be that way. And nowadays you can coordinate a pretty large project in a way that's awfully hard to shut down.

Then there's the blowback. There's a risk of provoking arms races. If there are restrictions, players have incentives to move faster if they think the other players are cheating and getting ahead... but they also have incentives to move if they think the other players are not cheating, and can therefore be attacked and dominated. If a lot of the work is driven into secrecy, or even if people just think there might be secret work, then there are lots of chances for people to think both of those things... with uncertainty to make them nervous.

... and, by the way, by creating secrecy, you've reduced the chance of somebody saying "Ahem, old chaps, have you happened to notice that this seemingly innocuous part of your plan will destroy the world?" Of course, the more risk-averse players may think of things like that themselves, but that just means that the least risk-averse players become more likely first movers. Probably not what you wanted.

Meanwhile, resources you could be using to win hearts and minds, or to come up with technical approaches, end up tied up arguing for regulation, enforcing regulation, and complying with regulation.

... and the substance of the rules isn't easy, either. Even getting a rough, vague consensus on what's "safe enough" would be really hard, especially if the consensus had to be close enough to "right" to actually be useful. And you might not be able to make much progress on safety without simultaneously getting closer to AGI. For that matter, you may not be able to define "AGI" as well as you might like... nor know when you're about to create it by accident, perhaps as a side effect of your safety research. So it's not as simple as "We won't do this until we know how to do it safely". How can you formulate rules to deal with that?

I don't mean to say that laws or regulations have no place, and still less do I mean to say that not-doing-bloody-stupid-things has no place. They do have a place.

But it's very easy, and very seductive, to oversimplify the problem, and think of regulation as a magic wand. It's nice to dream that you can just pass a law, and this or that will go away, but you don't often get that lucky.

"Relinquish this until it's safe" is a nice slogan, but hard to actually pin down into a real, implementable set of rules. Still more seductive, and probably more dangerous, is the idea that, once you do come up with some optimal set of rules, there's actually some "we" out there that can easily adopt them, or effectively enforce them. You can do that with some rules in some circumstances, but you can't do it with just any rules under just any circumstances. And complete relinquishment is probably not one you can do.

In fact, I've been in or near this particular debate since the 1990s, and I have found that the question "Should we do X" is a pretty reliable danger flag. Phrasing things that way invites the mind to think of the whole world, or at least some mythical set of "good guys", as some kind of unit with a single will, and that's just not how people work. There is no "we" or "us", so it's dangerous to think about "us" doing anything. It can be dangerous to talk about any large entity, even a government or corporation, as though it had a coordinated will... and still more so for an undefined "we".

The word "safe" is also a scary word.

Comment by jbash on Legalize Blackmail: An Example · 2020-10-16T18:22:12.756Z · LW · GW

It would especially be a waste of time to copy and paste Hanson's stuff because, "Checkmate" subject lines aside, as far as I can tell, he's never posted anything that addressed anything I've said in this thread at all. And he keeps repeating himself while failing to engage properly with important objections... including clearly consequentialist ones.

We were talking about a case where the blackmailable behavior was already going on. As far as I can tell, Hanson hasn't mentioned the question of whether blackmail provides any meaningful incentive to stop an existing course of blackmailable behavior. I don't know what he'd say. I don't think it does.

If you want to broaden the issue to blackmail in general, Hanson's basic argument seems to be that having to pay blackmail is approximately as much a punishment for bad behavior as gossip would be, while the availability of cash payments would attract significantly more people who could actually administer such punishment. Therefore, if you want to see some particular behavior punished, you should permit blackmail regarding at least that kind of behavior.

He apparently takes it as given that blackmail being more available would meaningfully increase disincentives for "blackmailable" behavior, and would therefore actually reduce such behavior enough to be interesting. That's where he starts, and everything from then on is argued on that assumption. I don't see anywhere where he justifies it. He appears to think it's obvious.

I don't think it's true or even plausible.

I think Hanson probably overestimates how much legalized blackmail (and presumably more socially acceptable blackmail) would enlarge the pool of potential "enforcers". Blackmail is already somewhat hard to punish, so you'd expect formal legalization to have limited effect on its attractiveness. On the "supply side", it's much easier to act opportunistically on compromising information than it is to go into the business of speculatively trying to develop compromising information on specific targets.

But he could be right about that part, assuming a rational blackmailer. Legality has a big effect on large-scale, organized group activity. Maybe if it were legal, you really would have more organized, systematic attempts to investigate high-value targets with the idea of blackmailing them.

Where I think he's most surely wrong is on the strength of the effect that would have on the behavior of the potential blackmail-ee. I don't think that, in practice, significantly fewer people would choose to initiate blackmailable behavior.

Law, gossip, and illegal blackmail already produce most of the disincentives that legalized blackmail would. You already run a pretty large risk of anything you do being exposed. Much more importantly, it doesn't matter, because people don't react rationally or proportionately to that kind of disincentive.

People do not in fact think "I won't do this because I might be blackmailed". At the most, they may think "I won't do this because I might be found out", but they don't then break down the consequences of being found out into being blackmailed, being gossiped about, or just being thought badly of. Even noticing that something being found out might be an issue is more than most people will do. And forget about anybody consciously assigning an equivalent monetary value to being found out.

People might, maybe, think separately about the consequences of being punished criminally, but honestly I think the psychological mechanism there is much more "I don't want to see myself as a criminal", than "I don't want to get punished". People who actually commit crimes overwhelmingly don't expect to get caught, whether or not that expectation is rational. No plausible amount of blackmail-oriented investigation is going to change that expectation, any more than the already fairly large amount of government criminal investigation does.

Hanson seems to want to talk about a world of logically omniscient, rational, consequentialist actors, with unlimited resources to spend on working out the results of every alternative, and the ability to put a firm monetary value on everything. On a lot of issues, the vast majority of humans are not even approximately logically omniscient, rational, consequentialist actors, and the monetary values they assign to things are deeply inconsistent. Deviant behavior is one of the areas where people are least rational. Blackmail is all about deviant behavior, so Hanson's whole analytical framework is inappropriate from the get-go. It's probably a bit less unreasonable for modeling the conduct of blackmailers, but it's wholly wrong for modeling the behavior of blackmail-ees.

Speaking of that "rationality differential", there was also a suggestion in comments that legalized blackmail would give potential blackmailers an incentive to induce blackmailable behavior. That's pretty plausible and actually happens. It's 101 material in spy school, for example. It's often done by intentionally exploiting flaws in the target's rationality.

As far as I could see, Hanson just ignored that part of the comment. I'm tempted to put words into his mouth and imagine him saying "Why would you let a blackmailer induce you into blackmailable behavior? That's stupid." Well, maybe it is stupid, but it happens all the time. So we have at least one unambiguously negative effect that he's totally ignored. All by itself, that one negative effect would probably create more blackmailable behavior than fear of blackmail would deter.

Hanson doesn't even concede enough to address the fact that the risk of blackmail is more manageable, and therefore possibly more appealing, than the risk of actual exposure, so a somewhat-less-extremely-unrealistic human actor might actually be more inclined to engage in "blackmailable" behavior if they expected anyone who happened to discover their behavior to blackmail them, rather than to simply expose them.

He's also talking about changes to law, but he ignores all of the factors that make legal systems gameable and load those systems up with friction. Unless forced, he more or less models the legal system as purely mechanical, and doesn't care to get into how actors actually use and abuse legal processes, nor systematic differences in various actors' sophistication, skills or resources for doing so. When negative effects based in those areas are brought up in comments, he just doubles down and suggests creating ever larger, more sweeping, systemwide legal changes essentially by fiat... which is politically impossible. He might as well suggest changing the law of gravity.

The behavior of homo economicus in a frictionless environment is simply uninteresting, and isn't quite as cut-and-dried as Hanson would have it anyway. The observed behavior of real humans suggests significant negative effects from putting his proposal into actual practice, and argues against the idea that any of his suggested benefits would be realized. And suggesting policy changes that are politically impossible is largely pointless anyway.

Personally, I'm not sure legalized blackmail would make a big difference in the world as a whole, but, if it did, I would expect it to be far more negative than positive. I would expect it to create a world with more snares, more fear, much more deliberately induced bad behavior, and significantly more system gaming of all kinds, while not deterring much, if any, pre-existing bad behavior. Hanson hasn't said anything that changes that impression.

I will now go back to my usual policy of ignoring Hanson completely...

Comment by jbash on Legalize Blackmail: An Example · 2020-10-15T21:07:17.788Z · LW · GW

SM only had to have Zahara arrested to tarnish his personal reputation and prevent whistleblowing. Future whistleblowers can see what happened to Zahara and will choose not to come forward.

Well, OK, but in a counterfactual world with legal blackmail, other failure modes would surely show up. For example, if blackmail were legal, a target would have an easier time casting doubt on a whistleblower by claiming that the whole accusation was made up for financial reasons. Or finding something legitimately embarrassing and, well, blackmailing the whistleblower into silence. All completely legal, and much easier for a more powerful party to pull off than for a less powerful party.

For that matter, you could see other ways to chill legal process specifically. I have no idea what grounds this guy sued on to begin with, but I doubt it was airtight. You might get legal advice like "You have an iffy case suing over this, so why don't you just blackmail them?". You could even imagine a court saying "Well, you don't have standing because you didn't take a reasonably available option to remedy the harm to you, namely blackmailing the defendant".

... and if you just want to frame somebody for any old crime to discredit them, it's easy to substitute something else for blackmail. Very possibly even something else directly related to the case at hand.

Treating this case as an argument for legal blackmail seems like weird cherry-picking that relies on very specific and unusual facts.

We do know that, if blackmail were legal, we would have better information about which world we are in.

Maybe in this case, but if blackmail were legal, there would, in general, be an incentive to monetize negative information about various actors, rather than publicizing it. How does that lead to more information?

So far as I can tell, if blackmail were legal...

  1. There would be some tiny additional risk to targets beyond their already large risk of being exposed. Very likely not enough risk to deter a significant amount of malfeasance before the fact.

  2. Financial incentives would convert some whistleblowers into blackmailers. Whereas a whistleblower may force some bad activity to stop, a blackmailer is really only guaranteed to collect money because of their knowledge.

I want to emphasize that... I don't see where blackmail would ever give anybody any incentive to actually stop any malfeasance.

A halfway competent blackmailer will make the demand more attractive (to the decision maker) than the consequences of exposure. Assuming the blackmailer could convince the target that the demand was supportable and that no insupportable future demands would be made, you'd expect the target to pay. And in a world with legal blackmail, the target would presumably have legal process available to keep the blackmailer from making extra demands later, so that "trust" would be easier to get (this system is really starting to look like a paradise for crazy legal games).
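
To make that concrete, here's a toy expected-cost comparison for the target's decision (a minimal sketch; the probabilities and dollar figures are invented purely for illustration, and, as argued above, real people rarely reason this way):

```python
# Toy model of the blackmail target's decision, with made-up numbers.
p_exposure_if_refuse = 0.8    # target's guess that refusal leads to exposure
cost_of_exposure = 1_000_000  # reputational / legal / career damage if exposed
demand = 200_000              # blackmailer's price, set below the expected damage

expected_cost_refuse = p_exposure_if_refuse * cost_of_exposure  # 800,000
expected_cost_pay = demand                                      # 200,000, assuming no repeat demands

print("pay the blackmailer:", expected_cost_pay < expected_cost_refuse)  # True
# Note what's missing: nothing in this comparison touches the payoff of the
# underlying malfeasance itself, which is why paying changes so little.
```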

Once having paid, and thereby eliminated the immediate risk of exposure, the target would have no reason not to simply keep going with the bad behavior. If anything, the target might be hungrier for cash to pay ongoing blackmail.

The target's risk analysis after the blackmail is pretty much the same as it was before the blackmail. Obviously the target started the malfeasance in the first place, and has continued it thereafter, so that analysis must favor the malfeasance.

Of course, the blackmailer could make stopping part of the demand... but that requires an altruistic choice to demand "pay me and stop" rather than "pay me more". Even then, it only works if the target believes that the blackmailer can and will actually find out about any failure to stop.

Comment by jbash on Postmortem to Petrov Day, 2020 · 2020-10-04T02:59:11.619Z · LW · GW

Would it also be reasonable for a user to expect that the administrator of a site would not expose it to being shut down by some random person, if the administrator did not see the matter as a game?

Comment by jbash on Postmortem to Petrov Day, 2020 · 2020-10-04T02:54:02.126Z · LW · GW

Indeed, last year I know of a user (not Chris Leong) who visited the site, clicked the red button, and entered and submitted codes, before finding out what the button did.

As a result of that user, this year we changed the red button to the following, so that mistake would not happen again.

If I showed up cold (not as a person who'd actually been issued codes and not having advance knowledge of the event), and saw even the 2020 button with "Click here to destroy Less Wrong", it would never cross my mind that clicking it would actually have any effect on the site, regardless of what it says.

I'd assume it was some kind of joke or an opportunity to play some kind of game. My response would be to ignore it as a waste of time... but if I did click on it for some reason, and was asked for a code, I'd probably type some random garbage to see what would happen. Still with zero expectation of actually affecting the site.

Who would believe what it says? It's not even actually true; "destroy" doesn't mean "shut down for a day".

The Web is full of jokey "red buttons", and used to have more, so the obvious assumption is that any one you see is just another one of those.

Comment by jbash on AGI safety from first principles: Control · 2020-10-04T00:43:35.346Z · LW · GW

Why would you expect it to be "us" versus "the AI" (or "the AIs")? Where's this "us" coming from?

I would think it would be very, very likely for humans to try to enlist AGIs as allies in their conflicts with other humans, to rush the development and deployment of such AGIs, to accept otherwise unacceptable risks of mistakes, to intentionally remove constraints they'd otherwise put on the AGIs' actions, and to give the AGIs more resources than they'd otherwise get. It's not just that you can't rely on a high level of coordination; it's that you can rely on a high level of active conflict.

There'll always be the assumption that if you don't do it first, the other guy will do it to you. And you may rightly perceive the other guy as less aligned with you than the AGI is, even if the AGI is not perfectly aligned with you either.

Of course, you could be wrong about that, too, in which case the AGI can let the two of you fight, and then mop up the survivors. Probably using the strategy and tactics module you installed.

Comment by jbash on Hiring engineers and researchers to help align GPT-3 · 2020-10-02T13:40:14.500Z · LW · GW

it would be much better if we had an API that was trying to help the user instead of trying to predict the next word of text from the internet.

"I'm from OpenAI, and I'm here to help you".

Seriously, it's not obvious that you're going to do anything but make things worse by trying to make the thing "try to help". I don't even see how you could define or encode anything meaningfully related to "helping" at this stage anyway.

As for the bottom line, I can imagine myself buying access to the best possible text predictor, but I can't imagine myself buying access to something that had been muddied with whatever idea of "helpfulness" you might have. I just don't want you or your code making that sort of decision for me, thanks.

Comment by jbash on What should I teach to my future daughter? · 2020-06-19T16:25:33.782Z · LW · GW

I suggest that you relax a bit. She's not going to be learning programming or anything like it for years, regardless. Newborns spend months to years just learning how to use their own limbs and process the information from their own senses.

And I've never heard any evidence at all that, say, programming, is particularly important to learn all that early in life. Manual/mental skills like musical performance seem to turn out best if started early (but not necessarily as a toddler!). Languages, too. I could even imagine that critical logical thinking would benefit from early exposure. But programming? That's something you can figure out.

In the long run, meta-skills are important... things that let you decide for yourself which skills to learn and learn them on your own. And things that let you evaluate both the truth and the usefulness of all the stuff that everybody else is trying to teach you. Beyond that, the more flexible and generalizable the better.

But the biggest thing is this: she's going to be her own person. By the time she's old enough to be taught the kinds of hands-on skills you're talking about, she's going to have her own ideas about what she wants to learn. "This civilization" isn't some kind of apocalyptic dystopia, and you don't know "what is coming". In all probability, it will all add up to normality. In all probability, she will muddle through. ... and in all probability, neither you nor anybody here can guess what very specific skills she's going to need. Assuming, that is, that human skills are even relevant at all when she grows up.

Please don't drive her insane by pushing "needed practical skills". Let her enjoy life, and let her learn by doing things that engage her. While you're unlikely to make a monster impact by predicting what she'll need in the future, you will definitely have an impact on her present, and maybe on how she sees learning in general.

Comment by jbash on On “COVID-19 Superspreader Events in 28 Countries: Critical Patterns and Lessons” · 2020-04-29T20:32:19.140Z · LW · GW

Um, direction of airflow, by definition, doesn't affect the ballistic transmission of anything. On the other hand, the longer something hangs in the air, the more it's affected by the direction of airflow, and that applies all the way down to gas molecules.

Singing or breathing hard seems likely to increase droplets of all sizes right down to submicron.
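
To put rough numbers on the hang-time point, here's a back-of-the-envelope Stokes-law estimate of how fast water droplets of various sizes settle in still air (a quick sketch; it assumes spherical droplets, ignores evaporation, and is only order-of-magnitude accurate, especially for the largest sizes):

```python
# Rough Stokes-law settling velocities for water droplets in still air.
RHO_WATER = 1000.0   # kg/m^3
RHO_AIR = 1.2        # kg/m^3
MU_AIR = 1.8e-5      # Pa*s, dynamic viscosity of air
G = 9.81             # m/s^2

def settling_velocity(diameter_m: float) -> float:
    """Terminal velocity (m/s) of a small sphere under Stokes drag."""
    r = diameter_m / 2.0
    return 2.0 * (RHO_WATER - RHO_AIR) * G * r**2 / (9.0 * MU_AIR)

for d_um in (1, 10, 100):
    v = settling_velocity(d_um * 1e-6)
    print(f"{d_um:>3} um droplet: ~{v:.1e} m/s, ~{2.0 / v / 60:.1f} min to fall 2 m")
# ~1 um droplets take many hours to fall 2 m, so they go wherever the air goes;
# ~100 um droplets fall in seconds and behave much more ballistically.
```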

Comment by jbash on How credible is the theory that COVID19 escaped from a Wuhan Lab? · 2020-04-04T15:12:15.165Z · LW · GW

Because it is likely to:

  1. Damage international relations and cooperation in the middle of a pandemic. You have US Senators out there pushing this thing. That's going to offend the Chinese government. At the absolute least, it will distract people from cooperating.
  2. Cause another wave of anti-Asian, and specifically anti-Chinese, racist attacks. Such attacks happened even when everybody thought the whole thing was an accident. If you make them believe it was deliberate (on edit: they will believe this even if the rumor is that it was an accident, and there's still a big problem if they only believe it was careless), they will definitely do it more.

In short, providing oxygen to rumors like this makes them more credible and more available to idiots. Idiots are predictable elements of the world, and you can reasonably anticipate their responses to the conditions you create.

Comment by jbash on How credible is the theory that COVID19 escaped from a Wuhan Lab? · 2020-04-04T02:58:07.445Z · LW · GW
  1. This is not particularly credible.

  2. It's also not particularly important.

  3. Even if it were 100 percent true, it would be what I believe Less Wrong likes to call an "infohazard". Unless you want to literally get people killed, you don't want to spread this stuff.

Comment by jbash on Bogus Exam Questions · 2020-03-28T14:16:48.225Z · LW · GW

Erm, the students are not expected to understand the math, and are not being tested on their understanding of the math. The professor doesn't understand the math either. I mean that there is epsilon chance that any given psychology professor, especially an educational psychology professor, has ever heard the phrase "sigma algebra". If they have, it's because they're math hobbyists, not because it's ever come up in their professional work.

In a psychology course, "runs a multiple regression" means "follows a specific procedure analogous to a computer program". The whole thing is a black box. The decision about when it's valid to use that procedure is made based on various rules of thumb, which are passed around mostly as folklore, and are themselves understood and followed with varying degrees of conscientiousness. The same applies to the question of what the results mean.
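
For concreteness, here is roughly what that black-box procedure amounts to (a minimal sketch using NumPy's least-squares routine on fabricated data; coursework would usually reach for a stats package instead, but the shape of the exercise is the same):

```python
import numpy as np

# Fabricated data: 50 students, two predictors (study hours, sleep hours), one outcome (exam score).
rng = np.random.default_rng(0)
study = rng.uniform(0, 10, 50)
sleep = rng.uniform(4, 9, 50)
score = 50 + 3 * study + 2 * sleep + rng.normal(0, 5, 50)

# "Run a multiple regression": stack a constant column with the predictors, solve for coefficients.
X = np.column_stack([np.ones_like(study), study, sleep])
coefs, residuals, rank, _ = np.linalg.lstsq(X, score, rcond=None)
print("intercept, study, sleep coefficients:", coefs)

# The procedure happily produces numbers either way; nothing in it checks whether
# the assumptions behind interpreting those numbers actually hold for your data.
```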

It's absolutely a valid criticism that people in fields like psychology tend to misapply statistical methods and misunderstand statistical results. They do need an intuitive understanding of what they're doing, good enough to know when they can apply the method and what the results actually show. And it's true that most of them probably don't have that understanding.

On the other hand, it's also true that you don't need to understand the math at a very deep level to use the techniques practically. They don't need to be able to derive the method from first principles, nor to be able to rigorously prove that everything works, nor to recognize pathological corner cases that will literally never be encountered in real applications. Those are unreasonable things to ask. Remember that their goal is to understand psychology, not to understand mathematics.

Students in all kinds of science and engineering are allowed to use instruments that they couldn't build themselves, and practitioners still more so. They're not expected to understand every possible corner-case limitation of those instruments, either. At most they're given some rules for when they can or can't use and rely on an instrument.

It's still a really lame question, though, and the fact that it's asked does show a problem. Nobody seems to be looking for even an intuitive grasp of all the stuff that's lurking in that word "expect".

Comment by jbash on [HPMOR] Harry - example or anti-example? · 2020-02-25T13:27:16.061Z · LW · GW

Chapter 122, paragraph beginning with "And right now, Harry James Potter-Evans-Verres was still a walking catastrophe"... and the stuff immediately preceding and following it. Seems like a pretty direct answer to your question.

Comment by jbash on [Review] On the Chatham House Rule (Ben Pace, Dec 2019) · 2019-12-12T18:47:48.201Z · LW · GW

I don't think Apple is a useful model here at all.

I'm pretty sure secrecy has been key for Apple's ability to control its brand,

Well, Apple thinks so anyway. They may or may not be right, and "control of the brand" may or may not be important anyway. But anyway it's true that Apple can keep secrets to some degree.

and it's not just slowed itself down,

Apple is a unitary organization, though. It has a boundary. It's small enough that you can find the person whose job it is to care about any given issue, and you are unlikely to miss anybody who needs to know. It has well-defined procedures and effective enforcement. Its secrets have a relatively short lifetime of maybe as much as 2 or 3 years.

Anybody who is spying on Apple is likely to be either a lot smaller, or heavily constrained in how they can safely use any secret they get. If I'm at Google and I steal something from Apple, I can't publicize it internally, and in fact I run a very large risk of getting fired or turned in to law enforcement if I tell it to the wrong person internally.

Apple has no adversary with a disproportionate internal communication advantage, at least not with respect to any secrets that come from Apple.

The color of the next iPhone is never going to be as interesting to any adversary as an X-risk-level AI secret. And if, say, MIRI actually has a secret that is X-risk-level, then anybody who steals it, and who's in a position to actually use it, is not likely to feel the least bit constrained by fear of MIRI's retaliation in using it to do whatever X-risky thing they may be doing.

Comment by jbash on [Review] On the Chatham House Rule (Ben Pace, Dec 2019) · 2019-12-12T18:34:39.081Z · LW · GW

MIRI's written about going non-disclosed by default. I expect you to think this is fine and probably good and not too relevant, because it's not (as far as the writeup suggests) an attempt to keep secrets from the US government, and you expect they'd fail at that. Is that right?

No, I think it's probably very counterproductive, depending on what it really means in practice. I wasn't quite sure what the balance was between "We are going to actively try to keep this secret" and "It's taking too much of our time to write all of this up".

On the secrecy side of that, the problem isn't whether or not MIRI's secrecy works (although it probably won't)[1]. The problem is with the cost and impact on their own community from their trying to do it. I'm going to go into that further down this tome.

And OpenAI is attempting to push more careful release practises into the overton window of discussion in the ML communities (my summary is here). [...] For example, there are lots of great researchers in the world that aren't paid by governments, and those people cannot get the ideas [...]

That whole GPT thing was just strange.

OpenAI didn't conceal any of the ideas at all. They held back the full version of the actual trained network, but as I recall they published all of the methods they used to create it. Although a big data blob like the network is relatively easy to keep secret, if your goal is to slow down other research, controlling the network isn't going to be effective at all.

... and I don't think that slowing down follow-on research was their goal. If I remember right, they seemed to be worried that people would abuse the actual network they'd trained. That was indeed unrealistic. I've seen the text from the full network, and played with giving it prompts and seeing what comes out. Frankly, the thing is useless for fooling anybody and wouldn't be worth anybody's time. You could do better by driving a manually created grammar with random numbers, and people already do that.

Treating it like a Big Deal just made OpenAI look grossly out of touch. I wonder how long it took them to get the cherry-picked examples they published when they made their announcement...

So, yes, I thought OpenAI was being unrealistic, although it's not the kind of "romanticization" I had in mind. I just can't figure out what they could have stood to gain by that particular move.

All that said, I don't think I object to "more careful release practices", in the sense of giving a little thought to what you hand out. My objections are more to things like:

  1. Secrecy-by-default, or treating it as cost-free to make something secret. It's impractical to have too many secrets, and tends to dilute your protection for any secrets you actually do truly need. In the specific case of AI risk, I think it also changes the balance of speed between you and your adversaries... for the worse. I'll explain more about that below when I talk about MIRI.

  2. The idea that you can just "not release things", without very strict formal controls and institutional boundaries, and have that actually work in any meaningful way. There seems to be a lot of "illusion of control" thinking going on. Real secrecy is hard, and it gets harder fast if it has to last a long time.

To set the frame for the rest, I'm going to bloviate a bit about how I've seen secrecy to work in general.

One of the "secrets of secrecy" is that, at any scale beyond two or three people, it's more about controlling diffusion rates than about creating absolute barriers. Information interesting enough to care about will leak eventually.

You have some amount of control over the diffusion rate within some specific domains, and at their boundaries. Once information breaks out into a domain you do not control, it will spread according to the conditions in that new domain regardless of what you do. When information hits a new community, there's a step change in how fast it propagates.

Which brings up the next not-very-secret secret: I'm wrong to talk about a "diffusion rate". The numbers aren't big enough to smooth out random fluctuations the way they are for molecules. Information tends to move in jumps for lots of reasons. Something may stay "secret" for a really long time just because nobody notices it... and then become big news when it gets to somebody who actively propagates it, or to somebody who sees an implication others didn't. A big part of propagation is the framing and setting; if you pair some information with an explanation of why it matters, and release it into a community with a lot of members who care, it will move much, much faster than if you don't.[2]

So, now, MIRI's approach...

The problem with what MIRI seems to be doing is that it disproportionately slows the movement of information within their own community and among their allies. In most cases, they will probably hurt themselves more than they hurt their "adversaries".

Ideas will still spread among the "good guys", but unreliably, slowly, through an unpredictable rumor mill, with much negotiation and everybody worrying at every turn about what to tell everybody else [3]. That keeps problems from getting solved. It can't be fixed by telling the people who "need to know", because MIRI (or whoever) won't know who those people are, especially-but-not-only if they're also being secretive.

Meanwhile, MIRI can't rely on keeping absolute secrets from anybody for any meaningful amount of time. And they'll probably have a relatively small effect on institutions that could actually do dangerous development. Assuming it's actually interesting, once one of MIRI's secrets gets to somebody who happens to be part of some "adversary" institution, it will be propagated throughout that institution, possibly very quickly. It may even get formally announced in the internal newsletter. It even has a chance of moving on from there into that first institution's own institutional adversaries, because they spy on each other.

But the "adversaries" are still relatively good at secrecy, especially from non-peers, so any follow-on ideas they produce will be slower to propagate back out into the public where MIRI et al can benefit from them.

The advantage the AI risk and X-risk communities have is, if you will, flexibility: they can get their heads around new ideas relatively quickly, adapt, act on implications, build one idea on another, and change their course relatively rapidly. The corresponding, closely related disadvantage is weakness in coordinating work on a large scale toward specific, agreed-upon goals (like say big scary AI development projects).

Worrying too much about secrecy throws away the advantage, but doesn't cure the disadvantage. Curing the disadvantage requires a culture and a set of material resources that I don't believe MIRI and friends can ever develop... and that would probably torpedo their effectiveness if they did develop them.

By their nature, they are going to be the people who are arguing against some development program that everybody else is for. Maybe against programs that have already got a lot of investment behind them before some problem becomes clear. That makes them intrinsically less acceptable as "team players". And they can't easily focus on doing a single project; they have to worry about any possible way of doing it wrong. The structures that are good at building dangerous projects aren't necessarily the same as the structures that are good at stopping them.

If the AI safety community loses its agility advantage, it's not gonna have much left.

MIRI will probably also lose some donors and collaborators, and have more trouble recruiting new ones as time goes on. People will forget they exist because they're not talking, and there's a certain reluctance to give people money or attention in exchange for "pigs in pokes"... or even to spend the effort to engage and find out what's in the poke.

A couple of other notes:

Sometimes people talk about spreading defensive ideas without spreading the corresponding offensive ideas. In AI, that comes out as wanting to talk about safety measures without saying anything about how to increase capability.

In computer security, it comes out as cryptic announcements to "protect this port from this type of traffic until you apply this patch"... and it almost never works for long. The mere fact that you're talking about some specific subject is enough to get people interested and make them figure out the offensive side. It can work for a couple of weeks for a security bug announcement, but beyond that it will almost always just backfire by drawing attention. And it's very rare to be able to improve a defense without understanding the actual threat.



  1. As for keeping secrets from any major government...

    First, I still prefer to talk about the Chinese government. The US government seems less likely to be a player here. Probably the most important reason is that most parts of the US government apparatus see things like AI development as a job for "industry", which they tend to believe should be a very clearly separate sphere from "government". That's kind of different from the Chinese attitude, and it matters. Another reason is that the US government tends to have certain legal constraints and certain scruples that limit their effectiveness in penetrating secrecy.

    I threw the US in as a reminder that China is far from the only issue, and I chose them because they used to be more interesting back during the Cold War, and perhaps could be again if they got worried enough about "national security".

    But if any government, including the US, decides that MIRI has a lot of important "national security" information, and decides to look hard at them, then, yes, MIRI will largely fail to keep secrets. They may not fail completely. They may be able to keep some things off the radar, for a while. But that's less likely for the most important things, and it will get harder the more people they convince that they may have information that's worth looking at. Which they need to do.

    They'll probably even have information leaking into institutions that aren't actively spying on them, and aren't governments, either.

    But all that just leaves them where they started anyway. If there were no cost to it, it wouldn't be a problem. ↩︎

  2. You can also get independent discoveries creating new, unpredictable starting points for diffusion. Often independent discoveries get easier as time goes on and the general "background" information improves. If you thought of something, even something really new, that can be a signal that conditions are making it easier for the next person to think of the same thing. I've seen security bugs with many independent discoveries.

    Not to mention pathologies like one community thinking something is a big secret, and then seeing it break out from some other, sometimes much larger community that has treated it as common knowledge for ages. ↩︎

  3. If you ever get to the point where mostly-unaffiliated individuals are having to make complicated decisions about what should be shared, or having to think hard about what they have and have not committed themselves not to share, you are 95 percent of the way to fully hosed.

    That sort of thing kind of works for industrial NDAs, but the reason it works is that, regardless of what people have convinced themselves to believe, most industrial "secret sauce" is pretty boring, and the rest tends to be so specific and detailed that it is obviously covered by any NDA. AND you usually only care about relatively few competitors, most of whose employees don't get paid enough to get sued. That's very different from some really non-obvious world-shaking insight that makes the difference between low-power "safe" AI and high-power "unsafe" AI. ↩︎

Comment by jbash on [Review] On the Chatham House Rule (Ben Pace, Dec 2019) · 2019-12-10T14:01:51.113Z · LW · GW

I guess this is sort of an agreement with the post... but I don't think the post goes far enough.

Whoever "you guys" are, all you'll do by adopting a lot of secrecy is slow yourselves down radically, while making sure that people who are better than you are at secrecy, who are better than you are at penetrating secrecy, who have more resources than you do, and who are better at coordinated action than you are, will know nearly everything you do, and will also know many things that you don't know.

They will "scoop" you at every important point. And you have approximately zero chance of ever catching up with them on any of their advantages.

The best case long term outcome of an emphasis on keeping dangerous ideas secret would be that particular elements within the Chinese government (or maybe the US government, not that the corresponding elements would necessarily be much better) would get it right when they consolidated their current worldview's permanent, unchallengeable control over all human affairs. That control could very well include making it impossible for anyone to even want to change the values being enforced. The sorts of people most likely to be ahead throughout any race, and most likely to win if there's a hard "end", would be completely comfortable with re-educating you to cure your disharmonious counter-revolutionary attitudes. If they couldn't do that, they'd definitely arrange things so that you couldn't ever communicate those attitudes or coordinate around them.

The worst case outcome is that somebody outright destroys the world in a way you might have been able to talk them out of.

Secrecy destroys your influence over people who might otherwise take warnings from you. Nobody is going to change any actions without a clear and detailed explanation of the reasons. And you can't necessarily know who needs to be given such an explanation. In fact, people you might consider members of "your community" could end up making nasty mistakes because they don't know something you do.

I've spent a lot of my career on the sorts of things where people try to keep secrets, and my overall impression of the AI risk and X-risk communities (including Nick Bostrom) is that they have a profoundly unrealistic, sometimes outright romanticized, view of what secrecy is and what it can do for them (and an unduly rosy view of their prospects for unanimous action in general).