Posts

Passable Puppet 2022-05-29T11:07:37.907Z

Comments

Comment by burmesetheater (burmesetheaterwide) on Are long-form dating profiles productive? · 2022-06-29T23:36:46.435Z · LW · GW

If you had trouble finding a partner

You know, one can find a desirable partner after having had trouble finding one. Just finding a partner is not very hard as XX. Please think more carefully about what has (and hasn't) been said before strawmanning. 

Comment by burmesetheater (burmesetheaterwide) on Are long-form dating profiles productive? · 2022-06-29T19:22:48.715Z · LW · GW

The post originally had several points of positive karma, then got downvoted. The need for "epistemic legibility" is noted.

you haven't spoken with XX who have trouble finding a desireable partner

Haven't spoken with? Who said I'm not in this category lol

Comment by burmesetheater (burmesetheaterwide) on It’s Probably Not Lithium · 2022-06-28T22:47:55.556Z · LW · GW

A more plausible model for why so many people are overweight in the present day:

--cheap calories that taste good are widely available with very low effort to obtain

--tasty food is, other things equal, an easy exploit of reward / motivation loops, so it tends to get used in exactly this way, which results in excess calorie consumption and is of course habit forming. There is probably also a lower threshold to "get into" food vs. something else in this class, like drugs, since eating is already universal and not taboo or otherwise particularly regulated. 

--fewer obligatory opportunities for caloric expenditure to balance intake, mostly as a result of modern transport and the general trend toward less physical labor

Maybe this is off-base, and it may not apply to the lithium hypothesis, but there seem to be a lot of really implausible ideas for why obesity is common that are motivated by a desire not to blame the obese person. Common perceptions of agency might suggest that the above model blames the consumer, but the intention is exactly the opposite: since it's predictable that humans in the above environment will tend to act this way, who can blame them? 

Comment by burmesetheater (burmesetheaterwide) on Air Conditioner Repair · 2022-06-28T18:06:10.888Z · LW · GW

First bullet, those are good points. It is an interesting question how one would get good data on this sort of thing and how accurate that data would be. 

Second, this isn't the intention, it's to show that the story sounds bizarre. It's not a political comment. 

Comment by burmesetheater (burmesetheaterwide) on Air Conditioner Repair · 2022-06-27T22:27:34.937Z · LW · GW

What am I supposed to do now? Chargeback?

If you want your money back, sure. The alternative is to fight a company experienced at not giving refunds.

As for warning the community, this kind of thing happens all over the place, all the time, in all kinds of industries. Complaints to the BBB and Yelp are famously ineffective, although they may demonstrate good citizenship to those who don't know better. Overall, this post is a bit confusing--it's as if someone from a completely different society were suddenly transported to the modern USA. What are you asking / telling us? 

Comment by burmesetheater (burmesetheaterwide) on Limits of Bodily Autonomy · 2022-06-27T21:09:46.665Z · LW · GW

This is fundamentally misframed. For example, there's no reason not to support--in some cases--mandatory abortion if you support mandatory vaccination. The main benefits of abortion aren't to the user, they're to the potential conscious entity who mercifully wasn't forced to endure a predictably sub-par life and to society. Abortion isn't really about personal (bodily) autonomy, that's just a useful political expedient. 

edit: is this being downvoted because people think it's anti-abortion? To put this comment in more context, it's assumed that abortion has great utility for reducing S-risk (the mundane kind, if that's a reserved AI danger term) and is also associated with positive social trends. With this in mind, if you compare abortion to vaccination, it makes sense to mandate abortions in at least some cases. It shouldn't matter, but if it's still not clear, I am very pro-abortion. 

Comment by burmesetheater (burmesetheaterwide) on AI Training Should Allow Opt-Out · 2022-06-27T20:58:39.296Z · LW · GW

once the cat is out of the bag it's out

Since this was not clear, that's correct. The intention is not to encourage non-contribution to the open internet, including open source projects.

It is a problem in 2022 when someone seriously proposes opt-out as a solution to anything. Our world does not "do" opt-out. Our concept of "opting out" of the big-data world is some inconsequential cookie selection with a "yes" and a buried "no" to make the user feel good. We are far past the point of starting conversations. It's not productive, or useful, when it's predictably the case that one's publicly accessible data will end up used for AI training by major players anyway, many of whom will have no obligation to follow even token opt-out and data protection measures.

Conversations can be good, but founding one on a predictably dead-end direction does not seem to make much sense.

This isn't a suggestion to do nothing, it's a suggestion to look elsewhere. At the margin, "opting out" does not affect anything except the gullible user's imagination. 

Comment by burmesetheater (burmesetheaterwide) on Are long-form dating profiles productive? · 2022-06-27T20:30:32.345Z · LW · GW

Productive for what, exactly? There's a lot of assumed context missing from the post, including your gender, and the gender you're targeting. It's also not completely clear what kind of relationship you want, but we'll assume it's serious and long-term.

First: you're XY, looking for XX. In this case, @swarriner's post is applicable to most of the distribution. But since you're here, we'll assume the girl you're looking for is intellectually gifted, data oriented, and may or may not be slightly on the spectrum. Even in this case, pictures are still worth 1000 words, but a lengthy profile probably won't hurt (it may not help that much, though.) If you're going for someone in the bulk of the distribution, a long profile will most likely hurt, not help. In short, make sure you have good pictures, and don't rely on your own judgement or that of biased parties to assess whether the pictures are good.

Second: You're XY looking for XY. In this case a long profile is probably pretty useful, but your pictures still need to be good.

Third: NB for one, the other or both. In this case a long description is probably generally useful. Don't know enough about this case. 

Fourth: You're XX looking for anything. A long profile isn't necessary, just some pictures and a short signal that you're smart and nerdy. The pictures don't need to be that good. 

edit: what went wrong here? Why is this controversial? Can anyone explain? 

Comment by burmesetheater (burmesetheaterwide) on AI Training Should Allow Opt-Out · 2022-06-23T17:45:12.714Z · LW · GW

Disagree. Public (and private) data will be used by all kinds of actors under various jurisdictions to train AI models, and predictably only a fraction of these will pay any heed to an opt-out (and only a fraction of those who do may actually implement it correctly). So an opt-out is not only a relatively worthless token gesture; the premise of any useful upside appears to rest on the assumption that one can control what happens to information one has publicly shared on the internet. It's well evidenced that this doesn't work very well.

Here's another approach: if you're worried about what will happen to your data, then maybe do something more effective, like not put it out in public.

Comment by burmesetheater (burmesetheaterwide) on Against Active Shooter Drills · 2022-06-17T22:32:15.452Z · LW · GW

If your response to that idea is ‘what, what, that sounds horrible and terrifying and we should absolutely positively not do that’ then you seem like a normal human to me.

Or maybe it's dull, boring, and dumb like most other things in school. How you perceive the threat of mass shootings, or anything else, is not one-size-fits-all. School tends to be a ways down the list of one's influences at any age, and if one's dearer influences consider shootings to be a very unlikely threat to one's health, as is objectively the case, one might simply think the school is wasting time on something silly...business as usual.

So maybe a more direct problem is parents and other influences, who may or may not be distributed unequally by political beliefs, promoting the idea that shootings are a direct threat to the life and limb of a given individual. Does this include the OP?

To generalize this problem, the world is stuffed with terrifying threats, and would-be threats that tend to be a problem to process serenely at any age. Who is responsible? Maybe humans who "decide" to create new humans practically autonomously as a result of a biological process rewarding fitness to reproduce above practically all else. 

Comment by burmesetheater (burmesetheaterwide) on Alignment Risk Doesn't Require Superintelligence · 2022-06-15T07:07:46.033Z · LW · GW

Destructive alignment issues in our species are more mundane. Several leaders in the 20th century killed outright very large numbers of people for completely banal reasons like political ambition. Actually, your intuition that 9/11 events happen "all the time" is only off in a temporal sense: the number of humans unambiguously killed by the coordinated actions of relatively few other unaligned humans in the last 100 years is so great that it is probably enough to have at least one 9/11 a day during that time. Humans are generally unaligned on several levels, from personal to egregoric, and the only reason this is lately becoming a problem in a species-risk sense is that only now are we getting some powerful technology. A more probable version of the scenario in this post is a suicidal leader triggering a large-scale nuclear war through use of their own arsenal, either through deception or after taking steps to reduce the possibility of refusal. Of course it would be a great irony if, when global thermonuclear war is actually tested, the opposing forces turn out to be unable to make use of their deterrent.
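
As a rough back-of-envelope check of that claim (taking roughly 3,000 deaths as the 9/11 toll, an illustrative assumption):

$$3{,}000 \ \tfrac{\text{deaths}}{\text{day}} \times 365 \ \tfrac{\text{days}}{\text{year}} \times 100 \ \text{years} \approx 1.1 \times 10^{8} \ \text{deaths},$$

which is comfortably below common estimates of 20th-century deaths from wars and democides (on the order of one to two hundred million).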

Comment by burmesetheater (burmesetheaterwide) on Yes, AI research will be substantially curtailed if a lab causes a major disaster · 2022-06-15T05:39:54.019Z · LW · GW

and have the best forecasters

With forecasters from both sides given equal amounts of information, these institutions might not even reliably beat the Metaculus community. If one is such a great forecaster then they can forecast that jobs like this might not be, among other things, that fulfilling.

I don't know if we've gotten to the point where they can fool the professionals at not getting fooled

Quite a few professionals (not at not getting fooled) still assign a roughly 0% probability to a certain bio-related accident from two or three years ago, thanks in large part to a spun story. Maybe the forecasters at the above places know better, but none of the entities who might act on that information are necessarily incentivized to push for regulation as a result. So it's not clear it would matter if most forecasters know AI is probably responsible for some murky disaster while the public believes humans are responsible. 

Comment by burmesetheater (burmesetheaterwide) on Yes, AI research will be substantially curtailed if a lab causes a major disaster · 2022-06-14T23:13:24.668Z · LW · GW

Well, there's a significant probability COVID isn't a "natural" pandemic, although the story behind that is complicated and lacks an unambiguous single point of failure, which hinders uptake among would-be activists.

If there's an AI failure, will things be any different? There may be numerous framings of what went wrong or what might be addressed to fix it, details sufficient to give real predictive power will probably be complicated, and it's a good bet that however interested "the powers that be" are in GOF, they're probably much MUCH more interested in AI development. So there can be even more resources to spin the story in favor of forestalling any pressure that might build to regulate.

Nuclear regulation also might not be a good example of a disaster forcing meaningful regulation because the real pressure was against military use of nuclear power and that seems to have enjoyed general immunity against real regulation. So it's more like if an AI incident results in the general public being banned from buying GPUs or something while myriad AI labs still churn toward AGI. 

Comment by burmesetheaterwide on [deleted post] 2022-06-14T03:43:37.373Z

Anyone can try, but this seems way out in a practically invisible part of the tail of obstacles to not being destroyed by AGI, if it's even an obstacle at all. 

Comment by burmesetheater (burmesetheaterwide) on Why so little AI risk on rationalist-adjacent blogs? · 2022-06-13T16:43:02.661Z · LW · GW

Most probably just haven't identified it as salient / don't understand it / don't take it seriously, and besides there tend to be severely negative social / audience ramifications associated with doomsday forecasting. 

Comment by burmesetheater (burmesetheaterwide) on How are compute assets distributed in the world? · 2022-06-12T22:37:24.295Z · LW · GW

One way to maybe shed some light on this is to sort the latest Top500 results by location (maybe with extra work to get the specific locations inside the country, if required). There is a very long tail but most of it should correlate with investment in top infrastructure. Of course certain countries (US, China) might have undeclared computing assets of significant power (including various private datacenters), but this probably doesn't change the big picture much. 
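
A minimal sketch of that aggregation, assuming a CSV export of the latest Top500 list with "Country" and "Rmax [TFlop/s]" columns (the filename and column names here are assumptions; the real export format may differ):

```python
import pandas as pd

# Hypothetical export of the latest Top500 list.
df = pd.read_csv("TOP500_202206.csv")

# Count systems and sum measured performance (Rmax) per country,
# then sort to see where top computing infrastructure is concentrated.
by_country = (
    df.groupby("Country")["Rmax [TFlop/s]"]
      .agg(systems="count", total_rmax="sum")
      .sort_values("total_rmax", ascending=False)
)
print(by_country.head(10))
```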

Comment by burmesetheater (burmesetheaterwide) on How much stupider than humans can AI be and still kill us all through sheer numbers and resource access? · 2022-06-12T22:29:14.475Z · LW · GW

A stupid AI that can generate from thin air things that have both useful predictive power and can't be thought of by humans, AND that can reliably employ the fruits of these ideas without humans being suspicious or having a defense...isn't that stupid. This AI is now a genius.

What might an irate e-chimp do if their human handler denied it a banana?

Who cares? For one, if we're talking about an AI and not a chimp em, this is an obvious engineering failure: creating something with all the flaws of an evolved entity, with motivational pressures extraneous and harmful to users. In other words, this is a (very) light alignment problem that can be foreseen and fixed. 

Comment by burmesetheater (burmesetheaterwide) on How much stupider than humans can AI be and still kill us all through sheer numbers and resource access? · 2022-06-12T21:37:16.928Z · LW · GW

How much real power does the AI have access to, and what can humans do about it?

To reframe your question, even relatively small differences in human intelligence appear to be associated with extraordinary performance differences in war: consider the Rhodesian Bush War, or the Arab-Israeli conflict. Both sides of each conflict were relatively well-supplied and ideologically motivated to fight. In both cases there was also a serious intellectual giftedness gap (among other things) between the competing populations, and the more intelligent side won battles easily and with very lopsided casualties--although in the case of Rhodesia the more intelligent population eventually lost the war due to other externalities associated with the political realities of the time.

If humanity is aligned, and the less-smart-than-human AI doesn't have access to extraordinary technology or other means to grant itself invulnerability and / or quickly kill a critical mass of humans before we can think our way around it, it should be the case that humans win easily. It is difficult to imagine a less-intelligent-than-human AI reliably obtaining such hedges without human assistance. 

Comment by burmesetheaterwide on [deleted post] 2022-06-12T20:49:38.947Z

So, to be clear, you don't think confidently naming people by first name as destroying the world can be parsed emotionally by them?

Mentions of AI companies / AI personalities on LW will intrinsically tend to be adversarial, even if the author spares us a polemic or the use of terms like "so and so is working to destroy the world," because misaligned AI destroying the world is clearly THE focus of this community. So it can be argued that to be meaningful, a policy of no names would need to be applied to practically any discussion of AI: even if some AI content is framed positively by the author, the community at large will predictably tend to see it in existential-risk terms.

That's one issue. Personally, the calculus seems pretty simple: this well-behaved community and its concerns are largely not taken seriously by "the powers" who will predictably create AGI, there is little sign that these concerns will be taken seriously before reaching AGI, and there is almost no reason to think that humanity will pause to take a break and think "maybe we should put this on hold since we've made no discernible progress toward any alignment solutions" before someone trains and runs an AGI. So a conclusion that could be drawn from this is, we might as well have nice uncensored talks about AI free from petty rules until then.

Comment by burmesetheaterwide on [deleted post] 2022-06-12T18:15:39.813Z

This seems like a case of making a rule to fix a problem that doesn't exist.

Are people harassing individual AI labs or researchers? The tendency for reasonable people who are worried about AI safety should be to not do so, since it predictably won't help the cause and can hurt. So far there does not seem to be any such problem of harassment discernible from background noise.

Naming individual labs and / or researchers is interesting, useful, and keeps things "real." 

Comment by burmesetheater (burmesetheaterwide) on Why don't you introduce really impressive people you personally know to AI alignment (more often)? · 2022-06-11T22:46:54.501Z · LW · GW

A conventional approach might lead one to consider that inside the LW / AI safety bubble it borders on taboo to discount the existential threat posed by unaligned AI, but this is almost an inversion of the outside world, even if limited to 25/75 of what LW users might consider "really impressive people."

This is one gateway to one collection of problems associated with spreading awareness of AI alignment, but let's go in a different direction: somewhere more personal.

Fundamentally, it seems a mistake to frame alignment as an AI issue. While unaligned AGI appears to be rapidly approaching and we have good reasons to believe this will probably result in the extinction of our species, there is another, more important alignment problem that underlies, and somewhat parallels, the AI alignment problem. Of course, this larger issue is the alignment problem as faced by humanity at large.

Humans are famously unaligned on many levels: with respect to the self, interpersonally, and micro / macro-socially. No good solution to any tier of this problem has been discovered over thousands of years of inquiry. In the 20th century, humans developed technology useful for acquiring a great deal of information about the universe beyond our world, and "coincidentally" our capability of concentrated destruction increased in effectiveness by orders of magnitude, to the scale where killing at least large portions of the species in a short time is plausible. Thus, the question of why we don't see others like us even though there appears to be ample space tended to find answers along the lines of intelligent life destroying itself. Of course, this is the result of an alignment "problem."

Dull humans forecasted that nuclear arms would end the world and slightly smarter humans suggested that we might wait for antimatter, nanotech, genetically engineered pathogens or some other high-impact dangerous technology. As we're seeing now, these problems are difficult. What appears to be less difficult is AGI.

So, even though it's not in the interest of the continuity of the species, humanity can't help but race redundantly at breakneck pace toward this new technological capability, embodying a slightly disguised, concentrated, and lethal version of one of the oldest and most fundamental problems our species has ever faced. That AI alignment is not taken more seriously could be seen as a reflection of "really impressive people" actually not having paid much mind to the alignment problems embedded in and endemic to who we are.

Should one introduce really impressive people to AI alignment? Maybe, but one must remember that magic appears unavailable and that for various reasons, it is predictably the case that most people, even "really impressive" people, will not consider the problem to be more than an abstract curiosity with even the best presentation. So to evangelize about AI alignment seems most useful as a fulfillment of one's personal / social interests rather than much of a useful tool to increase work to save the species.

Full disclosure: it's not clear that alignment is a meaningful concept, it's not clear that humans have meaningful or consistent values, it's very much not clear that continuing the human species is a good thing (at any point in our history, past, present or future) from an S-risk perspective, and it's not clear that humans have any business rationally evaluating the utility in survival and reproduction as these are goals we're apparently optimized for. So it should be the case that this post is written with less motivation to evangelize. 

Comment by burmesetheater (burmesetheaterwide) on How dangerous is human-level AI? · 2022-06-10T21:38:21.798Z · LW · GW

"human-level AI" is a confusing term for at least a couple reasons: first, there is a gigantic performance range even if you consider only the top 1% of humanity and second it's not clear that human-level general learning systems won't be intrinsically superhuman because of things like scalable substrate and extraordinarily high bandwidth access (compared to eyes, ears, and mouths) to lossless information. That these apparent issues are not more frequently enumerated in the context of early AGI is confusing. 

As far as I'm aware, all serious attempts to take over the world have been by brute force. Historically there are messaging, travel, logistics, etc. latencies that make this very difficult within one's lifetime, even if potentially world-owning force is available or capable of being mustered. So the window for a single (human-level) entity to take over the world within its lifetime has probably only opened recently, and the number of externalities and internal abilities that need to line up for a predictably large shot at success is probably large. Accordingly, even situations like Hitler sitting in control of a very powerful Reich, which nominally might appear to enable a chance of world ownership, were still too fraught with an unoptimized distribution of enabling factors to have any realistic chance of success. There is also a grey area of whether an individual or some collective is responsible for the attempt. One might argue that trends ongoing for at least a few decades suggest that the USA is in a great position to take over the world if China (or someone else) doesn't "break out" first. But with the way the USA is structured, it may be difficult for any "human-level" individual entity to take credit for, or enjoy a firm grasp of, the fruits of this conquest. 

Comment by burmesetheater (burmesetheaterwide) on Why it's bad to kill Grandma · 2022-06-09T23:48:22.021Z · LW · GW

There seems to be a deep problem underlying these claims: even if humans have loosely aligned intuition about what's right and wrong, which isn't at all clear, why would we trust something just because it feels obvious? We make mistakes on this basis all the time and there are plenty of contradictory notions of what is obviously correct--religion, anyone?

Further, if grandma is in such a poor state that simply nudging her would kill her, AND the perpetrator is such a divergent individual that they would then use the recovered funds to improve others' lives (lives which might have many positive years still available, if such a concept is possible or meaningful), then one might argue that it seems a poor conclusion NOT to kill grandma if one's concern is for the welfare of others. One could also simply steal grandma's money, since this is probably easier to get away with than murder, but then you would be leaving an ethical optimization on the table by not ending grandma's life, which as hinted at earlier is probably on the negative side of qualitative equity.

Comment by burmesetheater (burmesetheaterwide) on Is there any way someone could post about public policy relating to abortion access (or another sensitive subject) on LessWrong without getting super downvoted? · 2022-06-07T02:43:20.959Z · LW · GW

Writing about politics isn't discouraged because of sensitivity, but because political positions tend to be adopted for bad epistemological reasons, have poor predictive power and little to do with rationality. Correspondingly, framing a topic politically is a good indicator that the author has resorted to poor argumentation and is very unlikely to update their views based on superior argument or evidence, which is a little annoying and not less wrong. These are general problems not limited to discussing politics but for politics it's especially bad.

Comment by burmesetheater (burmesetheaterwide) on Revisiting "Why Global Poverty" · 2022-06-01T21:58:47.412Z · LW · GW

I'm far more skeptical of the "governments have this covered" position than I was in 2015. Some of this is for theoretical reasons (ex: preventing catastrophe benefits people beyond your country) and some of it is from observing governments more (ex: pandemic response).

This is an interesting response to the perceived folly of trusting that our authorities can handle a cosmic body appearing on track to collide with the planet, as there would seem to be more fundamental issues at play: many such bodies may be unidentified (including because of long-period orbits), we generally lack proven technology to send such a body off-target (depending on warning time, mass, velocity, etc.), and we're really not working that hard on any solutions as far as one can see. A damning example is the apparent lack of development toward nuclear pulse propulsion, as this would seem an obvious and accessible tool for dealing with such risks.

Comment by burmesetheater (burmesetheaterwide) on Probability that the President would win election against a random adult citizen? · 2022-06-01T21:50:24.725Z · LW · GW

A random person most likely has an IQ near 100, which is a standard deviation or two below that of a random president. A random person most likely has less talent and experience at politics, promotion, and attack than a politician. A random person is most likely less physically attractive, less charismatic, and less wealthy than a politician. A random person doesn't have a gigantic support apparatus behind them (and even if they do, they're probably still screwed, because it's not enough to make up for the rest of the deficiencies). As others say, it won't even be close. The random person will predictably be slaughtered. 

Comment by burmesetheater (burmesetheaterwide) on What will happen when an all-reaching AGI starts attempting to fix human character flaws? · 2022-06-01T21:43:41.988Z · LW · GW

What is the question? It seems to have something to do with AGI intervening in personality disorders, but why? AGI aside, when considering the modification of humans to remove functionality that someone finds undesirable, it's not at all clear where one would stop. Some would consider human existence (and propagation) to be undesirable functionality that the user is poorly equipped to recognize or confront. Meddling in personality disorders doesn't seem relevant at this stage.

Comment by burmesetheater (burmesetheaterwide) on What is the state of Chinese AI research? · 2022-06-01T19:41:53.918Z · LW · GW

A few reflections on a tragically wrong comment:

  1. Why does what I think matter? Make the argument; don't invoke myself if not necessary.
  2. It seemed obvious why the analysis is biased, but maybe this isn't the case, and maybe more info should have been provided. Mostly the concern here is over wording like "Xi seems to be doing his level best to wreck the Chinese high-tech economy" and "shortsighted national-security considerations" and "Uighur oppression" and (to paraphrase) maybe their leader is insane enough to invade Taiwan. To have all of these pop up in a single paragraph that's supposed to be about AI raises some red flags. Does it need to be explained why? 
  3. Calling the analysis superficial without explicitly justifying this is problematic, particularly as the response is even more superficial.

Yesterday I saw that gwern's response was heavily upvoted but didn't understand why; maybe it is part of a mechanism to keep people below a certain intellectual threshold off the site. 

Comment by burmesetheater (burmesetheaterwide) on What is the state of Chinese AI research? · 2022-05-31T19:47:45.811Z · LW · GW

Probability is high that all nations with strong AI research are keeping secrets, since some AI research will naturally go into projects with high secrecy. A better question is what the proportion of published to secret research is in USA, China, etc. It might actually be similar, which could suggest that China is pretty far behind. 

Comment by burmesetheater (burmesetheaterwide) on What is the state of Chinese AI research? · 2022-05-31T19:42:52.182Z · LW · GW

This part of the analysis is both biased and extremely superficial. It may also be correct, but one might give low credence at face value. 

Comment by burmesetheater (burmesetheaterwide) on What to do when starting a business in an imminent-AGI world? · 2022-05-13T19:21:31.501Z · LW · GW

You can't account for AGI because nobody has any idea at all what a post-AGI world will look like, except maybe that it could be destroyed to make paperclips. So if starting a business is a real calling, go for it. Or not. Don't expect the business to survive AGI even if it thrives pre-arrival. Don't underestimate how much your world may change: scenarios like you (or an agent somewhat associated with the entity formerly known as you, or even anyone else at all) running a business might not make sense--the concept of business is a reflection of how our world is structured. Even humans can unbalance this without the help of AGI. In short, it's a good bet that AGI will be such a great disruption that the patent system is more likely to be gone than filled with AGI patent trolls. 

Comment by burmesetheater (burmesetheaterwide) on You get one story detail · 2022-04-05T08:36:06.694Z · LW · GW

Which category does this story fit into? 

Comment by burmesetheater (burmesetheaterwide) on Ukraine Post #8: Risk of Nuclear War · 2022-04-05T07:35:30.077Z · LW · GW

losing all the friends it has left with the possible exception of Iran

To be pedantic, they also very likely wouldn't lose Syria or North Korea. 

Comment by burmesetheater (burmesetheaterwide) on Greyed Out Options · 2022-04-05T04:59:41.110Z · LW · GW

In any moment, you have literally millions of options.

Has anyone actually made an attempt to calculate possible degrees of freedom for a human being at any instant? There are >millions of websites that could be brought up in those tabs alone. 

Comment by burmesetheater (burmesetheaterwide) on Why learn to code? · 2022-04-05T04:56:31.155Z · LW · GW

If you're into information, then learning to code can help you acquire more information more easily and process that information in beautiful ways that would otherwise be laborious or impractical. That's probably the simplest explanation with the broadest appeal. At the risk of downvotes (maybe there are a lot of professional coders here), I'm not sure why anyone would want a job coding, because then you risk the fun aspect for someone else's purposes in exchange for some tokens and quite a lot of your time. 

Comment by burmesetheater (burmesetheaterwide) on What an actually pessimistic containment strategy looks like · 2022-04-05T03:01:10.913Z · LW · GW

Taking for granted that AGI will kill everybody, and taking for granted that this is bad, it's confusing why we would want to mount costly, yet quite weak, and (poorly) symbolic measures to merely (possibly) slow down research.

Israel's efforts against Iran are a state effort and are not accountable to the law. What is proposed is a ragtag amateur effort against a state orders of magnitude more powerful than Iran. And make no mistake, AGI research is a national interest. It's hard to overstate the width of the chasm. 

Even gaining a few hours is pretty questionable, and a few hours for a billion people may be a big win or it might not. Is a few seconds for a quadrillion people a big win? What happens during that time and after? It's not clear that extending the existence of the human race by what is mostly a trivial amount of time even in the scope of a single life is a big deal even if it's guaranteed.

There is also a pretty good chance that efforts along the lines described may backfire, and spur a doubling-down on AGI research.

Overall this smells like a Pascal's scam. There is a very, very low chance of success against a +EV of debatable size. 

Comment by burmesetheater (burmesetheaterwide) on Avoiding Moral Fads? · 2022-04-04T06:23:05.935Z · LW · GW

How are we to know that we aren't making similar errors today?

Based only on priors, the probability we aren't is very low indeed. A better question is, given an identified issue, how can change happen? One main problem is that contra-orthodox information on moral issues tends not to travel easily.  

Comment by burmesetheater (burmesetheaterwide) on Book review: Very Important People · 2022-04-03T01:27:58.381Z · LW · GW

This isn't really much different from life outside the club. Social forces are often not aligned with majority personal preference and can even be in conflict. For example, people want to make friends or hook up but seeking those goals explicitly tends to be perceived as low-class and / or strange. 

Comment by burmesetheater (burmesetheaterwide) on Interacting with a Boxed AI · 2022-04-02T20:23:03.437Z · LW · GW

I'm not sure considering how to restrict interaction with super-AI is an effective way to address its potential risks, even if some restrictions might work (and it is not at all clear that such restrictions are possible). Humans tend not to leave capability on the table where there's competitive advantage to be had so it's predictable that even in a world that starts with AIs in secure boxes there will be a race toward less security to extract more value.

Comment by burmesetheater (burmesetheaterwide) on Russian x-risk newsletter March 2022 update · 2022-04-02T19:48:06.114Z · LW · GW

If the US knew of a way to locate subs, then it would worry that Russia or China would figure it out, too

There are many conceivable ways to track subs, and this is only part of the problem, because subs still need to be destroyed after being located. Russia and China combined don't have enough nuclear attack subs to credibly do this to the US. The US does have enough nuclear attack subs to credibly destroy Russia's deterrent fleet, if they can be tracked, with attack subs left over to defend its own ballistic missile subs. A primary mission for nuclear attack subs is to shadow nuclear ballistic missile subs. That Russia is (allegedly) developing weapons like Status-6 and Burevestnik suggests they are not satisfied with the ongoing deterrent capability they already have.

Also, about 2 thirds of the US's 1357 "strategic" (capable of incinerating the heart of a major city) nuclear warheads are currently on subs, rather than in missile silos or on bombers

The number of weapons deployed, and where they are deployed, simply isn't verifiable. Keep in mind that it is widely held, and codified in public law, that use of nuclear weapons by the US, including for retaliatory purposes, must follow the kind of centralized authorization that could be extremely difficult to guarantee under a surprise nuclear attack. This would open us up to a surprise decapitation attack, so the probability it's true in practice is very low. 

Comment by burmesetheater (burmesetheaterwide) on What would make you confident that AGI has been achieved? · 2022-03-30T20:50:32.897Z · LW · GW

at what point would you expect the average (non-rationalist) AI researcher to accept that they’ve created an AGI?

Easy answers first: the average AI researcher will accept it when others do.

at what point would you be convinced that human-level AGI has been achieved?

When the preponderance of evidence is heavily weighted in this direction. In one simple class of scenarios, this would involve unprecedented progress in areas limited by things like human attention, memory, I/O bandwidth, etc. Some of these would likely not escape public attention. But there are a lot of directions AGI can go.

Comment by burmesetheater (burmesetheaterwide) on Formal epistemiology for extracting truth from news sources · 2022-03-20T20:01:12.201Z · LW · GW

To the extent that there are believers, you won't change their mind with reason, because their beliefs are governed, guarded and moderated by more basic aspects of the brain--the limbic system is a convenient placeholder for this.

So a problem you are focused on is that a minority (or majority) of individual opinions is prevented from being honestly expressed. Flipping a small number of individual opinions, as is your motivation, does not address this problem.

Comment by burmesetheater (burmesetheaterwide) on Wargaming AGI Development · 2022-03-20T00:21:10.522Z · LW · GW

Because the benefits of quantum computing were so massive

Please elaborate. I'm aware of Grover's algorithm, Shor's algorithm, and quantum communication, and it's not clear that any of these pose a significant threat to even current means of military information security / penetration.
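
To make the Grover part of this concrete (a rough illustration, not a full security analysis): Grover's algorithm gives only a quadratic speedup on unstructured search, so brute-forcing an $n$-bit symmetric key drops from

$$O(2^{n}) \quad \text{to} \quad O\!\left(2^{n/2}\right),$$

meaning a 256-bit key still retains roughly 128 bits of effective security, and doubling key lengths largely restores the margin. Shor's algorithm does threaten RSA-style public-key schemes, but how much that matters here depends on how much military key exchange actually relies on such schemes rather than on pre-shared symmetric keys.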

Comment by burmesetheater (burmesetheaterwide) on Formal epistemiology for extracting truth from news sources · 2022-03-17T10:06:01.313Z · LW · GW

I'm interested if there were any attempts at formal rules of transforming media feed into world model. Preferably with Bayesian interference and cool math. So I can try to discuss these with my friends and maybe even update my own model.

So you are interested in changing other people's minds on a complicated issue that has more to do with the limbic system than rational hardware by using reason. This distribution of influence is one reason why their intelligence isn't really important here, and it is also why your strategy won't work.

More generally, you are in a trap. Be skeptical of your own motivations. The least worst course of action available is probably to disengage.

Comment by burmesetheater (burmesetheaterwide) on Danger(s) of theorem-proving AI? · 2022-03-16T22:10:04.533Z · LW · GW

Realistically, a complexity limit on practical work may not be imposed if the AI is considered reliable enough and creating proofs too complex to otherwise verify is useful, and it's very difficult to see a complexity limit imposed for theoretical exploration that may end up in practical use.

Still, in your scenario the same end can be met with a volume problem, where the rate of new AI-generated proofs with important uses exceeds the external capacity of humans to verify them, even if individual AI proofs are in principle verifiable by humans, possibly because of some combination of enhanced productivity and reduced human skill (there is less incentive to become skilled at proofs if AI seems to do it better). 

Comment by burmesetheater (burmesetheaterwide) on Danger(s) of theorem-proving AI? · 2022-03-16T03:38:49.583Z · LW · GW

AI becomes trusted and eventually makes proofs that can't otherwise be verified, makes one or more bad proofs that aren't caught, results used for something important, important thing breaks unexpectedly. 

Comment by burmesetheater (burmesetheaterwide) on Who is doing Cryonics-relevant research? · 2022-03-16T03:14:10.172Z · LW · GW

I do not feel entirely comfortable talking the whole thing over with my profs.

If you're going to take a 3-month internship they will all know about anyway, it can't hurt to talk about it, right? Cryonics isn't really that taboo, especially if, as it appears, you will take the position that you don't expect current methods to work (but you would like to see about creating ones that might).