Posts

Comments

Comment by frontier64 on What will 2040 probably look like assuming no singularity? · 2021-05-23T15:12:49.466Z · LW · GW

Mandated Gene Therapy

We're trending towards health and medical decisions being looked at from a societal perspective rather than on the individual level.[1] People who use alternative medicine are increasingly shamed not only for the effect their choice has on their own health, but for the effect it has on the health of others and the financial burden it puts on the medical system. Medical interventions are more costly later on; therefore those 4 months you spent trying herbal remedies hurt everybody who has to pay for your medical treatment. Refusing a vaccine not only increases the burden the medical system will have taking care of you, but increases the risk that others will also get infected.

Gene therapy, specifically editing the genes of newborns, is the archetypal preventative medical procedure. Parents who knowingly have a baby that will more than likely suffer a genetic disease, and likely be an extra burden on the medical system, will be shamed for that decision, and the proposed solution will be gene therapy.

That shame will be turned into laws. Laws mandating gene therapy to prevent known, high-likelihood genetic diseases will naturally extend to gene therapy for speculative risks, and then for merely possible ones.

Privately-Owned Nukes

Honestly this doesn't even require improvements in nuclear tech. The only necessary ingredient is a couple of smart people joining a terrorist organization that wants to cause mass destruction and has the disposable resources of a small business. The designs of nuclear bombs are freely available online; the actual engineering process is more arcane, but still learnable. The hardest part of the process is acquiring enough weapons-grade uranium or plutonium. But even that can be produced from scratch with access to a mine (even though spy movies always focus on terrorists stealing their nuclear material). So my first lemma is that even though it hasn't happened yet, it's pretty easy for a small group to create a nuclear bomb.

What's been holding private nuke construction back is a lack of impetus and the general ineffectiveness of terrorists. But that's not a real bar to the end result. Over time there will likely be a statistical-outlier terrorist organization that has a few smart people and the desire to construct nuclear bombs. And for them it will be easy.


  1. Taxpayer-funded healthcare is the norm. Politicians talk about the opioid crisis and blame doctors for over-prescribing, people protest drug companies for raising prices too high, and a few national and international organizations have been setting global policy on infectious disease handling for over a year now. ↩︎

Comment by frontier64 on What will 2040 probably look like assuming no singularity? · 2021-05-18T09:41:53.778Z · LW · GW

The constant improvements in nuclear tech will lead to multiple small terrorist organizations possessing portable nuclear bombs. We'll likely see at least a few major cities suffering drastic losses from terrorist threats.

Gene therapy will be strongly encouraged in some developed nations. Near the same level of encouragement as vaccines receive.

Pollution of the oceans will take over as the most popular pressing environmental issue.

Comment by frontier64 on Your Cheerful Price · 2021-02-15T06:01:25.830Z · LW · GW

I think many people view friendship as a form of alliance. Ally friends perform favors for each other as a way to tie tighter bonds between them and signal that their goals are aligned. I want to bake you a cake for exactly $0 because baking a cake will help you and I want what's best for you, so helping you directly helps me. So in the future, after I bake you your cake, you of course will drive me to the airport, because that would help me and you want what's best for me, right? It's not a direct scratch-my-back-and-I'll-scratch-yours exchange of favors; it's developing a strong alliance between our interests. We can then rely on that alliance for mutual assistance in the future. The two most common dangers ally-friends are on the lookout for are 1) over-reliance by their friend; and 2) mere burden shifting from their friend.

  1. Over-reliance is when Bob always asks his lawyer friend Alice for legal advice and for her opinion on complicated topics. Alice spends hours of her time (that she could otherwise use to bill $400/hour) on these favors, yet Bob doesn't provide her even half of the value that she gives him. Bob's reliance on Alice is still efficient, since it's much easier for her to do the legal research than for him, but Bob is not putting in enough to match what Alice is giving him. Alice will eventually grow resentful of Bob and stop doing favors for him entirely.

  2. Burden shifting is when Alice and Bob are both friends of equal cooking ability, yet Alice still asks Bob to bake her cakes. The amount of effort expended by either to make the cake is exactly the same, so Alice having Bob bake is no more efficient for the alliance than if she baked the cake herself. Bob notices this and asks why Alice doesn't bake the cake herself. If Alice can convince him that it is somehow more efficient for Bob to bake the cake, the alliance can continue. If Bob can't be convinced, he will stop baking cakes, because why the hell was he even baking them in the first place?

But attempting to pay an ally friend for their favors is a whole other unexpected issue, one that can even seem like betrayal. Ally friends would dislike your offering them money in exchange for a favor because it would imply that when they seek a favor from you, you will expect money in return! Then, to them, there never was any alliance between you at all. From their perspective, offering them money in exchange for a favor is tantamount to admitting that you were actually just pretending to be their friend the whole time.

Comment by frontier64 on How do I improve at being strategic? · 2021-02-07T01:46:46.343Z · LW · GW

I'm glad you appreciate the advice. It seems to me that you've developed a very effective, structured way to improve your productivity and I'm going to try to emulate your strategy here with a few upcoming projects I have to work on and see how efficient I'm being.

Comment by frontier64 on The 10,000-Hour Rule is a myth · 2021-02-05T15:24:03.196Z · LW · GW

I find this to be a severely lacking refutation of Gladwell's point. The main argument is that Ericsson, who collected the data Gladwell cites, disagrees with his point. Seeing that the average expert has 10,000 hours of practice in their field, a reasonable conclusion is that you should try to practice for 10,000 hours if you want to become an expert. Just because Ericsson disagrees with that doesn't mean it's not a perfectly reasonable conclusion.

Comment by frontier64 on How do I improve at being strategic? · 2021-01-21T18:19:23.476Z · LW · GW

The first step that Anna points out is "Ask ourselves what we're trying to achieve" or in other words, know your goal. Since you have a desire to be more strategic you probably already have a goal in mind and realized that being more strategic would be an effective subgoal. From the rest of your post I think you've substantially worked on some of the other steps as well.

If you're struggling to fulfill the rest of the steps Anna laid out, my recommendation is to just do things which may work towards achieving your goal but are far outside your comfort zone. That will pull you out of your pre-existing habits and get you to start evaluating different strategies instead of continuing to follow the one you've already worked yourself into.

If you're a procrastinator, start working on a long-term goal immediately, for at least a few hours without breaks, even if you start to think it might not be effective. If you think it's not effective, that may be akrasia taking over once you actually start working on it.

If you are fearful of offending people, go to an online or in-person marketplace and start low-balling people with ridiculous offers, continually pressing them to make a deal favorable to you. Make the situation uncomfortable enough and you'll realize you have the ability to deal with the social awkwardness when you're working towards your goal.

This is Anna's step e and I encourage working on this step because from your post it seems like you've already put good work into everything that comes before it.

My bad if this is more tactics than the strategy tips you were looking for.

Comment by frontier64 on Saying "Everyone Is Biased" May Create Bias · 2021-01-21T16:57:06.154Z · LW · GW

This formulation of evidence disregards an important feature of Bayesian probability: new evidence incrementally updates your prior in proportion to the predictive weight of the new information. New evidence doesn't erase the prior. Individual facts do not screen off demographic facts; they are supplementary facts that may shift our probability estimate in a different direction.
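To make the odds-form arithmetic concrete, here is a minimal sketch in Python with made-up numbers (my own illustration, not anything from the original post):

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
# The demographic fact sets the prior; the individual fact supplies a
# likelihood ratio that shifts the estimate without erasing the prior.

def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    return prior_odds * likelihood_ratio

prior_odds = 0.25      # demographic base rate alone: odds of 1:4
individual_lr = 3.0    # individual fact is 3x likelier if the hypothesis is true

posterior_odds = update_odds(prior_odds, individual_lr)
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"{posterior_prob:.2f}")  # 0.43: the estimate moved, but the prior still matters
```

If the individual fact screened off the prior, the answer would be 0.75; instead the demographic prior keeps pulling the posterior down to 0.43.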

Comment by frontier64 on RationalWiki on face masks · 2021-01-15T18:36:27.081Z · LW · GW

Your point would be correct if the recent bans were about hate speech and calls to violence. The claim that recent bans were solely about hate speech and calls to violence, however, is factually incorrect, and therefore your point is wrong. The most popular banned topic of discussion is the validity of the 2020 election, an epistemological question. Very nonviolent and non-hatey figures such as Ron Paul have been banned without any stated reasons.

Comment by frontier64 on The 4-Hour Social Life · 2020-12-30T04:22:02.827Z · LW · GW

Easier solution: wait until a person who is following Isusr's strategy weeds you out, and bam, you have your equally extraordinary match. The only failure states are when Isusr's strategy doesn't manage to distinguish the extraordinary people they're looking for from everyone else, or when you're not extraordinary.

Comment by frontier64 on How to reliably signal internal experience? · 2020-12-28T05:55:23.559Z · LW · GW

I think knowing about the actual object level problem here would help in crafting a suitable solution. My main question is why are you informing your friends that you're at your limit?

Are you participating in some group activity (e.g. going to the gym) that you feel you have to drop out of? If so I strongly recommend just working through the pain until what's stopping you is no longer pain winning over willpower but physical incapability to proceed. At that point you don't even need to tell your friends you're at your limit because no matter what you're going to flop to the ground unable to continue with the activity. You clearly want to do the group activity, because you haven't even posited quitting as an option, so rely on your decision to do the group activity and trust that you're not going to cause any lasting harm to yourself by working through the pain.

If you're not participating in a group activity (e.g. you had to take off sick from work and you told your friends about it the next day) I see good reasons to not inform your friends that you're at your limit at all. You know what their expected response is, and you don't think that expected response is helpful. So might as well just not go through the routine that will give you the bad response.

Comment by frontier64 on The Power to Demolish Bad Arguments · 2020-12-26T22:11:43.413Z · LW · GW

I don't understand your usage of the term "hanging a lampshade" in this context. I don't think either Steve's or Liron's behavior in the hypothetical is unrealistic or unreasonable; I have seen similar conversations before. Liron even stated that Steve was basically him from some time ago. I thought hanging a lampshade was for when a fictional scenario is unrealistic or overly coincidental and the author wants to alleviate reader ire by letting them know that he thinks the situation is unlikely as well. Since the situation here isn't unrealistic, I don't see the relevance of hanging a lampshade.

If the article should be amended to include pro-"Uber exploits drivers" arguments it should also include contra arguments to maintain parity. Otherwise we have the exact same scenario but in reverse, as including only pro-"Uber exploits drivers" arguments will "automatically [...] generate bad feelings in people who know better the better arguments". This is why getting into the object-level accuracy of Steve's claim has negative value. Trying to do so will bloat the article and muddy the waters.

Comment by frontier64 on The Power to Demolish Bad Arguments · 2020-12-26T04:32:10.956Z · LW · GW

Making an unnecessary and possibly false object-level claim would only hurt the post. It's irrelevant to Liron's discussion whether Steve's claim is right or wrong, and getting sidetracked by its potential truthfulness would muddy the point.

Comment by frontier64 on Death Positive Movement · 2020-12-26T00:42:05.389Z · LW · GW

https://www.yudkowsky.net/other/yehuda

Eliezer has written extensively on why death is bad for everyone and my understanding closely aligns with his.

Comment by frontier64 on The Power to Demolish Bad Arguments · 2020-12-26T00:33:44.905Z · LW · GW

This comment leads me to believe that you misunderstand the point of the example. Demonstrating that an arguer doesn't have a coherent understanding of their claim doesn't mean that the claim itself is incoherent. It just means that if you argue against that particular person on that particular claim, nobody is likely to gain anything out of it[1]. The validity of the example does not depend on whether "Uber exploits its drivers!" is true.

You agree with Steve in the example, and because the example shows Steve being unable to defend his point, you don't like it. You should strive to understand, however, that Steve's incoherent defense of his claim has nothing to do with your very coherent reasons for believing the same claim.

I think that the example is strengthened if Steve's central claim is correct despite the fact that he can't defend it coherently.


  1. At least, that's my take. I haven't read the rest of this sequence yet so I don't know if Liron explains what you gain out of discovering that somebody's argument is incoherent. ↩︎

Comment by frontier64 on Death Positive Movement · 2020-12-11T22:10:09.382Z · LW · GW

The death positivity movement seems to miss the point: the issue with death isn't some ancillary result such as people not getting buried in exactly the way they desire, but rather that sapient human beings with thoughts, knowledge, memories, and emotions are ceasing to exist forever! Now if the DPM thinks that there are issues in the way death is handled that cause solvable negative externalities (besides people dying), that's all well and good and probably true. The problem is that they seem to equate solving those minor negative externalities with solving the inherent problem of death itself.

The website's name, "Order of the Good Death", is oxymoronic. Death is bad. Even if people can die at age 90 in exactly the way they want, have their remains taken care of exactly how they want, and be assured that their decaying body won't negatively impact the environment, their death is still bad. The DPM implies a bizarro world where, if they can just solve all these minor issues related to death, somehow the whole process will become good. If you could just take all the fuss, dirtiness, and other minor negative externalities out of torture, then that practice could be made "good" as well.

I see no value in this movement and actually quite a bit of harm, as it may successfully attract resources towards solving non-issues like overcrowded burial sites, resources that could otherwise be used to solve the fundamental problems of aging and death.

Additionally, tenets 5 and 6 are clear warning signs of intersectional nonsense: "Let's throw some anti-racist and anti-sexist talking points into our philosophy to latch onto those movements and hopefully they'll throw some support our way." The rest of the website is littered with similar intersectional phrases as well. They're not there to solve any particular issue but to signal to others that the founders of this movement are Right-Minded Thinkers Who Should Be Supported By The Cause. Any movement that isn't explicitly related to anti-racism or anti-sexism that wastes bandwidth signalling to people that supporters of this movement are also anti-racists and anti-sexists just isn't practicing effective altruism and is instead virtue-signalling.

Comment by frontier64 on Parable of the Dammed · 2020-12-11T20:21:56.807Z · LW · GW

You've restated the moral in euphemistic terms. Some people do have the idea that they can trust others to give them a fair shake. That's wrong. You're right that the couple is behaving unfairly because of their own self-interest and the fact that they can get away with it, but regardless their actions are still unfair.

Comment by frontier64 on Parable of the Dammed · 2020-12-10T01:15:16.977Z · LW · GW

In a fourth eventuality the opposed family notices the couple's flagrant breach of the peace agreement and induces a third party to intervene and render their opinion on whether hostile dam-building is a violation of property dispute norms. The third party arbitrator sees an opportunity to grow fat from the conflict and continually requests ever larger bribes from both sides before eventually drawing an arbitrary line in the ground and calling it a border. Of course the border isn't amenable to either family, but they are powerless to challenge the will of the arbitrator because that would retroactively make their bribes a waste and not improve their situation one bit. The arbitrator realizes there is a lot of free slack to be gobbled up in these property disputes and starts up his own racket.

Moral: don't trust anybody to be fair to anybody but themselves.

Comment by frontier64 on Newcomb's Problem and Regret of Rationality · 2020-12-10T00:48:07.771Z · LW · GW

An alternate solution which results in even more winning is to cerqvpg gung V znl or va fhpu n fvghngvba va gur shgher. Unir n ubbqyhz cebzvfr gung vs V'z rire va n arjpbzoyvxr fvghngvba gung ur jvyy guerngra gb oernx zl yrtf vs V qba'g 2-obk. Cnl gur ubbqyhz $500 gb frpher uvf cebzvfr. Gura pbzcyrgryl sbetrg nobhg gur jubyr neenatrzrag naq orpbzr n bar-obkre. Fpnaavat fbsgjner jvyy cerqvpg gung V 1-obk, ohg VEY V'z tbvat gb 2-obk gb nibvq zl yrtf trggvat oebxra.

Comment by frontier64 on The Incomprehensibility Bluff · 2020-12-07T16:18:30.400Z · LW · GW

One would expect that in the humanistic fields this kind of bluff would be much harder to pull off, since you have less excuses to be obscure and not make sense

You would think so, but all of modern sociology is a competition between authors to coin as many new words and crazy theories as they can. So much so that experts in a particular field of sociology can't tell intentional gibberish from a valid article if the paper sticks to the standard form. See also https://www.youtube.com/watch?v=97FuO-hEhQo

Comment by frontier64 on What is the right phrase for "theoretical evidence"? · 2020-11-02T22:08:53.607Z · LW · GW

Should we stop there and take it as our belief that there is a 20% chance that they are effective? No!

You need not stop there, but getting an answer that conflicts with your intuitions does not give you free rein to fight it with non-evidence. If you think there's a chance the empirical evidence so far has some bias, you can look for the bias. If you think the empirical evidence could be bolstered by further experimentation, you perform further experimentation. Trying to misalign your prior in light of the evidence with the goal of sticking to your original intuitions, however, is not ok. What you're doing is giving in to motivated reasoning and then post-hoc trying to find some way to say that's ok. I would call that meta-level rationalization.

Comment by frontier64 on What is the right phrase for "theoretical evidence"? · 2020-11-02T21:53:38.347Z · LW · GW

Another phrase for Theoretical Evidence or Instincts is No Evidence At All. What you're describing is an under-specified rationalization made in an attempt to disregard which way the evidence is pointing and let one cling to beliefs for which one doesn't have sufficient support. Zvi's response wrt masks, where his intuition that they are effective butts up against the evidence that they aren't, has no evidentiary weight. He was not acting as a curious inquirer; he was a clever arguer.

The point of Sabermetrics is that the "analysis" that baseball scouts used to do (and still do for the losing teams) is worthless when put up against hard statistics taken from actual games. As to your example, even the most expert basketball player's opinion can't hold a candle to the massive computational power required to test these different techniques in actual basketball games.

Comment by frontier64 on What risks concern you which don't seem to have been seriously considered by the community? · 2020-10-28T22:57:42.029Z · LW · GW

I'm afraid that we're technologically developing too slowly and are going to lose the race to extraterrestrial civilizations that will either proactively destroy us on Earth or limit our expansion. One of the issues with this risk is that solving it runs directly counter to the typical solutions for AI-risk and other We're Developing Too Quickly For Our Own Good-style existential risks.

To prevent misaligned AI we should develop technology more slowly and steadily; MIRI would rather we develop AI 50 years from now than tomorrow, to give them more time to solve the alignment problem. From my point of view, those 50 years of slowed development may be what makes or breaks our potential future conflict with Aliens.

As technology level increases, smaller and smaller advantages in technological development time will necessarily lead to one-sided trounces. The military of the future will be able to absolutely crush the military from just a year earlier. So as time goes on it becomes more and more imperative that we increase our speed of technological development to make sure we never get trounced.

Comment by frontier64 on Philosophy of Therapy · 2020-10-28T00:21:25.541Z · LW · GW

Thanks for your enlightening post; I feel like it helped me build a broad outline of therapy. I have a question about how much info a non-therapist should have about therapy: do you think there's educational value in reading patient case studies as a layperson? And a more specific version of that general question: could reading case studies on a particular pathology help me better understand and empathize with my friends, acquaintances, and other people I meet who have a trait which is a less severe form of that pathology?

From my subjective experience:

I have a high school friend, Bob, who is well known--among both our mutual friends and his friends whom I've only met rarely--as a habitual liar and over-exaggerator. Throughout high school, and for many years of knowing him afterwards, I would more and more often respond to his lies with scorn, occasionally gossip with others about him, and in general held a lesser opinion of him than I would have had otherwise. Within the past year I read a few case studies on pathological lying, and it's actually helped me empathize with him quite well.[1] I had heard about and had a rough definition of pathological lying before, but I wasn't quite able to internalize that knowledge until I read the case studies.

I saw many of the same behaviors described in the pathology in Bob. But the most important takeaways for me from those case studies are the proper ways to peacefully coexist with Bob and a greater ability to empathize with him now that I have non-malicious theories to explain his behavior. Overall it's improved my opinion of him significantly and allowed me to speak with him again with significantly less tension.

A similar thing has happened with one other friend and reading case studies on a different pathology.

I want to be clear that I am not predicting that any of my friends have disorders; Bob's propensity to lie hasn't negatively affected his life too badly from what I've seen, beyond him losing a few friends who were very sensitive to lying. But I've heard that you can view pathologies as the extremes of personality traits.[2] If so, then it would stand to reason that studying a personality trait at one extreme can give you a glimmer of a similar personality trait in moderation.

I still see a non-negligible chance that my behavior here could be a mistake and that reading a case study and viewing a friend as a less extreme version of the patient in that study is just as bad as pathologizing.


  1. I've seen it referred to as "pseudologia fantastica" as well. Apparently it's not a listed disorder, but it is still treated as one by some psychologists? https://pubmed.ncbi.nlm.nih.gov/12108140/. ↩︎

  2. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4125199/ I'm not sure if this is widely accepted or up to date. ↩︎

Comment by frontier64 on Top Time Travel Interventions? · 2020-10-27T21:06:38.301Z · LW · GW

A suitable answer to this problem would require at minimum a few months of focused planning. And after that's done, I'm sure the preparation period will last a few further months. Time in the present should be minimized to reduce the risk of something happening in the modern world that makes the time travel impossible. But making sure I've compiled all the helpful technology and information to bring back with me from the present will probably take a medium-sized team of scientists and engineers a few months' work, and even more to make sure it's all properly summarized and indexed. So I'm going to write a full answer eventually, but it will be an answer without the months of thought that would be required if this challenge were real.

The opportunity to go back in time with a requirement that it be done at the drop of a hat, with nothing but the clothes on one's back and the thoughts in one's head, would be a godsend to humanity. The opportunity to go back in time bringing future technology and materials with me, while getting to prepare ahead of time and carry tomes of knowledge, is an opportunity to be God. I believe that having such an opportunity and spending it on going back ~20 years to slow AI risk or try to make people a little more moral would be a waste.

Comment by frontier64 on A tale from Communist China · 2020-10-20T12:20:30.369Z · LW · GW

I have cited my sources in both my original comment and the followup, and I have included footnotes in my last reply. I cited Wikipedia for my original claim that the US was not involved in Pinochet's coup. This comment is pure falsehood about what I've previously said, creating some weird, fake strawman that you can then complain about for not citing any sources.

But, whether it's right or wrong, the idea that the US was involved in the Pinochet coup is the conventional opinion

I don't care what most morons on the internet think. Read the Wikipedia article on US involvement in Chile: half the statements in there are that the US did absolutely nothing to help Pinochet's coup and only fucked up some earlier coup attempts, because that's the only logical conclusion one can come to when looking at the actual underlying facts and not some historian's bogus, lying opinion. You're the one making a claim: 'The US was involved in Pinochet's 1973 Chilean revolution.' You have to support that claim. What the hell am I supposed to do to argue against it if I don't even know its basis? Am I supposed to cite some random historian who says the US wasn't involved in the Chilean coup? No. That's stupid, and I don't believe that debates are supposed to be one side citing some historian who says this and the other side citing another historian who says the opposite. That would get us nowhere. But if you really want nonsense like that, go read this article: https://www.washingtonpost.com/opinions/on-us-involvement-in-chilean-coup/2013/12/12/a61e9ecc-6125-11e3-a7b4-4a75ebc432ab_story.html That is a conclusion from a person with knowledge who cites few underlying facts, just like your historian quotes.

If you want a response actually give me a theory of the case, give me some explanation for how the US was behind the coup. I'm disinclined to respond if you choose to strawman my comments again.

Comment by frontier64 on A tale from Communist China · 2020-10-20T04:52:34.856Z · LW · GW

You haven't quoted a single factual allegation that would be considered US involvement in Chile so there's nothing for me to contest here. The only quote that one could consider evidence is this supposed admission by Nixon:

"After a review of recordings of telephone conversations between Nixon and Henry Kissinger, Robert Dallek concluded that both of them used the CIA to actively destabilize the Allende government

I've explained that the US attempts to destabilize Chile prior to 1973 actually served to strengthen support of Allende, who was not well liked in the immediate aftermath of the election[1][2]. The acts of American aggression against Chile including the CIA-backed assassination of Chilean official René Schneider made Allende seem like more of a good guy than he actually was.

Besides that, I don't know what actual facts these random historians are basing their ultimate conclusions on; you didn't include that information anywhere. What is this "extensive evidence" these Peters are referring to? What exactly did the Nixon administration do to support Pinochet?

Did the US give him money? No, we engaged in trade.

Did the US give him weapons? No, he had weapons of his own.

Did the US train his soldiers? No, we only trained his economists.

If you are claiming that the US helped Pinochet's coup, you have to say what they actually did to help! You can't just point to a random historian stating an ultimate conclusion without citing any underlying facts and think that's that.


  1. He only got 36% of the vote and was running against two other candidates from anti-communist parties. (https://en.wikipedia.org/wiki/1970_Chilean_presidential_election) ↩︎

  2. He openly assassinated political rivals and let communists sow terror in the streets while he forced the military, ironically led by Pinochet, to crack down on anti-Allende riots. (http://nixontapeaudio.org/chile/517-004.pdf) (https://en.wikipedia.org/wiki/Augusto_Pinochet) ↩︎

Comment by frontier64 on A tale from Communist China · 2020-10-19T15:13:33.866Z · LW · GW

This meme that the US had anything to do with Pinochet's coup has to stop. Your own Wikipedia link says that the US did not have anything to do with the coup:

"Although CIA did not instigate the coup that ended Allende's government on 11 September 1973, it was aware of coup-plotting by the military, had ongoing intelligence collection relationships with some plotters, and—because CIA did not discourage the takeover and had sought to instigate a coup in 1970—probably appeared to condone it."

The CIA undertook efforts to destabilize Allende's regime in 1970 that were unsuccessful and only served to consolidate power around him, as the Chilean people were not fond of the idea that foreigners were attempting to kill their president. Pinochet's coup, however, was an internal affair: he was not contacted by the CIA, he received no monetary or military assistance from the CIA, and the CIA actually made his job harder by trying to start a military coup and completely failing three years earlier.

But you go beyond alleging mere US involvement and say that "The USA overthrew Chile's democratic government" as if the Chilean military was a branch of the US Armed Forces. That is false and completely out of line with reality.

Comment by frontier64 on The Darwin Game · 2020-10-15T14:59:24.125Z · LW · GW

I pledge to join if at least 5 people total have joined.

Comment by frontier64 on The Darwin Game · 2020-10-14T23:38:53.025Z · LW · GW

Auto-self-recognition is trivial when you have the ability to read your opponent's source code. You can detect that you're playing against a copy of yourself, and your 'opponent' will do the same. Although in this case you will lose a small number of points randomly deciding which bot will choose 3 and which will choose 2, assuming there is no deterministic way to differentiate the clones.

ETA: I see that you posted your comment before Isusr said you could view opponent's source.
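To make the self-recognition idea concrete, here is a minimal sketch, assuming a hypothetical tournament API in which each bot's move function receives the opponent's source as a string (the real game's interface may differ):

```python
import inspect
import random

class MirrorAwareBot:
    """Coordinates on the 3/2 split once it recognizes its own source code."""

    def move(self, opponent_source: str) -> int:
        my_source = inspect.getsource(type(self))
        if opponent_source == my_source:
            # Clone detected. With no deterministic tiebreaker, both copies
            # randomize between 3 and 2, losing a few points to mismatches.
            return random.choice([2, 3])
        return 2  # placeholder policy against non-clones
```

Since both copies run the identical comparison, they recognize each other symmetrically; the random choice is exactly the small, unavoidable coordination loss described above.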

Comment by frontier64 on The Halo Effect · 2020-10-12T22:05:58.534Z · LW · GW

Swapping pictures leading to predictions of higher intelligence is exactly the point Jeff makes. A symmetrical face is a piece of evidence that positively correlates with higher intelligence, just as a description of someone's accomplishments is evidence of intelligence. The description is much better evidence, but the attractiveness remains somewhat important.

Comment by frontier64 on Postmortem to Petrov Day, 2020 · 2020-10-04T16:04:53.770Z · LW · GW

You make a good point that I haven't seen posted elsewhere:

Chris got a message that he had to enter the codes, or else bad things would happen, just like Petrov got a sign that the US had launched nukes, and that the russian military needed to be informed

The attacker targeting Chris made this year's Petrov Day experiment more accurate to history than the one in 2019. In 2019 there was no incentive to push the button. In 2020 there was an apparent, but false incentive to do so, just like there was for Petrov.

Comment by frontier64 on Postmortem to Petrov Day, 2020 · 2020-10-04T15:42:21.446Z · LW · GW

That's a reasonable assumption, but it's wrong in this case. Ben greatly values both LessWrong staying up and this serious experiment celebrating Petrov day. But the experiment can be serious only if he commits to shutting down the site when somebody enters the codes. Ben thought there was only a 20% chance of that happening. So the other reasonable conclusion is:

Value of Petrov Day Experiment > 0.2 * Value of LessWrong not going down for a day

And Ben acted accordingly.

Comment by frontier64 on Babble challenge: 50 ways of sending something to the moon · 2020-10-03T20:38:32.127Z · LW · GW

I also had a candle in my field of view and a very strong belief that it could somehow help get me to the moon. There's something powerful about candles.

Comment by frontier64 on Babble challenge: 50 ways of sending something to the moon · 2020-10-03T20:23:53.784Z · LW · GW
  1. Pay SpaceX to send it for me.

  2. Purchase the space shuttle and relaunch it, crash land on moon.

  3. Pray to God to teleport the object to the moon.

  4. Find a wormhole to moon somewhere on Earth.

  5. Turn off the Large Hadron Collider when the particle is pointed on an intercept course for the moon.

  6. Steal rocket from SpaceX and use it.

  7. Get a really fast jet to go very fast and high into the air then fire a very fast bullet at our highest point towards the moon.

  8. Very large cannon.

  9. Cut a hole through the Earth, drop object through, slowly accelerate it with periodic explosions as it falls and then rises, object exits other side of hole at velocity required to reach moon.

  10. Pay a rocket scientist and some engineers to design and build me a rocket.

  11. Use very strong genetic selection pressure on animals with fitness being maximum throw speed; after many generations have animal with best throw launch ball at moon.

  12. Ask on Quora, "How do I cheaply send an object to the moon" and follow the highest rated response.

  13. Dropping a very large rock from a very tall height on a very long seesaw.

  14. Very strong nuclear explosion with object placed on top.

  15. Use cryonics to travel to the future and use cheap space travel there to get to the moon.

  16. Use Predict-O-Matic to determine what objects on Earth will go to the moon soon and select one of them.

  17. Become president of USA or another moon landing-capable country and then have space agency send object to moon.

  18. Very strong spring.

  19. Very sturdy and bendy tree branch.

  20. Build a very large tower then use any of the applicable previous tricks from the top (i.e. fire Very Large Cannon from top of the tower).

  21. Very strong chemical explosion with object placed on top.

  22. Really fast merry-go-round/centrifuge.

  23. Really hot fire sends hot air balloon soaring.

  24. Steal a good answer from somebody else in this thread.

  25. Compress water.

  26. Shine laser at moon; photons are objects.

  27. Create myth that something important is on moon and this particular object needs to go to moon to complete important ritual; somebody else will do it.

  28. Buy many model rockets, construct them in a larger rocket in different stages, fire.

  29. Make perfectly elastic+1 object and let it bounce infinitely to moon.

  30. Fast train.

  31. Long snake.

  32. Make Earth spin faster, velocity to reach moon easier to reach.

  33. Move Earth to the moon with massive Earth Engines.

  34. Wait for Aliens to come to Earth, steal Alien ship, reach moon.

  35. Make a very strong commitment to send an object to the moon when it's easy, force this commitment on my children and convince them to force the same commitment on their children, eventually space travel will be easy enough for one generation to reach the moon with the object easily.

  36. Stand on top of very large chair.

  37. Air gun with a lot of air.

  38. Multi-stage cannon.

  39. Get Netflix to sponsor a TV mini-series taking place on the moon, use budget to get to the moon.

  40. Marry an astronaut, have her take the object to the moon.

  41. Become an astronaut myself and take the object to the moon.

  42. Develop Psychic powers and launch object to the moon.

  43. Become president of a country not capable of a moon landing, improve their economy and technology, then have improved space program send object to moon.

  44. Large electrical explosion with object on top.

  45. Abuse the strong nuclear force to push an object with sufficient velocity to reach the moon.

  46. Claim that the many worlds interpretation means that the object is already on the moon in at least one worldline.

  47. Make a really fast jet and have it continually gain speed until it can reach the moon.

  48. Make a stick long enough to reach earth orbit then send an object at low impulse to the moon.

  49. Give up and claim that I don't even want to send that object to the moon at all.

  50. Just make some money and create a conventional rocket and launch it at the moon.

Comment by frontier64 on The Goddess of Everything Else · 2020-10-01T17:38:57.570Z · LW · GW

So the people left Earth, and they spread over stars without number. They followed the ways of the Goddess of Everything Else, and they lived in contentment. And she beckoned them onward, to things still more strange and enticing.

I was quoting the post.

Comment by frontier64 on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-29T18:02:25.044Z · LW · GW

Schelling pioneered the application of game theory to nuclear-armed conflict, its central and most important use. To say that game theory doesn't apply to nuclear conflict because we live in an imperfect world is just not accurate. Game theory doesn't require a perfect world, nor does it require that actors know each other's source code. It is designed to guide decisions made in the real world.

Comment by frontier64 on The Goddess of Everything Else · 2020-09-29T01:54:19.746Z · LW · GW

Spreading across the stars without number sounds more like a "KILL CONSUME MULTIPLY CONQUER" thing than it sounds like an "Everything Else" thing. I'm missing something of the point here.

ETA: Is the point that over time Man evolved to be what he is today, we have a conception of right and wrong, and we're the first link in the chain that actually cares about making sure our morals propagate forward as we evolve? So now the force of evolution has been co-opted into spreading human morality?

Comment by frontier64 on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-28T15:18:52.392Z · LW · GW

Your strategy is only valid if you assume that the community will have adequate knowledge of what's happened before wrt people who have provided information that should damage their reputation (i.e. confessed). The optimum situation would be one where we can negatively react to negative information, which will disincentivize similar bad actions in the future, but not disincentivize future actors from confessing.

From another line of thinking, what's the upside of not disincentivizing future potential confessors from confessing if the community can't take any action to punish revealed misbehavior? The end result in your preferred scenario seems to be that confessions only lead to the community learning of more negative behavior without any way to disincentivize that behavior from occurring again in the future. That seems net negative. What's the point in learning something if you can't react appropriately to the new knowledge?

If all future potential confessors have adequate knowledge of how the community has reacted to past confessors and can extrapolate how the community will react to their own confession maybe it is best to disincentivize these potential confessors from confessing.

Comment by frontier64 on Honoring Petrov Day on LessWrong, in 2020 · 2020-09-27T00:02:17.108Z · LW · GW

I think the lesson is that if you decide to launch the nukes it's better to claim incompetence rather than malice because then opinion of you among the survivors won't suffer as much.

Comment by frontier64 on Making the Monte Hall problem weirder but obvious · 2020-09-17T23:10:51.123Z · LW · GW

A few months ago I tried a similar process with my dad, who's pretty smart but, like most people, does not know the Monty Hall Problem.

I put three cards down, showed him one ace which was the winner, shuffled the cards so that only I knew where the ace was, and told him to pick a card, after which I would flip over one of the other loser cards. We went through it and he said that it didn't matter whether he switched or not, 50-50. Luckily he did not pick the ace the first time, so there was a bit of an 'uh huh' moment.

I repeated the process with 10 total cards. As I revealed the loser cards one by one, he started to understand that his chances were improving. But he still thought that at the end it was a 50-50 between the card he chose and the remaining card, although his resolve was wavering at that point.

I hinted, "What was your chance of selecting the ace the first time", he said, "1 out of 10", and then I gave him the last hint he needed saying, "And if you selected a loser what is that other card there?"

A few seconds later it clicked for him and he understood his odds were 9/10 if he switched with the 10 cards and 2/3 if he switched with the 3 cards.

He ended up giving me additional insight when he asked what would happen if I didn't know which card was the ace, I flipped cards at random, and we discarded all the worldlines where I flipped over an ace. We worked on that situation for a while and discovered that the choice to switch at the end really is a 50-50. I did not expect that.
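That surprising 50-50 is easy to check by simulation. Here is a minimal Monte Carlo sketch (my own illustration, not part of the original exchange) of both variants with 10 cards: a host who knows where the ace is, and a host who flips at random, with the worldlines where the ace gets revealed discarded:

```python
import random

def trial(n_cards: int, host_knows: bool) -> bool | None:
    """One round: return True if switching wins, None if the run is discarded."""
    ace = random.randrange(n_cards)
    pick = random.randrange(n_cards)
    others = [c for c in range(n_cards) if c != pick]
    if host_knows:
        # The host deliberately flips every loser except one, so the single
        # remaining card is the ace whenever the first pick was a loser.
        remaining = ace if ace != pick else random.choice(others)
    else:
        flipped = random.sample(others, n_cards - 2)  # host flips blindly
        if ace in flipped:
            return None  # ace revealed by accident: discard this worldline
        remaining = next(c for c in others if c not in flipped)
    return remaining == ace  # switching wins iff the remaining card is the ace

for host_knows in (True, False):
    kept = [r for r in (trial(10, host_knows) for _ in range(100_000)) if r is not None]
    print(f"host knows: {host_knows}, switching wins {sum(kept) / len(kept):.2f}")
# Prints roughly 0.90 for the knowing host and 0.50 for the blind host.
```

The knowing host concentrates the missing 9/10 of the probability onto the one card he leaves unflipped; the blind host's survivors are symmetric between the pick and the remaining card, which is why the discarded-worldlines variant really is 50-50.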

Comment by frontier64 on Ugh fields · 2020-08-24T14:39:35.182Z · LW · GW

This phenomenon you call Ugh Fields sounds like a less serious form of PTSD. While most people think PTSD is about constantly reliving the traumatic event, one of its most important symptoms is that the patient tries to avoid thinking through their trauma when it does come up.[1] Something will trigger the memory of their trauma, but then they'll force themselves to stop thinking about it while their mood suffers. One of the methods for treating PTSD[2] involves getting the patient to actually think seriously about the trauma that happened to them and talk it through out loud with somebody; that confrontation with what they've constantly avoided helps them get over the negative conditioning.

I'm fairly certain both Ugh Fields and PTSD describe the same mental process, just at different levels of severity.


  1. https://www.mayoclinic.org/diseases-conditions/post-traumatic-stress-disorder/symptoms-causes/syc-20355967 ↩︎

  2. https://www.youtube.com/watch?v=nyMso_CFU7s ↩︎

Comment by frontier64 on Book Trilogy Review: Remembrance of Earth’s Past (The Three Body Problem) · 2020-07-26T23:33:11.461Z · LW · GW

The Three Body Problem doesn't say that environmentalism leads to a desire for the world to be conquered by Trisolaris. Liu explains that environmentalism is often a symptom of a deep hatred for humanity. So rather than a link of direct causation, environmentalism and Trisolaran worship are both correlated with an inner hatred for humanity. Many of these environmentalist intellectuals come from a place of hatred towards humanity rather than love for the natural world. It is hard to consciously acknowledge one's hatred for humanity, and it's next to impossible to hold that view publicly. So these hateful people attach themselves to the ideology that most closely aligns with their inner desires: environmentalism.

At its most pure, environmentalism really is misaligned with humanity. The concept of environmentalism itself necessitates that it be incongruous with humanism. There are paths that improve the human race, and paths that improve the environment. If these paths were co-aligned, there would be no need for separate terms and no conflicts between environmentalists and humanists. But because there are conflicts, the paths must be separate. What is best for humans cannot in all cases be best for the environment, and vice versa. An environmentalist is someone who comes to a fork in the road and always chooses the path which is best for the environment, to the inevitable detriment of humanity.

If you put some thought into it (or you could say, take a cynical view) you can see then that environmentalism is for many a front for their inner hatred of humanity. Therefore when a new opportunity comes to support a powerful group with the explicit goal of destroying humanity, the ETO, well these environmentalist intellectuals just can’t pass it up. It’s the same way the black nationalists used to ally themselves with civil rights groups because being an explicit black nationalist wasn’t tenable at the time. Now that their true ideology has a place in the world these people have splintered off their previous host ideology and become more true to themselves and their real goals.

You also misunderstand the point of the Dark Forest theory. You misunderstand it most explicitly here,

What is most weird is, all of this applies back on Earth. Life still expands without limit. Resources remain finite. Offense is far cheaper than defense. Why don’t we inhabit the dark forest?

You have completely neglected two of the central points of Dark Forest theory: that the cosmic civilizations in question have completely separate cultures, and that the number of civilizations in the galaxy is enormous. At the very least, the level of cross-cultural communication required to sufficiently allay the fears of both sides is nigh impossible between alien civilizations. Earth doesn't have the issue of cross-cultural communication because, compared to cultures from an alien civilization, the Earth has just one culture. The chain of suspicion stretches long, and when your only method of communication requires rough translation, it is very easy to get stuck at a point where Civilization A thinks there's a 25% chance that Civilization B thinks that Civilization A is going to launch a preemptive strike against them. Civ A and Civ B are in a continuous prisoner's dilemma where if one chooses defect, the other faces absolute annihilation.

The second point you forget is that it takes just one malevolent civilization out of the millions in the galaxy to enforce the dark forest. Even if, out of a million civilizations, 990,000 disregard coordinates for a dark forest strike, and even if 9,990 of the rest only respond with subtle probing, all it takes is one of those ten remaining trigger-happy civilizations to destroy a whole star system. We never reached anywhere near that number of distinct civilizations on Earth.

Lastly, while I can see how you would dislike Liu's take on the ideology of equality, you haven't provided anything to dispute it. Many people in the modern age explicitly endorse the goal of harming the better-off to promote equality in the world. Some of the most common scales used today to judge the overall well-being of different nations focus entirely on economic and social inequality. Liu takes this liberal perspective to its extreme, and quite accurately at that. Escapism is the most extreme form of inequality, and any liberal of the modern day would detest the very idea of the rich and powerful getting to flee a burning Earth to live forever while the rest of us suffer and die.

Comment by frontier64 on Book Trilogy Review: Remembrance of Earth’s Past (The Three Body Problem) · 2020-07-26T23:22:45.063Z · LW · GW

Your criticism that Alpha Centauri isn't actually a three-body system, and instead operates as a binary star system with another nearby star, is palpably anal-retentive. Liu takes a small liberty to create a difference between his fictional world and the actual world that's still clearly well within the laws of physics. That difference creates a cool situation wherein a tough problem in physics serves as the backdrop for an alien situation. He describes the three-body problem accurately and doesn't just use it as window-dressing. Yet you fault him for this smart inclusion. Poor take.

Comment by frontier64 on Karma fluctuations? · 2020-07-24T20:06:53.372Z · LW · GW

I understand this position and it's totally relevant to the question of when to downvote. However, I don't think it has much relevance to the question of when a user should upvote. If a person isn't interested in certain genres of topics, downvoting every post on one of those topics wouldn't improve discourse; it would lead to uniformity of topics. Only the few topics for which more people (accounting for karma weights) are interested in than uninterested in would remain more upvoted than downvoted. However, with the current system most people understand that this situation is exactly what the novote is for. If one doesn't have any interest in AI research then one should filter those posts where they can and disregard them where they can't.

I like the idea of automatically figuring out what topic a post is based on the upvote, novote, and downvote patterns of different users. Maybe some combination with that and the topic tags on posts could lead to a different, individualized karma system. Votes from users with similar interest in topics would have more weight for each other than they do for users with disparate interest in topics. Seems a little echo-chambery, but I see value in the idea.

I do see a bit of an incongruity between what you're describing and the comment you linked to, which I can't square. In actuality, eapache seems to have the ability to see the value in AI research topics but is regardless uninterested in the topic himself. What you're describing would lead to eapache not being able to discern value or lack of value in AI research topics, because he's uninterested and thus hasn't invested the time to be able to appreciate them.

Comment by frontier64 on Open & Welcome Thread - July 2020 · 2020-07-09T23:14:35.729Z · LW · GW

Having read half of the sequences first and then Dune later, I have the impression that 80-90% of Eliezer's worldview (and thus a big part of the LW zeitgeist) comes directly from the thoughts and actions of Paul Atreides. Everything LW, from the idea that evolution is the driving force of the universe, to the inevitability of AGI and its massive danger, to conclusions on immortality, and the affinity for childhood geniuses who rely on Bayesian predictions to win, is heavily focused on in Dune. Sure, Eliezer also references Dune explicitly; I don't think he's hiding the fact that Dune inspires him or plagiarizing ideas. I share this connection I found because for me it was a big revelation. I never really knew how much of this community's behavior is based on Dune. I assume the feeling is similar when somebody reads The Hobbit after reading Lord of the Rings or watches Star Wars III after sitting through VII.

I'm left with the vague idea that people in general form most of their worldview from literature they find inspiring. Most of my personal differences with Eliezer clearly stem from me having read Heinlein instead of Herbert as a child. I think it's the same for many Christian people and The Bible or other Christianity-inspired texts.

I'm also left with some questions for people who read Dune before they found LessWrong: are the parallels and close inspiration immediately apparent to you? Do you see HPMOR Harry as a young Paul Atreides? Or do you think I overemphasize the connections between Eliezer's writings on rationality and Dune?

Comment by frontier64 on Karma fluctuations? · 2020-06-11T21:43:01.994Z · LW · GW

Vote aggregation is how we get "this is worth being seen by people reading LW" from "I want to see this." Individuals know a bit more about their own personal preferences than they do about the personal preferences of others. Asking people to judge the personal preferences of others can only lead to a decline in accuracy of reporting.

I think this shift from personal preference to a focus on curating content for others shifts the approach to voting in a way that is likely to better result in votes that reflect what is worth reading when a person comes to the site rather than what people on LW like.

I think the point is that what people on LW like should be worth reading. I can imagine a few different general situations, and only in the most ridiculous one does changing the voting basis from "what you want to see/don't want to see" to "what you think the community should read/should not read" improve the content.

Situation 1: LW users have good taste and want to see more worthy posts.

In this case the switch would at best cause no change and at worse decrease the quality of posts because users may be worse at judging the opinions of others than they are at judging their own opinions.

Situation 2: LW users are unable to judge a post's worthiness.

If the users aren't able to judge worthiness, voting on what they think is worthy can't actually improve their score.

Situation 3: LW users have bad taste and what they want to see has little/negative correlation with worthiness, but they can still judge worthiness well if asked.

In this case switching the voting system would be effective. But this sounds ridiculous on its face! How could someone capable of judging worthiness not actually prefer worthy content to unworthy content?

Comment by frontier64 on Open & Welcome Thread - June 2020 · 2020-06-11T18:12:53.944Z · LW · GW

Ideological conformity in the school system is not uniform. A person turning left when everybody else is turning right is much less likely to be a conformist than someone else turning right.

ETA: Without metaphor, our priors for conformist vs. well-reasoned are different for young rightists or non-leftists in the school system.

Comment by frontier64 on Open & Welcome Thread - June 2020 · 2020-06-11T17:13:33.526Z · LW · GW

A 16yo going into the modern school system and turning into a radical leftist is much more often a failure state than a success state.

Young leftist conformists outnumber thought-out, well-reasoned young leftists by at least 10 to 1, so that's where our prior should be. Hypothetical Wei then has a few conversations with his hypothetical radical leftist kid, and the kid reasons well for a 16yo. We would expect a well-reasoned leftist to reason well more often than a conformist one, so that updates our prior, but I don't think we'd go as far as saying that it overcomes our original 10 to 1. Well-reasoned people only make their arguments sound well-reasoned to others maybe 90% of the time max, and even conformists can make nice-sounding arguments (for a 16yo) fairly often. The numbers work out as sketched below.
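A rough worked version of that update, in Python (the 10:1 prior and the 90% are from the paragraph above; the conformists' 50% is my own stand-in for "fairly often"):

```python
prior_odds = 10.0                  # 10:1 that a young radical leftist is a conformist
p_good_given_well_reasoned = 0.9   # well-reasoned kids sound well-reasoned 90% of the time
p_good_given_conformist = 0.5      # assumed stand-in for "fairly often"

# Odds-form Bayes: multiply the prior odds by the likelihood ratio of the observation.
likelihood_ratio = p_good_given_conformist / p_good_given_well_reasoned
posterior_odds = prior_odds * likelihood_ratio
print(f"posterior odds conformist:well-reasoned = {posterior_odds:.1f}:1")  # ~5.6:1
```
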

Even after the conversations, it's still more likely that the hypothetical radical leftist kid is a conformist rather than well-reasoned. If hypothetical Wei had some way to determine to a high degree of certainty whether his kid was a conformist or well-reasoned, that would be a very different case, and he likely wouldn't have the concerns about his children being indoctrinated that he expressed in the original post.

Comment by frontier64 on Open & Welcome Thread - June 2020 · 2020-06-09T18:43:15.550Z · LW · GW

Seems like an appeal to (possibly false) authority. It may not be a fallacy, because there's a demonstrable trend between technological superiority and moral superiority, at least on Earth. Assuming that trend extends to other civilizations off Earth? I'm sure there's something fallacious about that; maybe it's too geocentric.