Posts

frontier64's Shortform 2021-08-28T20:53:55.869Z

Comments

Comment by frontier64 on I wanted to interview Eliezer Yudkowsky but he's busy so I simulated him instead · 2021-09-19T00:55:42.130Z · LW · GW

It seems the state of the art for generating GPT-3 speech is to generate multiple responses until you have a good one and cherry-pick it. I'm not sure whether including a disclaimer explaining that process is still helpful. Yes, there's a sizable number of people who don't know about that process or who don't automatically assume it's being used, but I'm not sure how big that number is anymore. I don't think Isusr should explain GPT-3 or link to an OpenAI blog post every time he uses it, as that's clearly a waste of time even though there's still a large number of people who don't know. So where do we draw the line? For me, every time I see someone say they've generated text with GPT-3 I automatically assume it's a cherry-picked response unless they say something to the contrary, because I know from experience that's the only way to get consistently good responses out of it. I estimate that a lot of people on LW are in the same boat.

Comment by frontier64 on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-11T18:25:34.006Z · LW · GW

You know, I wrote a whole reply but your comment isn't worth responding to.

Comment by frontier64 on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-11T16:31:50.882Z · LW · GW

I agree that it isn't strong evidence. I should have made my point more explicit. My point is that ozziegooen mentions the vitriol as if it were evidence that DiAngelo's argument has value and should be discussed. If anything, it's evidence against that notion (however weak it may be).

Comment by frontier64 on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-11T16:28:32.741Z · LW · GW

My response is fine in tell culture too, no? I'm stating what I believe to be true of their comment. Why is it okay for ozziegooen to speak truthfully in his comment but not okay for me to reply truthfully with respect to my impression of his comment?

Comment by frontier64 on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-08T22:50:13.027Z · LW · GW

I agreed that an "I am tapping out of this" comment is helpful until I experienced it and realized that the experience is quite unpleasant. There's something particularly stinging about being told that a discussion with you can't be productive. I think I wouldn't be affected at all if the non-response were "I am tapping out of this." without any particular reason being given.

I think it has to do with Jordan Peterson's 9th rule for life, "Assume the person you're listening to might know something that you don't". That just makes sense to me. I don't quite understand why some people care about vitriolic comments on the internet. To me, vitriolic comments are par for the course and bringing them up is an obvious attempt to play the victim card for sympathy. But hey, ozziegooen seems like a well-written dude so maybe he has a good explanation for why I should care about whether or not people have written scathing online reviews of DiAngelo's book. Or maybe he has another insight into the topic that I couldn't predict. Definitely his last response to me gave me a lot of information I didn't already know, so for me the interaction was a net positive.

Saying "we can't have a productive discussion" in response to a two sentence reply completely goes against that 9th rule. It's an acknowledgement that the responder is listening to me, because he responded to my comment. But he's also stating that he thinks I have literally nothing to offer him by way of new information and vice-versa. That's pretty low!

I am certainly more sensitive on this issue than most people here. If ozziegooen's comment wouldn't seem insulting to others then really the issue lies entirely with me and I'll adapt to the style of decorum that fits most people. I don't want to jump at conduct that the LW community thinks is fine.

On a different note, I agree with you that people should feel free to tap out of discussions. I don't mind if someone doesn't wish to discuss further. I've tapped out of many conversations myself for a variety of reasons and sometimes the reason is I don't think the conversation will be productive.

I'm not going to respond any further after this comment because I don't think this back-and-forth will be productive. [1]


  1. I'm just saying this to give you the experience. I don't mean it at all. But even then I feel bad saying it because it sounds so rude to me! ↩︎

Comment by frontier64 on What Motte and Baileys are rationalists most likely to engage in? · 2021-09-08T02:20:25.713Z · LW · GW

I think there's a common Motte and Bailey with religion:

Motte: Christianity and other religions in general are almost certainly untrue. Adherents to religions have killed many people worldwide. The modern world would be better if more religious followers learned rationality and became atheists.

Bailey: The development and continued existence of religion has on the whole been a massive net negative for humanity and we would be better off if the religions never existed and people were always atheists.

I don't even think the bailey is outright stated that often by smart rationalists as much as it is sometimes implied, and it's only stated outright by zealous, less-smart atheists. The zealous atheists are likely succumbing to the affect heuristic and automatically refute the assertion that religion may have been a net positive historically even if it is no longer worthwhile. But they most often defend the claim that religion was terrible for humanity by citing the Motte.

Comment by frontier64 on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-07T23:21:37.863Z · LW · GW

For instance, when I read a book of physics, I don't expect the author to cater to my folk definitions of "work", "energy", "power", "momentum"

Since you assume that physics book authors won't cater to a layman's ordinary definitions of the physics terms of art, you may be surprised when reading most books on classical physics. The authors go to painstaking effort to make their content accessible to laypersons. I have not yet read a textbook on classical physics that didn't take the time to explain that "work" in a physics context means force × distance and only refers to what you do at your day job if you're pushing a cart around or lifting a tray of food. I know this because I was a computer science undergrad who took a few physics courses as electives and was surprised at how accessible the textbooks were, given that they were of course designed for physics undergrads.

Also no physicist claims that their definitions are the "correct technical" ones or are somehow better or more useful than the ordinary definitions. Many physicists I know feel that physics terms which share a spelling with colloquial terms should be changed on the physics side of things to prevent confusion. Or at the very minimum the distinction should be kept clear.

Comment by frontier64 on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-07T18:23:43.398Z · LW · GW

I find non-responsive responses to be entirely lacking any sort of good faith and they come across as quite rude. It's an attempt to signal you hold some sort of moral high ground, that you think you're literally too good to even have a discussion with someone else. It's insulting. If I don't want to respond to a particular comment I don't respond. I don't say "I don't think talking with you will be productive."

Comment by frontier64 on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-07T17:35:47.318Z · LW · GW

Defending a position by pointing out that a portion (however big or small) of the critics of the position are 'vitriolic' isn't actually a valid argument. If people really hate something so much that they get emotional about it, that's still pretty good evidence that the something is bad.

Comment by frontier64 on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-07T17:33:37.780Z · LW · GW

It's somewhat ironic how she defends her actions as not racist because her definition of racism requires institutional power, yet the actual power dynamics of her position as an outside consultant often grant her massive institutional power, which she then abuses. Folks rightfully deride DiAngelo's arguments on white fragility and racism because she takes every opportunity to force her opinion upon others using her role as "racial equity consultant" rather than actually debate the issue. Her method is to get the white people working at a company to understand and agree with her position by force, with the implicit threat that they will be fired, transferred, or reprimanded if they don't comply. Yet somehow she gets away with the lie that she's not racist because she's not abusing institutional power.

Additionally, systemic racism as you call it is still factually invalid. Hiring trends show that white people have the least in-group bias out of all hiring groups. The assumption is that systemic racism exists, that it's perpetuated by white people, and that it benefits white people, but that assumption is just the map, and you should have a good reason to believe that the map is accurate before acting on it. The majority of race-based academic literature puts the cart before the horse. It's focused entirely on how to solve structural racism, with little to no research demonstrating that structural racism actually exists in the first place.

Comment by frontier64 on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-07T17:25:29.175Z · LW · GW

The problem is this combines with DiAngelo's other constant assertion that the best way for white people to understand their racism is to listen to black people, who are preternaturally gifted at understanding racism. The end result is even worse than what Ben said originally: when a black person tells you you're racist, your best response is "of course I was..."

Comment by frontier64 on Gravity Turn · 2021-08-31T17:54:43.044Z · LW · GW

No, I don't think the academic process is aligned with making paradigm-shifting breakthroughs. Scott Alexander wrote a good piece that addresses this question. His purpose was to rebut the notion that modern scientists are way less efficient than their historical counterparts. I generally agree with his conclusion that the modern academic research apparatus isn't hampering scientific advancement in any way that would affect the trendlines. Yet I think he also cites a lot of good evidence which rebuts the opposite notion: that academic research has done anything positive for scientific advancement. Although Scott himself doesn't come to that conclusion.

Most of the examples of paradigm-shifting work I can think of came from giving people who were very smart a large stipend to live off of and allowing them to research what they wanted (Newton, Leibniz, even Einstein counts, as working as a patent examiner essentially gave him a stipend and an office where he got to do thought experiments). The other similarly effective method is getting a lot of smart people working together, giving them a bunch of money of course, and also giving them a goal to accomplish within a few years (e.g. the Manhattan Project, cryptography protocols).

Money and smart people seem to be a good baseline for what's required for scientific advancement. Academic research has a lot of money and smart people, that's for sure! But it also has a lot of other features, the features you describe in your post, and it's not clear to me that they actually do anything. Based on historical evidence it seems that if we gave research grants to smart and personable university graduates and gave them carte blanche to do with the money what they wished, that would work just as well as the current system.

Comment by frontier64 on A Small Vacation · 2021-08-30T23:22:19.924Z · LW · GW

Yes I think it's more accurate to say that Europeans were greedy but no greedier than any other culture at the time. And in fact they were the first group that decreased their greed and created the modern ethical understanding that taking can be wrong.

Comment by frontier64 on A Small Vacation · 2021-08-30T23:20:10.134Z · LW · GW

This sounds like nation building, but instead of doing it there we're doing it here. So when it goes wrong, as it almost always does, we'll have a haven for terrorists in Nevada instead of western Asia. I know this is scathing but really, what is the benefit to doing it here instead of doing it there, and why would it be more likely to succeed?

Comment by frontier64 on Beware of small world puzzles · 2021-08-30T23:12:25.231Z · LW · GW

What further confounds the problem is that even if a smart researcher observes a quirky human behavior that doesn't align with homo economicus and doesn't in any way say it's irrational, many online news writers and people who cite research articles will start citing the researcher's paper and explaining how it means humans are irrational.

Comment by frontier64 on frontier64's Shortform · 2021-08-28T20:53:56.146Z · LW · GW

The ability to destroy parity between decisions made by the real agent and simulations of the agent lets the agent win games against simulator opponents.

  1. Different types of incoherence between real vs simulated choices grant different levels of power.

You're playing a game against a simulator Allen the Alien. The game is that you and Allen separately choose one out of 10 paths. If you pick the same one Allen wins; otherwise you win. With no simulation Allen has a 1/10 chance of winning while you have a 9/10 chance of winning. If Allen simulates you accurately he is then certain to win.

If you're fully able to notice you're being simulated and completely obfuscate this fact from the simulator, then simply having all simulations pick path 10 while the real you picks path 1 is a guaranteed winning strategy. To achieve this you need some sort of pre-chosen general formula to break parity based on whether or not you're in a simulation.

You might not be able to tell whether you're in a simulation while still having the ability to break parity between simulations of you and reality. Randomizing your path choice, such that the simulation has a 1/10 chance of choosing each path and reality's choice has no correlation to the simulation's choice, doesn't give you an automatic win, but it completely destroys the simulator's advantage.
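
A minimal sketch (a toy Monte Carlo in Python, assuming the randomizing strategy above and an otherwise perfect simulator; the names and numbers are just illustrative) of how that plays out:

```python
import random

N_PATHS = 10
TRIALS = 100_000

def play_round():
    # Allen runs a perfect simulation of your decision procedure. If that
    # procedure is "draw a path uniformly at random", the simulation's draw
    # is uncorrelated with the fresh draw you make in reality.
    simulated_choice = random.randrange(N_PATHS)  # what Allen's simulation outputs
    real_choice = random.randrange(N_PATHS)       # what you actually pick
    return simulated_choice == real_choice        # Allen wins on a match

allen_wins = sum(play_round() for _ in range(TRIALS))
print(f"Allen's win rate against a randomizer: {allen_wins / TRIALS:.3f}")
# ~0.10, i.e. the same 1/10 he'd have with no simulator at all.
```

Because the real draw is independent of the simulated draw, Allen's match rate falls back to 1/10 no matter how accurate his simulation is.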

  1. Sometimes creating incoherence between different simulations is more powerful than just creating incoherence between all simulations and reality.

Comment by frontier64 on Is top-down veganism unethical? · 2021-08-23T21:29:45.634Z · LW · GW

If we're talking about whether top down meat is viable or not we don't need to appeal to all vegetarians and vegans. The question isn't, "if you gave a brainless chicken meat to a random vegetarian right now would they eat it?" The question is, "if you developed brainless chicken meat could you, with a few years of marketing, wide supermarket adoption, and cultural change, get a customer base to eat it and consistently buy it?"

Comment by frontier64 on Is top-down veganism unethical? · 2021-08-22T20:03:56.568Z · LW · GW

Yes, I think that top-down, as you call it, is a much more effective method than lab-grown meat. In fact my original understanding of lab-grown meat, back when it was much more theory than actuality, was that top-down modification of animals would be the go-to method.

I think it will be a lot easier if you set the bar higher. Maybe just try to create animal breeds that will satisfy pescatarians. Creating chickens that have the same level of consciousness as fish is probably a lot easier than getting chickens that have absolutely no nervous system. This might cause many vegans to jump ship too.

And I don't think we need gene editing technology to create the types of animals we're looking for. As you showed with the evolution of chickens, we can already do a lot with selective breeding. I'm no geneticist but I wonder what you could do by selecting for lower electrical activity in the brain or even smaller heads. Or maybe just selecting for general undirected behavior. Create a breed of really stupid, passive chickens maybe. Actually this seems more unethical than continuing with the system we have today.

Comment by frontier64 on Gravity Turn · 2021-08-17T21:08:12.072Z · LW · GW

This seems like it's good advice for someone trying to become a career researcher, but is it really best to have so many career researchers? The prototypical physics grad student (more than a couple of my friends are physics grad students, so I may just have a biased perspective) starts off with courageous ideas about how he's going to push science forward and restructure physics. But then he encounters the rigamarole of the whole process you describe in your post and it stops him from doing what he originally dreamed. He needs to get published. He needs to do original research. He needs to help his advisor and other professors do their research. He needs to do all of that because otherwise he won't be respected enough to actually have a career in physics research. But doing that kind of work isn't why he got into physics in the first place!

So the typical grad student either realizes that accomplishing his goal of restructuring quantum mechanics isn't in line with the practical necessity of having a career, or he gets shunted out of academia because there are 100 other students who optimized their behavior towards becoming researchers and they all look better on paper than him. If none of the grad students optimized towards becoming career researchers and instead really focused on what's important to them this problem wouldn't exist, but the incentives are misaligned and it takes just a few defectors to force everybody else into defecting too.

The method you analogize to a gravity turn is highly optimized to turn grad students into career researchers, but it isn't optimized at all to push science forward in any meaningful way. The gravity turn analogy romanticizes the whole career researcher situation. Playing the game (becoming a respected researcher so you can earn a paycheck, have the respect of your colleagues, and occasionally do some effective work) is not the dream.

Comment by frontier64 on Pedophile Problems · 2021-08-15T18:48:33.440Z · LW · GW

Your hyperlink to Jordan et al. got replaced by another copy of Landgren et al.

Comment by frontier64 on Importance of Ideas and People We Disagree With · 2021-08-15T16:00:47.332Z · LW · GW

This idea is very similar to Hans-Hermann Hoppe's covenant communities. Groups of property owners who all contract with one another to limit the kinds of tenants who live on their property to a particular group. This gives people who want to live in a certain kind of community the opportunity to do so. People who wish to live in total anarchy or a different kind of community only need to live outside of the covenant community.

You might like to read a few of his articles online and maybe pick up a copy of Democracy at a bookstore or library if you haven't already.

I should mention that I agree with the worldview you express here, I consider myself a Hoppean libertarian. It just so happens that discussing libertarian economic and social philosophy doesn't generate much interest on LW.

Comment by frontier64 on Transitive Tolerance Means Intolerance · 2021-08-15T09:00:11.639Z · LW · GW

Maybe you can solve this by just not caring about what Z believes in the first place? If you think his views are reprehensible so you support him being fired you're already in a failure state. This whole discussion of whether it's really ok to cancel A because he's friends with B and B is a Jew is actually a net negative. It just cements the idea that it's ok to cancel B in the first place. I picture the Soviet Politburo arguing that Beria's going a little too far sending his secret police to put the friends of political dissidents in the gulags.

These sorts of discussions move the Schelling point and never actually work as pushback towards the problem they're discussing.

People don't behave very differently based on their stated beliefs. There are White Nationalist programmers and there are Black Power programmers, and they program about the same. Maybe they hang out with different people on the weekends and play different board games, but that doesn't matter because their job is programming. Nor does it quite make sense to fire either of them because some mentally handicapped people who spend all day on Twitter decided to gang up on a programmer today.

Comment by frontier64 on The Future: Where are the Colors and the Sports? · 2021-08-15T00:13:10.211Z · LW · GW

Old photographs yellow.

Comment by frontier64 on Why Our Kind Can't Cooperate · 2021-08-10T01:34:45.802Z · LW · GW

I agree. I don't often say I agree, for efficiency's sake. You've made the point more eloquently than I could, and my few sentences in support of you would probably strengthen your point socially, but they wouldn't improve the argument in some logical sense.

I love signaling agreement when I can do it and be just as eloquent as the writing I'm agreeing with. Famous authors put a lot of work into the blurbs they write recommending their friends' books. And that work shows. "X is a great summertime romp, full of adventure!" sure is a glowing recommendation, but it's not that eloquent and I can tell the author didn't put much time into writing it. Guess they didn't think X was worth the time to write a really nice blurb. But when a good author writes an interesting blurb for a book it gives me very high expectations.

I think this applies to ideas as well.

Comment by frontier64 on Cognitive Impacts of Cocaine Use · 2021-08-01T19:55:26.321Z · LW · GW

Better controlled studies found that cocaine dependent participants had mild cognitive impairment and structural differences; however, this was less than the cognitive impairment of alcohol dependent participants. Structural differences were less than psychopathological disorders such as schizophrenia.

This sounds pretty bad? Especially so because there are other stimulants out there without studies showing they cause cognitive impairment. "Better than being an alcoholic or schizophrenic" is not much of an endorsement.

Comment by frontier64 on Incorrect hypotheses point to correct observations · 2021-08-01T06:45:25.792Z · LW · GW

You're doing good work with the curation and it's very effective at bringing important posts back into the reader's eye so thanks for that! I would probably have never seen this post otherwise. I'm glad you're working on the system to iron out the kinks.

Comment by frontier64 on Incorrect hypotheses point to correct observations · 2021-07-31T05:26:16.632Z · LW · GW

Yes, I also find the curation tagline to be awkward. The first few times I saw it I assumed that the original author had edited their post to include the curation comment at the start. I only discovered this was not the case by reading the comments here. Seems that editing another user's post is bad form. Adding your own comment to the start of their post is especially obtrusive, but in this case obtrusive is way better than non-obtrusive so that's a plus. The current curated tag should be enough for the website itself. Maybe the subscription e-mail can retain the curation notice while the original post remains unedited.

Unless the original authors were contacted and agreed to the curation notice being included at the top of their posts. Then everything is fine as it is.

Comment by frontier64 on Prediction-based-Medicine instead of E̵v̵i̵d̵e̵n̵c̵e̵ ̵b̵a̵s̵e̵d̵ ̵M̵e̵d̵i̵c̵i̵n̵e̵ Authority-based-Medicine · 2021-07-31T05:14:51.587Z · LW · GW

I think his point is that the same failure state Measure mentioned, doctors giving patients poison and correctly predicting outcomes, is just as likely under the current clinical trial scheme.

Comment by frontier64 on Gaming Incentives · 2021-07-29T20:31:04.715Z · LW · GW

Saying that the studies are conflicting really misrepresents reality. I'm not trying to get at the object-level argument here, but you brought it up and you terribly misrepresented it. It is very clear that biological men have a massive advantage over biological women in sports regardless of testosterone suppression. Every single study referenced in the wiki article you link concludes that males have an advantage over females; the only question is how significant that difference is, and all of the studies conclude that it's significant and outcome-determinative.

Comment by frontier64 on Importance of Ideas and People We Disagree With · 2021-07-29T16:41:37.888Z · LW · GW

You're advocating for two different positions here.

But in practical sense, you want there to be people who meet their love at 11 and spend their whole lives together and then you want there people who start their sex lives at 11 and die at 100 while having a threesome. [....] You want there to be people distributed on both ends of the spectrum, with most somewhere around the middle

On the one hand you say there should be a diversity of behavior. As you describe it you want people who are wholly monogamous and people who are polygamous because there are individual evolutionary advantages to each kind of behavior.

I wouldn't want nazis, communists, murderers or pedophiles enact their ideas

But for other areas where there can be diversity you say you just want a difference of ideas and opinions. And your justification for the diversity of some ideas is that it will make other people less likely to act on that idea. You're justifying a diversity of ideas by saying it will cause less diversity of behavior.

I see a conflict here not just in methods but in what the end goal itself is.

Comment by frontier64 on Revive Meetups? · 2021-07-29T14:39:57.727Z · LW · GW

Oh that's awesome. Thanks for the pingback. Yeah, seems like second Saturdays, so hopefully 9/11.

Comment by frontier64 on Revive Meetups? · 2021-07-26T20:18:09.075Z · LW · GW

I'll also be moving back to DC at the end of August for school. I'm unsure about my living situation right now, but if it turns out I can, I'd be happy to host a meetup for DC rationalists. There are also plenty of public and private spaces in DC, on university campuses and elsewhere, that'd be perfect for a meetup.

Comment by frontier64 on What does knowing the heritability of a trait tell me in practice? · 2021-07-26T20:08:19.032Z · LW · GW

My understanding is that heritability is a measure of predictive ability. Meaning that if a trait is 80% heritable and you want to guess whether or not Bob has that trait, then you'll be 80% more accurate if you know whether or not Bob's parents have the trait than if you didn't have that information. Likewise, for a very low heritability trait like having two legs, knowing whether or not Bob's parents have two legs doesn't improve your guess much, if it improves it at all.

As you mentioned, environmental factors can at times subsume the genetic factors (e.g. heritability of height can be subsumed with very low nutrition). So if environmental factors for the dataset that you're trying to predict from are significantly different from the factors which you used to determine heritability, then the heritability estimate may not be as accurate and it should be reassessed for the different factors.

You can still make very concrete predictions about traits based on heritability even though very different environmental circumstances could reduce how heritable the trait is. Heritability of IQ has been determined in an environment exceedingly similar to that faced by kids in the modern schooling system. Let's say IQ has been shown to be 80% heritable in circumstances not dissimilar to the US school system (as far as I am aware this is the current state of the art). Now if you want to predict the IQs of 20,000 parents of 10,000 US schoolchildren, you'll do 80% better if you know the kids' IQs than if you were just guessing randomly. Similarly, if you know Bob is smart you should update your prior estimate that Bob's parents are also smart significantly in favor of their intelligence.
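
A toy simulation (a Python sketch with made-up numbers, assuming a standardized trait, a purely additive model, and a hypothetical heritability of 0.8; my illustration, not anything from the original question) of the kind of parent-based prediction this licenses:

```python
import numpy as np

rng = np.random.default_rng(0)
h2 = 0.8   # assumed heritability of the hypothetical trait
n = 100_000

# Additive toy model: trait = genetic value + environmental noise,
# with genetic variance h2 and environmental variance 1 - h2.
g_mother = rng.normal(0, np.sqrt(h2), n)
g_father = rng.normal(0, np.sqrt(h2), n)
mother = g_mother + rng.normal(0, np.sqrt(1 - h2), n)
father = g_father + rng.normal(0, np.sqrt(1 - h2), n)

# The child's genetic value is roughly the average of the parents' genetic
# values plus segregation noise; its trait adds environmental noise on top.
g_child = (g_mother + g_father) / 2 + rng.normal(0, np.sqrt(h2 / 2), n)
child = g_child + rng.normal(0, np.sqrt(1 - h2), n)

midparent = (mother + father) / 2
slope = np.cov(child, midparent)[0, 1] / np.var(midparent)
print(f"regression of child on mid-parent: {slope:.2f} (close to h2 = {h2})")

# How much does knowing the parents shrink your guessing error?
blind_error = np.var(child)
informed_error = np.var(child - slope * midparent)
print(f"guess-error variance: {blind_error:.2f} blind vs {informed_error:.2f} using parents")
```

Under this model the regression of child on mid-parent has a slope of about h², so the more heritable the trait, the more knowing the parents' values (or, symmetrically, the child's) tightens your guess.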

Comment by frontier64 on Working With Monsters · 2021-07-25T17:40:42.736Z · LW · GW

Sticking out your neck is only a virtue if it ends up giving you greater expected utility than following the social norm. Sticking out your neck because you like the idea of yourself as some sort of justice warrior and ruining your entire life for it is the non-rationalist loser's choice.

The point of John's story is that both Red and Judge are better off working together than they would be if they fought, even though they strongly disagree on the scissor statement. Fighting would in effect be defecting even when the payoff from defection is lower than the payoff from cooperation. This is basically how all of society operates on a daily basis. It's virtually impossible to only cooperate with people who share your exact values unless you choose to live in poverty working for some sort of cult or ineffective commune.

What makes Judge and Red special is that they have a very advanced ability to favor cooperation even when they have a strong emotional gut reaction to defect. And their ability is much greater than that of the general populace who could get along with people just fine over minor disagreements, but couldn't handle disagreeing over the scissor statement.

Comment by frontier64 on Working With Monsters · 2021-07-25T17:31:09.674Z · LW · GW

It's an understanding that working together is better for both Judge's and Red's individual utility functions than is fighting against each other. Call it moral relativism if you want, but it's more accurate to call it a basic level of logical thinking. Rational moral absolutists can agree that it makes no sense for Judge and Red to fight and leave each other either dead or severely injured rather than work together and be significantly better off.

Comment by frontier64 on My Marriage Vows · 2021-07-24T23:26:04.802Z · LW · GW

Yes, I do think you should follow your vows to the letter even if your spouse is breaking them egregiously. I have strong feelings about this, but I'm not sure if I have a good explanation as to why. It's my general feeling that you really shouldn't be able to consider any sort of exit plan for a marriage. Of course you definitely do need an exit plan, but it shouldn't be something that you're aware of until it's necessary.

A marriage is different from a typical mutually beneficial contract. A marriage should partially realign the husband and wife's utility functions such that expected utility for one spouse counts for substantial expected utility to the other spouse. So unless your spouse is behaving so egregiously that you're losing enough expected utility from the marriage to put you below your disagreement point, violating your vows shouldn't come into play. But of course at that point you would be considering divorce anyway if you thought the situation couldn't be fixed while you remain in the marriage. I think that's the crux of it for me: if you don't have breaking your vows or divorce on the table you'll really try to fix whatever issues you have in the marriage (if there are issues) before you have to go nuclear.

As I've said I don't quite understand my own position in a straightforward sense so don't give it too much weight. I'm not sure if my explanation for why is really rational or just a rationalization.

Thanks for the post and congratulations!

Comment by frontier64 on My Marriage Vows · 2021-07-22T02:45:33.228Z · LW · GW

I think modeling yourselves as agents for the purpose of the vows is a good idea. It'll both reinforce agent-like behavior and form a stronger commitment between you and your spouse.

I have a couple of minor quibbles. For the Vow of Honesty I think you should keep the vow as it is in public, but privately commit to full honesty with your husband disregarding agreements with third parties. You should not be bound to keep a secret from your spouse even if it fits under the Vow of Concord and you were sworn to secrecy by a third party. If you are committed to honesty it should be a full, absolute commitment instead of a commitment with a very difficult-to-achieve exception. But third parties will be less likely to ever share information with you in confidence if you publicly commit to not ever keeping a secret from your spouse. Having a separate public/private vow of honesty gives you the best of both worlds.

I have 2 corrections for this line, "These vows are completely sincere, literal, binding and irrevocable from the moment both of us take the Vows and as long as we both live, or until the marriage is dissolved or until my [spouse]’s unconscionably[2] breaks [pronoun]’s own Vows..." Firstly I think that " 's " is a grammar mistake and it should just read "...or until my [spouse] breaks..." instead.

Also, I think that out (the escape clause) should be removed even if it's made grammatically correct. Allowing yourself to stop following your vows because your spouse willfully stopped following theirs is a little dangerous. It leads to situations where you might rather justify your own breach of the vows by pointing to their breach instead of trying to make things right. This is an issue in contracts sometimes, where one side wants to be able to prove the other committed a material breach so they have the insurance policy that they can cancel the contract whenever they want to. You would never want to be in a situation where you want your spouse to break their vows so you can feel ok breaking them yourself.

Comment by frontier64 on Is the argument that AI is an xrisk valid? · 2021-07-21T21:19:45.829Z · LW · GW

General intelligence doesn't require any ability for the intelligence to change its terminal goals. I honestly don't even know if the ability to change one's terminal goal is allowed or makes sense. I think the issue arises because your article does not distinguish between intermediary goals and terminal goals. Your argument is that humans are general intelligences and that humans change their terminal goals, therefore we can infer that general intelligences are capable of changing their terminal goals. But you only ever demonstrated that people change their intermediary goals.

As an example you state that people could reflect and revise on "goals as bizarre ... as sand-grain-counting or paperclip-maximizing" if they had been brought up to have them.[1] The problem with this is that you conclude that if a person is brought up to have a certain goal then that is indeed their terminal goal. That is not the case.

For people who were raised to maximize paperclips, unless they truly became paperclip maximizers, the terminal goal could have been survival, and pleasing whoever raised them increased their chance of survival. Or maybe it was seeking pleasure, and the easiest way to get pleasure was making paperclips to see mommy's happy face. All you can infer from a person's past unceasing manufacture of paperclips is that paperclip maximization was at least one of their intermediary goals. When that person learns new information or his circumstances change (e.g. "I no longer live under the thumb of my insane parents so I don't need to bend pieces of metal to survive"), he changes his intermediary goal, but that's no evidence that his terminal goal has changed.

The simple fact that you consider paperclip maximization an inherently bizarre goal further hints at the underlying fact that terminal goals are not updatable. Human terminal goals are a result of brain structure which is the result of evolution and the environment. The process of evolution naturally results in creatures that try to survive and reproduce. Maybe that means that survival and reproduction are our terminal goals, maybe not. Human terminal goals are virtually unknowable without a better mapping of the human brain (a complete mapping may not be required). All we can do is infer what the goals are based on actions (revealed preferences), the mapping we have available already, and looking at the design program (evolution). I don't think true terminal goals can be learned solely from observing behaviors.

If an AI agent has the ability to change its goals, that makes it more dangerous, not less so. It would mean that even the ability to perfectly predict the AI's goal does not let you assure it is friendly. The AI might just reflect on its goal and change it to something unfriendly!


  1. This paraphrased quote from Bostrom contributes partly to this issue. Bostrom specifically says, "synthetic minds can have utterly non-anthropomorphic goals-goals as bizarre by our lights as sand-grain-counting or paperclip-maximizing" (em mine). The point being that paperclip maximizing is not inherently bizarre as a goal, but that it would be bizarre for a human to have that goal given the general circumstances of humanity. But we shouldn't consider any goal to be bizarre in an AI designed free from the circumstances controlling humanity. ↩︎

Comment by frontier64 on How to Ignore Your Emotions (while also thinking you're awesome at emotions) · 2021-06-19T19:35:50.525Z · LW · GW

It seems like there's something missing here and I don't know how to add it. You make your childhood behavior of not being upset over things sound bad through framing, but you don't offer many (or maybe any) examples of it being ineffective. You mention that more recently you've been experiencing a sense of general malaise on the weekends, but the extent of that problem isn't clear, nor is it obviously linked to the fix-it mentality. Many people have malaise on the weekends and sometimes that's just because they're tired from the week and need to recuperate. I don't think moving away from a major life strategy is a good response to experiencing weekend malaise unless you have a very good reason to believe they're connected.

I only make this comment because I too practice the "fix it or stop complaining about it" method and don't find many problems with it. I don't think the angry parent slapping their kids framing is accurate. Stop complaining doesn't mean mentally slap yourself every time a negative emotion comes up. It means OODA loop a bit, and if you realize fixing the problem is going to be worse overall than not fixing the problem and suffering the consequences, suffer the consequences lightly because complaining will make you feel worse. Kid comes up to their parent and says, "I'm hungryyyyyyyyyyyyyyy."
"Ok, well when was the last time you ate? Can you get a snack here?"
"No we're in the car and I just ate our last snack."
"Well would it be better for us to take a 15 minute detour and get some more snacks or suffer the hunger a little bit and eat a nice meal in 30 minutes at home?"
"That's right, I'll wait until we get home."
This framing is more in line with how I view "Fix it or stop complaining about it."

I think this post would greatly benefit from explaining how "Fix it or stop complaining about it" didn't work for you. Maybe you have in later writings, but I'm not quite sure how to find them because I don't see any relevant pingbacks.

Comment by frontier64 on What will 2040 probably look like assuming no singularity? · 2021-05-23T15:12:49.466Z · LW · GW

Mandated Gene Therapy

We're trending towards health and medical decisions being looked at from a societal perspective rather than on the individual level.[1] People who use alternative medicine are increasingly shamed not only for the effect their choice has on their own health, but for the effect it has on the health of others and the financial burden it puts on the medical system.[2] Medical interventions later on are more costly, so those four months you spent trying herbal remedies hurt everybody who has to pay for your medical treatment. Refusing a vaccine not only increases the burden the medical system will bear taking care of you, but increases the risk that others will also get infected.

Gene therapy, specifically editing the genes of newborns, is the archetypal preventative medical procedure. Parents who have a baby they know will more than likely have a genetic disease, and will likely be an extra burden on the medical system, will be shamed for that decision, and the solution will be gene therapy.

That shame will be turned into laws. The natural extension of gene therapy laws aimed at preventing genetic diseases with a known high likelihood will be gene therapy to prevent speculative risk, and then merely possible risk.

Privately-Owned Nukes

Honestly this doesn't even require improvements in nuclear tech. The only necessary ingredient is a couple of smart people joining a terrorist organization that wants to cause mass destruction and has the disposable resources of a small business. The design of nuclear bombs is freely available online; the actual engineering process is more arcane, but still learnable. The hardest part of the process is acquiring enough weapons-grade uranium or plutonium. But even those can be made from scratch with access to a mine (even though spy movies always focus on the terrorists stealing their nuclear material). So my first lemma is that even though it hasn't happened yet, it's pretty easy for a small group to create a nuclear bomb.

What's been holding private nuke construction back is a lack of impetus and general ineffectiveness of terrorists. But that's not a real bar to the end result. Over time there likely will be a statistical outlier terrorist organization that has a few smart people and the desire to construct nuclear bombs. And for them it will be easy.


  1. Taxpayer-funded healthcare is the norm. Politicians talk about the opioid crisis and blame doctors for over-prescribing, people protest drug companies because they raise prices too high, and a few national and international organizations have been setting the global policy on infectious disease handling for over a year now. ↩︎

Comment by frontier64 on What will 2040 probably look like assuming no singularity? · 2021-05-18T09:41:53.778Z · LW · GW

The constant improvements in nuclear tech will lead to multiple small terrorist organizations possessing portable nuclear bombs. We'll likely see at least a few major cities suffering drastic losses from terrorist threats.

Gene therapy will be strongly encouraged in some developed nations. Near the same level of encouragement as vaccines receive.

Pollution of the oceans will take over as the most popular pressing environmental issue.

Comment by frontier64 on Your Cheerful Price · 2021-02-15T06:01:25.830Z · LW · GW

I think many people view friendship as a form of alliance. Ally friends perform favors for each other as a way to tie tighter bonds between them and signal that their goals are aligned. I want to bake you a cake for exactly $0 because baking a cake will help you, and I want what's best for you, so helping you directly helps me. So in the future, after I bake you your cake, you of course will drive me to the airport because that would help me and you want what's best for me, right? It's not a direct scratch-my-back-and-I'll-scratch-yours exchange of favors; it's developing a strong alliance between our interests. We can then rely on that alliance for mutual assistance in the future. The two most common dangers ally-friends are on the lookout for are 1) over-reliance by their friend; and 2) mere burden shifting from their friend.

  1. Over-reliance is when Bob always asks his lawyer friend Alice for legal advice and for her opinion on complicated topics. Alice spends hours of her time (that she could otherwise use to bill $400/hour) on these favors, yet Bob doesn't provide her even half of the value that she gives him. Bob's reliance on Alice is still efficient, since it's much easier for her to do the legal research than for him, but Bob is not putting in enough to get what Alice is giving him. Alice will eventually grow resentful of Bob and stop doing favors for him entirely.

  2. Burden shifting is when Alice and Bob are both friends of equal cooking ability yet Alice still asks Bob to cook her cakes. The amount of effort expended by either to make the cake is exactly the same so Alice having Bob cook is no more efficient for the alliance than if she cooked the cake herself. Bob notices this and asks why Alice doesn't cook the cake herself. If Alice can convince him that somehow it is more efficient for Bob to cook the cake the alliance can continue. If Bob can't be convinced he will stop cooking cakes because why the hell was he even cooking them in the first place?

But attempting to pay an ally friend for their favors is a whole other unexpected issue that can even seem like betrayal. Ally friends would dislike your way of offering them money in exchange for a favor because that would imply that when they seek a favor from you, you would expect money in return! Then to them there never was any alliance between you at all. From their perspective, you offering them money in exchange for a favor is tantamount to admitting that you were actually just pretending to be their friend the whole time.

Comment by frontier64 on How do I improve at being strategic? · 2021-02-07T01:46:46.343Z · LW · GW

I'm glad you appreciate the advice. It seems to me that you've developed a very effective, structured way to improve your productivity and I'm going to try to emulate your strategy here with a few upcoming projects I have to work on and see how efficient I'm being.

Comment by frontier64 on The 10,000-Hour Rule is a myth · 2021-02-05T15:24:03.196Z · LW · GW

I find this to be a severely lacking refutation of Gladwell's point. The main argument is that Ericsson, who collected the data which Gladwell cites, disagrees with his point. Seeing that the average expert has 10,000 hours of practice in their field, a reasonable conclusion is that you should try to practice 10,000 hours if you want to become an expert. Just because Ericsson disagrees with that doesn't mean it's not a perfectly reasonable conclusion.

Comment by frontier64 on How do I improve at being strategic? · 2021-01-21T18:19:23.476Z · LW · GW

The first step that Anna points out is "Ask ourselves what we're trying to achieve" or in other words, know your goal. Since you have a desire to be more strategic you probably already have a goal in mind and realized that being more strategic would be an effective subgoal. From the rest of your post I think you've substantially worked on some of the other steps as well.

If you're struggling to fulfill the rest of the steps Anna laid out, my recommendation is to just do things which may work towards achieving your goal that are very outside your comfort zone. That will pull you out of your pre-existing habits and get you to start evaluating different strategies instead of continuing to follow the strategy you've already worked yourself into.

If you're a procrastinator, start working on something that's a long-term goal immediately, for at least a few hours without breaks, even if you start to think it might not be effective. If you think it's not effective, that may be because of akrasia taking over once you actually start working on it.

If you are fearful of offending people, go to an online or in-person marketplace, start low-balling people with ridiculous offers, and continually press them to make a deal favorable to you. Make the situation uncomfortable enough and you'll realize you have the ability to deal with the social awkwardness when you're trying to work towards your goal.

This is Anna's step e and I encourage working on this step because from your post it seems like you've already put good work into everything that comes before it.

My bad if this is more tactics than the strategy tips you were looking for.

Comment by frontier64 on Saying "Everyone Is Biased" May Create Bias · 2021-01-21T16:57:06.154Z · LW · GW

This formulation of evidence completely disregards an important feature of Bayesian probability, which is that new evidence incrementally updates your prior based on the predictive weight of the new information. New evidence doesn't completely eradicate the existence of the prior. Individual facts do not screen off demographic facts; they are supplementary facts that update our probability estimate in a different direction.
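
A toy numeric sketch (in Python, with made-up numbers) of what that looks like: the demographic fact sets the prior, and the individual fact supplies a likelihood ratio that shifts the estimate rather than replacing it.

```python
# Made-up numbers, purely illustrative.
likelihood_ratio = 4.0  # how much more likely the individual fact is if the
                        # trait holds than if it doesn't

for prior in (0.05, 0.30):          # two different demographic base rates
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    posterior = posterior_odds / (1 + posterior_odds)
    print(f"prior {prior:.2f} -> posterior {posterior:.2f}")

# prior 0.05 -> posterior 0.17
# prior 0.30 -> posterior 0.63
# The same individual fact moves both estimates, but the demographic prior
# still matters: it gets updated, not screened off.
```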

Comment by frontier64 on RationalWiki on face masks · 2021-01-15T18:36:27.081Z · LW · GW

Your point would be correct if the recent bans were about hate speech and calls to violence. The claim that recent bans were solely about hate speech and calls to violence, however, is factually incorrect, and therefore your point is wrong. The most popular banned topic of discussion is the validity of the 2020 election, an epistemological question. Very nonviolent and non-hatey figures such as Ron Paul are banned without any stated reasons.

Comment by frontier64 on The 4-Hour Social Life · 2020-12-30T04:22:02.827Z · LW · GW

Easier solution: wait until a person who is following Isusr's strategy weeds you out and bam, you have your equally extraordinary match. The only failure states are when Isusr's strategy doesn't manage to distinguish the extraordinary people they're looking for from everyone else, or when you're not extraordinary.

Comment by frontier64 on How to reliably signal internal experience? · 2020-12-28T05:55:23.559Z · LW · GW

I think knowing about the actual object-level problem here would help in crafting a suitable solution. My main question is: why are you informing your friends that you're at your limit?

Are you participating in some group activity (e.g. going to the gym) that you feel you have to drop out of? If so I strongly recommend just working through the pain until what's stopping you is no longer pain winning over willpower but physical incapability to proceed. At that point you don't even need to tell your friends you're at your limit because no matter what you're going to flop to the ground unable to continue with the activity. You clearly want to do the group activity, because you haven't even posited quitting as an option, so rely on your decision to do the group activity and trust that you're not going to cause any lasting harm to yourself by working through the pain.

If you're not participating in a group activity (e.g. you had to take off sick from work and you told your friends about it the next day), I see good reasons to not inform your friends that you're at your limit at all. You know what their expected response is, and you don't think that expected response is helpful. So you might as well just not go through the routine that will give you the bad response.

Comment by frontier64 on The Power to Demolish Bad Arguments · 2020-12-26T22:11:43.413Z · LW · GW

I don't understand your usage of the term "hanging a lampshade" in this context. I don't think either Steve's or Liron's behavior in the hypothetical is unrealistic or unreasonable. I have seen similar conversations before. Liron even stated that Steve was basically him from some time ago. I thought hanging a lampshade is when the fictional scenario is unrealistic or overly coincidental and the author wants to alleviate reader ire by letting them know that he thinks the situation is unlikely as well. Since the situation here isn't unrealistic, I don't see the relevance of hanging a lampshade.

If the article should be amended to include pro-"Uber exploits drivers" arguments, it should also include contra arguments to maintain parity. Otherwise we have the exact same scenario but in reverse, as including only pro-"Uber exploits drivers" arguments will "automatically [...] generate bad feelings in people who know better the better arguments". This is why getting into the object-level accuracy of Steve's claim has negative value. Trying to do so will bloat the article and muddy the waters.