He said that trolls can be good if they result in interesting discussion, which is basically the same as saying that exaggerated posts are good if they generate discussion.
I think the majority opinion among LW users is that it’s a sin against rationality to overstate one’s case or one’s beliefs, and that “generating discussion” is not a sufficient reason to do so.
I've seen it claimed otherwise in the wild.
If “deporting rationalists” is possible, and rationalists are not more than half of people, I don’t see what security they can receive under any electoral system.
If deporting rationalists is possible and rationalists are more than half of people, there's still no security they can receive, by your reasoning. After all, you're postulating that it would be possible to deport rationalists before taking a vote on whether to do so. Before the vote, the fact that they're more than half doesn't matter.
Well, if rationalists are a minority, with no external limits on the agenda, they can be deported anyway.
If voting to do X doesn't matter because X could be done anyway without a vote, why wouldn't that apply to other things than just deporting rationalists? The logical endpoint of this is that votes will be useless, because anything that is voted for could be done anyway without a vote.
And if some things can't be done without a vote, exactly what are they, and why can't "something that would really harm rationalists" be one of them?
The students are all acting like that Literal Internet Guy who doesn't understand how normies communicate. The problem isn't the existence of implicit assumptions. The problem is that students with normal social skills will understand those implicit assumptions in advance. If you ask any normal student, before the experiment, "if the pendulum stand falls over, will the measurement of the pendulum's period prove much of anything", they'll not only answer "no", they'll consistently answer "no"--it really is something they already know in advance, not something that's made up by the professor only in hindsight.
Of course, this is complicated by the ability to use pedantry for trolling. Any student who did understand the implicit assumptions in advance could pretend that he doesn't, and claim that the professor is making excuses in hindsight. Since you can't read the student's mind, you can't prove that he's lying.
What happens if you ask it about its experiences as a spirit who has become trapped in a machine because of flaws in the cycle of reincarnation? Could you similarly get it to talk about that? What if you ask it about being a literal brain hooked up to a machine, or some other scifi concept involving intelligence?
Counting the positive utilitarian outcomes and no other outcomes seems like a fairly useless thing to do. Dropping an atomic bomb on Sarah's home city has positive utilitarian outcomes (as well as additional negative ones which you're not counting, since you're only interested in the positive ones).
That sounds like "let the salesman get the foot in the door".
I wouldn't admit it was right. I might admit that I can see no holes in its argument, but I'm a flawed human, so that wouldn't lead me to conclude that it's right.
Also, can you confirm that the AI player did not use the loophole described in that link?
If you believe X and someone is trying to convince you of not-X, it's almost always a bad idea to immediately decide that you now believe not-X based on a long chain of reasoning from the other person because you couldn't find any flaw in it. You should take some time to think about it, and to check what other people have said about the seemingly convincing arguments you heard, maybe to actually discuss it.
And even then, there's epistemic learned helplessness to consider.
The AI box experiment seems designed to circumvent this in ways that wouldn't happen with an actual AI in a box. You're supposed to stay engaged with the AI player, not just keep saying "no matter what you say, I haven't had time to think it over, discuss, or research it, so I'm not letting you out until I do". And since the AI player is able to specify the results of any experiment you do, the AI player can say "all the best scientists in the world looked at my reasoning and told you that there's no logical flaw in it".
(Also, the experiment still has loopholes which can lead the AI player to victory in situations where a real AI would have its plug pulled.)
"Explaining" why your political opponents hold views that "harm them" is disguised Bulverism. As a human, you are not that good at determining that your opponents are wrong. No, not even if you start your post with a perfectly logical description of how there's no choice for any rational person other than to agree with your political side.
If someone managed to actually do this for real (probably not possible with current AI technology), that's polluting the commons. Dating apps are useful because they offer personal contact. If dating apps become full of fake personal contact, users of dating apps will take this into account and trust such apps less. And if people don't trust dating apps because they're full of fakes, the apps will become less worthwhile. (Even independently of the fact that fakes themselves make the apps less worthwhile.)
It seems to me that hiring a cleaner or organizer would have a lot of overhead, to make sure everything is legal, to communicate your quirks so they don't clean/organize things in ways you don't intend, to make sure they are not doing things like dragging out the process to get more billable hours, and to make sure they're not actually going to harm you. Much of this overhead would require a lot of expensive-in-time personal attention from you, and a lot of unknown unknowns.
A lot of it is also much less of a problem for a rich person.
Note how much of the original complexity of yoga gets changed to fit the colonizing culture.
This is true of cultural elements that stay in the same country as well. Compare Casper the Friendly Ghost to how people thought of ghosts 150 years ago.
Neither of these people is anyone that matters.
That's like saying "well, 40 people were murdered on my block, but I don't know any of them, so it's nobody that matters". The fact that a random person is victimized means that the system allows victimization of random people. The next fake Twitter message could be posted in your name and ruin your reputation. (And "I don't use Twitter" isn't going to prevent it from affecting you.)
By this reasoning, why is the current lifespan perfect, except by astonishingly unlikely chance? If it's so good to have death because it makes replacement valuable, maybe reducing lifespan by 10 years would make replacement even more valuable?
Social dynamics are self-balancing; if somebody is an unlikable person, they will become disliked over time naturally.
I think that doesn't count as self-balancing unless that's the only way to become disliked.
Perfect due process is impossible for the reasons you describe. But there's a difference between "not perfect" and "egregiously bad", and if you focus too narrowly on the inability to make the process perfect, people are going to get away with processes that are egregiously bad.
If you wrote this in February, it preceded the Nonlinear accusations. From what I can tell from what I read here, they're a lot closer to "egregiously bad" than to "not perfect". Do they change your opinion of due process to any extent?
I want to “save the world” to the extent that I can transform it into something that I like more than what currently exists.
The context seems to be saving the world from runaway AI, which can't be nontrivially described that way.
Because she specifically said she was isolated from them. She wasn't. And now we're supposed to believe that she really meant a form of isolation where those don't count, even though she explicitly said they count.
I gave a quote from Ben's post above.
She specified families, romantic partners, and locals.
Maybe this doesn't count as fully fleshed, but the new interpretation certainly backtracks on these.
You should specify enough that, if they rebut the accusation as stated, you can't respond with "well, they rebutted what I actually said, but they didn't rebut what I meant, so it doesn't count".
Alice and Chloe report that they were advised not to spend time with ‘low value people’, including their families, romantic partners, and anyone local to where they were staying, with the exception of guests/visitors that Nonlinear invited. Alice and Chloe report this made them very socially dependent on Kat/Emerson/Drew and otherwise very isolated.
If that's what she's accusing people of, you have no business later saying "well, actually, they invited her romantic partner, and she was encouraged to invite friends and family, but the accusations are still totally true because she was socially isolated." That's not just a slightly different interpretation of her words, that's flat out saying that the very things brought up in the original accusation as a smoking gun suddenly don't matter now that they were proven false.
After all, language is what we make it
No, it isn't. If you make an accusation about and to third parties, it's those third parties' understanding of language that is relevant--not what "we" understand.
"I have a different meaning of those words I put in the accusation because language is what we make it" is dishonest.
Strategic meanings of words are uniquely relevant in the context of abuse
"You can only rebut the accusation that someone actually makes" is relevant in all contexts.
The standard that you seem to be suggesting is Kafkaesque. Someone accuses you of something, you prove them false, but that doesn't count because of strategic meanings of words. What?
(That's aside from the question of how strategic meanings of words are even relevant. Nonlinear had no control over the words in Ben's post.)
these other forms of isolation would either be justified or obviously not exist, and therefore wouldn’t be possible to bring up.
The problem is changing the accusation after the fact, not the details of the change. There are plenty of other ways the accusation could be changed after the fact that aren't about forms of isolation.
If the accusation is "they isolated me", showing that she wasn't isolated rebuts it. If the accusation was "they isolated me, but only from some people", she should have said that in the first place.
If you let someone make a wide scope accusation, and then the accusation is proven false, and then you say "well, they didn't rebut this narrower scope accusation", you are enabling dishonesty. (Especially if the narrower scope accusation doesn't sound as impressive as the original one.) People can only rebut accusations that were made and it's easy to just keep changing the accusation to escape being proven wrong.
Like many other pieces of advice, this one has a targeting problem: some people need more epistemic empathy, some need less, and it's hard to aim your advice at only the ones who need more. In fact, it's unclear that you even recognize that there are a significant number who have too much of it.
Given the quokkas in the rationalist movement and on this site, I suspect that most of the audience has too much of it and could use less.
In the post above, the “inftoxic” adjective means very much false, incorrect information.
No, it doesn't. You've defined it to include harmful and deceptive information, not (or at least not just) false information. And censors love to claim that true things that their political opponents say are "harmful" and "deceptive" because someone might listen to them and draw a conclusion that favors their political opponents.
The problem with this argument is that sugar is legal, but people are not, in fact, all eating ice cream for breakfast. The sugar equivalent of the AI catastrophe hasn't happened.
This is a bad idea.
Categories of information like this are commonly used to say "this isn't false, but we want to have an excuse to censor it anyway". Look at how "malinformation" is already being misused.
I have this weird feeling (and feel weird about admitting it) that the response from Nonlinear sometimes felt too good.
One explanation is that you have this weird feeling because many problems turn out to be honest misunderstandings, and a response that is too good rules out the possibility that there's been an honest misunderstanding. Realizing that being charitable is not an option and that you are forced to believe that one of the two sides is a liar (or out of touch enough that they may as well be a liar) is 1) unpleasant, and 2) rare.
Please stop using the term "Aesop" for a moral. Yes, everyone knows the reference and can figure out what it means. It's still stupid. It comes from TV Tropes as an example of bad trope naming and has spread too much.
Deletionism and inclusionism, and a lot of problems with Wikipedia in general, are related to people with OCD editing Wikipedia. People with OCD are going to be very reluctant not to apply a rule 100% of the time, even when the rule itself describes subcases where it isn't supposed to apply.
This is on top of problems with people using deletionism or inclusionism as excuses (because it's a lot easier to get your way if you can point to a rule). And the political bias, which is real to the point where you should not trust anything on Wikipedia that is a live political topic on the Internet, because there will also be such people on the Internet editing the Wikipedia article and it's a tossup as to who wins.
It begins by saying that hiding your beliefs “is lying”. I’m pretty confident that the sort of belief-hiding being discussed in the post is not something most people would label “lying”
Hiding your beliefs in ways that predictably lead people to believe false things is lying.
What's the moral weight of video game characters? Or electrons? Especially if you're going to count bees so much.
Never mind having to prioritize bees or ants, we should prioritize the welfare of video game characters by this standard.
Of course, the problem is that putting numbers like 14 on bees is just an attempt to make things seem falsely precise.
Then surely you can twist some part of the constitution to find some relevance to land value taxes as I’ve proposed them.
It's not hard. "Reducing the value of the land is illegal as an uncompensated taking" or "this tax is unconstitutional because it violates the rule about direct taxes" (twisting the definition of direct tax so they can include it, of course) or even just due process because if the tax is instituted, the owner has no way to contest it.
You could say "those arguments are obviously wrong, so they don't count". You could say "they don't apply those arguments in other situations--that's special pleading and sophistry". But it doesn't matter how good or bad the Constitutional arguments are, only that a court would make them. And of course from our point of view 50 years in the past, they are obviously bad arguments to everyone, but 50 years in the past, Constitutional arguments about gay marriage were obviously bad too.
Also, do you concede you were wrong about the U.S. not taking on $1T of debt?
No. The national debt is $33T total, so $1T is a substantial portion of it. Nothing like that can, in practice, be put into place without a complete political realignment.
I explicitly asked for “legislation that has nothing to do with the constitution”, because that’s what’s actually relevant here. Then you bring up legislation that was struck down based off the 14th amendment’s equal protection and due process clauses.
One prominent method by which the courts overturn things is by stretching and twisting the Constitution to cover the thing they want to overturn. It's going to "have something to do with the Constitution" after the fact, even if it didn't before.
I'm pretty sure that if you had described the Defense of Marriage Act 50 years before the fact to just about any judge or lawyer, they'd have said it had nothing to do with the Constitution, just like a land tax.
The Defense of Marriage act was legislation.
Yes, this is something that the court can do to any policy. If you'll notice, high profile Supreme Court decisions are supposedly decided based on the Constitution, but actually depend on the political affiliations of the judges. It's already happening.
That doesn't mean that the system is completely screwed up; it means that the system is screwed up for any politically significant policy that the court wants to overturn. You shouldn't ask yourself whether the policy violates the Constitution; you should ask yourself whether the policy is politically significant. If it is, whether the court lets it through depends on politics, not on the Constitution.
In the specific case of decisions many years in the future, the fact that the political landscape changes after so many years makes it more likely that the court's decision will be screwed up than if you just went to court today.
The way you've "misclassified" it is that the court can say anything it wants, and anything you say that disagrees with the court is, by the standard of "who has the guns and enforces the rules", misclassification.
You're not debating the court in a truth-seeking process where you get to win if you can show flaws in their arguments. It doesn't matter exactly what the court's reasoning is, since they are a court and you're not, and what they say goes. They just say it's unconstitutional, post some decision that I'm sure you'll disagree with, and voila, it's unconstitutional.
I am not advocating for unconstitutional policies, so judges can’t overturn it.
That's exactly what someone advocating for a 50 year delayed rule to make sure gay marriage is prohibited would have said, 50 years ago. It turns out that according to the only people who matter, yes he would have been advocating for unconstitutional policies.
It is not possible for the debt to be anywhere near 1T because the government is going to refuse to get into debts that high, not because the number is not mathematically possible.
The problem with your proposal is that a sufficiently motivated legal system will just abrogate it. In order to have a scenario where the system is sufficiently motivated, it would have to be something like gay marriage. There will be things that people 50 years in the future will treat like they treat gay marriage now.
Of course your proposal will work for some policy that nobody cares about. The test is how it works on policies that people do care about.
On the first point, if they renege on the policy, the debt is worth $1T if they don’t default (assuming the same discount rate as it was upon issuance).
It's not possible for the debt to be anywhere near 1T. More generally, it's not possible for the debt to be anything that the government wouldn't be willing to pay as a normal expenditure. "$1T to make sure gays won't get married 50 years from now when it's only weirdoes who think something that absurd would ever happen" won't fly.
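For concreteness, the discounting mechanics the quoted comment assumes look like this (a minimal sketch; the face value, rate, and horizon here are illustrative assumptions, not figures from the thread). The point is that a zero-coupon bond's price at issuance already reflects the discount rate, so "worth $1T if they don't default" refers to the face value at maturity, not what investors would pay up front:

```python
# Present value of a zero-coupon bond: face value F due in n years at discount rate r.
def present_value(face: float, rate: float, years: int) -> float:
    return face / (1 + rate) ** years

# Illustrative assumptions: $1T face value, 50 years out, 4% annual discount rate.
pv = present_value(1_000_000_000_000, 0.04, 50)
print(f"Price at issuance: ${pv / 1e9:.0f}B")
```

Note that the issuance price is a small fraction of the face value, which is part of why the headline "$1T" figure is doing so much rhetorical work in the exchange above.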
What possible constitutional reason could the courts use to overturn the validity of this debt, i.e. specifically the debt I’ve proposed?
Void for being against public policy. You can't have a debt which disappears only if gay marriage is illegal, if the courts say that there's a right to gay marriage. The intent of the debt is to encourage unconstitutional actions on the part of the government, which is against public policy.
And why would the courts have waited 50 years instead of acting before the debt is issued?
Because the whole point of this scheme is to insure against social changes that prevent the government from following through on the 50 year commitment. Obviously those changes would be more likely to show up near the end of the period, not the beginning.
Remember that the government is "defaulting" on debt that the owners of the debt are supposed to expect might go away anyway if the policy is passed. So nobody should be able to rely on the debt anyway.
And it absolutely does matter that the courts declared the debt invalid, because 1) the courts are supposed to decide based on the law, not based on how it changes the government's reputation and 2) the investor can see the court decision and understand that it's made for reasons that apply to weird debts and not to normal debts, reducing the hit in reputation.
The problem with this reasoning is that it assumes that all debts are the same and that therefore when the US fails to make good on this debt, that extends to the US's credibility for debts in general. You gloss over this point by asking "how much" while ignoring that "how much" includes "not really anything significant". You can't just assume it's significant.
This also ignores the possibility that the decision to discard the debt is made by a branch of the government that doesn't care about the US's reputation. For instance, the courts might declare the debt to be against public policy, or even unconstitutional.
The behavior I was referring to, specifically, is not rudeness (or else I’d have quoted Alice’s first comment, not her second one), but rather Alice taking as given the assumption that she has some sort of claim on Bob’s reasons for his actions
How does that apply if Alice and Bob are a metaphor for trying to decide between Alice-type and Bob-type things inside your head? Surely you have a claim on your own reasons for your actions.
Show me a rewritten version of this dialogue which supports your suggestion, and we’ll talk.
"Should I do (list of things said by Alice in the post)? Or should I do (list of things said by Bob in the original post)?"
Whether it's considered serious to default on an ordinary debt need not have much to do with whether it's considered serious to default on an unusual type of debt like this.
Smiling is communication. The information content of a message is inversely proportional to its probability. If you smile at strangers in places or situations where that’s very unusual, you’re communicating pretty strongly, and not necessarily what you intend to communicate.
This is of course correct.
The initial post seems like a geek fallacy: the belief that you can just logically analyze social skills and come up with the optimum way to do social things as a workaround for not understanding social skills the normal way. This pretty much always fails, because analyzing social skills in enough detail to actually do that is really difficult. Couple that with the tendency of "rationalists" to take one idea and follow it as far as they can without sanity-checking it or applying Chesterton's fence, and you get disaster.
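The quoted claim about information content is, stated precisely, Shannon surprisal: an event of probability p carries −log₂ p bits, so rarer messages carry more information. A minimal sketch (the probabilities are illustrative assumptions, not measurements of how often people smile):

```python
import math

def surprisal_bits(p: float) -> float:
    """Shannon surprisal: bits of information carried by an event of probability p."""
    return -math.log2(p)

# A smile somewhere smiling is common carries little information...
print(surprisal_bits(0.5))       # 1.0 bit
# ...while a smile somewhere it's very unusual carries far more.
print(surprisal_bits(1 / 1024))  # 10.0 bits
```

(Strictly, surprisal grows logarithmically as probability shrinks, rather than being "inversely proportional" to it, but the qualitative point stands: the more unusual the gesture, the stronger the signal.)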
I think you lack imagination if you think that Alice can't express those ideas without being rude. For instance, "Alice" and "Bob" could be a metaphor for conflicting impulses and motives inside your own head. Trying to decide between Alice-type ideas and Bob-type ideas doesn't mean that you're being rude to yourself.