I’ve spent endless hours on message boards for [...] Slatestarcodex readers; neither of which ever really discuss [...] Scott Alexander’s writing, but rather, are just hubs for the types of oddballs who like [...] Slatestarcodex to talk about stuff that people with these personality traits like to talk about.
From the other side, this probably also explains why I don't like the SSC/ACX related message boards.
ACX has a much wider audience than LW, so "the kind of person who reads ACX" reduces to something like "an intelligent person who identifies as a contrarian and enjoys reading long texts", which may be a group that happens to include me, but it also includes many people I prefer to avoid.
I like the fact that Scott writes about different topics, but the downside is that now none of those topics works as a hard filter. For example, whenever Scott directly or indirectly mentions effective altruism, some people are going to write in the comments how the entire idea is stupid. (That irritates me a lot; even if I am not an EA myself, that doesn't mean I am a fan of conspicuously talking smack about altruism in general.) So why do they keep reading the blog? Because there are also many articles on other interesting topics. So if you visit the message board, you will still find those people, but you won't find Scott there to balance their negativity.
Offline ACX meetups are okay though. Apparently being able to walk away from the computer is a hard filter.
I think the people who say such things don't really care, and would probably include your advice in the list of quotes they consider funny. (In other words, this is not a "mistake theory" situation.)
EDIT:
The response is too harsh, I think. There are situations where this is useful advice. For example, if someone is acting under peer pressure, then telling them this may provide a useful outside view. As Asch's conformity experiment teaches us, the first dissenting voice can be extremely valuable. It just seems unlikely that this is robosucka's case.
The traditional technology used for similar purposes in some cultures is alcohol. The idea is that as alcohol impairs thinking, it impairs the ability to lie convincingly even more. Especially considering that even if one drunk person lies successfully to another drunk person, the next day the other person can reflect on the parts they remember with a sober mind.
Thus, alcohol is an imperfect lie detector with a few harmful side effects; and in cultures where it is popular, groups of friends do this together, and conspicuously avoiding it will provide evidence against your sincerity.
If I were ever unsure whether I could trust the word of a friend on an important matter, I'd think that would represent deeper issues than a mere lack of information a scan of their brain could provide.
Friendships exist on a scale. If you switch from "a stranger" to "100% trusted person" too quickly, you probably have some unpleasant surprises waiting for you in the future. Also, friendship is not transitive, and sometimes you need to know whether you can trust a friend of a friend (even when your friend says "yes"). I know some people whom I trust, but I definitely do not trust their judgment about other people.
Here is another explanation, kind of:
The Taylor expansion of 1/(1+x)^2 is 1 - 2x + 3x^2 - 4x^3 + 5x^4...
When x = 1, it means that 1 - 2 + 3 - 4 + 5... = 1/4
But 1 - 2 + 3 - 4 + 5... can be written as 1 + 2 + 3 + 4 + 5... - 2×2 - 2×4 - 2×6...
= 1 + 2 + 3 + 4 + 5... - 2×(2 + 4 + 6...)
= 1 + 2 + 3 + 4 + 5... - 2×2×(1 + 2 + 3...)
= (1 - 2×2) × (1 + 2 + 3...)
= -3 × (1 + 2 + 3...)
So if 1 - 2 + 3 - 4 + 5... = 1/4, we get:
1/4 = -3 × (1 + 2 + 3...)
-1/12 = 1 + 2 + 3...
(Found here.)
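As a numeric sanity check of the first step, here is a minimal Python sketch (the function name is mine): the series 1 - 2x + 3x^2 - ... converges to 1/(1+x)^2 for |x| < 1, and its limit as x goes to 1 from below is 1/4, which is exactly the value the trick assigns to 1 - 2 + 3 - 4 + 5...

```python
# Minimal numeric check of the Taylor expansion step (a sketch; names are mine).
def alternating_series(x, terms=100_000):
    """Partial sum of 1 - 2x + 3x^2 - 4x^3 + ..."""
    return sum((-1) ** n * (n + 1) * x ** n for n in range(terms))

for x in (0.9, 0.99, 0.999):
    print(x, alternating_series(x), 1 / (1 + x) ** 2)
# Both columns approach 1/4 as x -> 1. At x = 1 itself the series diverges,
# which is why "1 - 2 + 3 - 4 + ... = 1/4" is an Abel-summation value,
# not an ordinary sum -- and why the -1/12 result is a trick, not a theorem.
```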
Yeah, I think it is okay to simplify things when someone puts an explicit disclaimer like "this is a simplification" or "this is not literally true, but it is an attempt to point in a certain direction".
But without such a disclaimer, I will assume "once clickbait, always clickbait", especially when the priors on people being stupid on the internet are so high.
If you want to experiment with alcohol, I would recommend trying it at home, with a trusted friend. Less social pressure, more pleasant environment, no need to solve the logistics of driving home, no problem if you e.g. start vomiting.
Decide in advance how much you want to try. Do not change your decision after you have started drinking. For example, if you choose that today you want to try one glass of wine, do that, but if after drinking the first glass you decide that it was okay and you can try another... don't! (Ask your friend to help you keep your commitments, and evaluate their reliability based on whether they actually do that.) The reason is that if you actually happen to be not okay, then your reasoning is untrustworthy. Sometimes, the more drunk people get, the more loudly they insist they are sober. (This is not a general rule; for example, I am quite aware how drunk I am, but... I have seen other people do exactly this, and they just can't be convinced.)
Keep some records for your future self? Ask your friend to record you doing some tasks, such as walking along a straight line, juggling, singing, explaining a math problem, doing some introspection about how you feel. So that you can compare how you felt at the moment, vs how it seemed from outside. (Sometimes people feel very creative or smart when they are drunk, but to those around them they are not.)
Do not drink too late in the evening; give yourself some time to get sober before you go to bed, maybe at least three hours. I don't have a car, so I don't know how much time it takes after drinking to be able to drive; I think 24 hours should be safe.
Alcohol strength from low to high:
- American beer
- non-American beer
- wine
- distillates (vodka)
Your reaction to alcohol probably depends on the kind of alcohol, on its amount, on your genetics, and on your previous exposure (exposure increases tolerance, but if you become a heavy drinker, it might also decrease it). As an American with no previous exposure, perhaps start with the American beer.
(My guess would be that the amount that obviously does something noticeable to you, but doesn't result in anything bad, could be 1 bottle of beer, 2 deciliters of wine, or 1/2 deciliter of vodka.)
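For what it's worth, here is the arithmetic behind that guess as a minimal Python sketch (the ABV percentages are typical values I am assuming, not measurements):

```python
# Rough pure-ethanol content of the three suggested starting doses.
# ABV values are typical assumptions (beer ~5%, wine ~12%, vodka ~40%).
drinks = {
    "beer, one 0.5 l bottle (~5% ABV)": (500, 0.05),
    "wine, 2 dl (~12% ABV)":            (200, 0.12),
    "vodka, 0.5 dl (~40% ABV)":         (50,  0.40),
}
for name, (volume_ml, abv) in drinks.items():
    print(f"{name}: ~{volume_ml * abv:.0f} ml of pure ethanol")
# All three land around 20-25 ml of ethanol, i.e. roughly comparable doses,
# which is why the three options are more or less interchangeable.
```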
Many alcoholic drinks are basically a mix of a distillate, water, and flavor. So their strength depends on the amount of alcohol, which can range from very low to very high. You can't guess the strength by the taste, because the taste mostly depends on the non-alcoholic parts of the drink. For example, a mix of vodka and the right kind/amount of fruit juice can result in a drink that tastes completely innocent and will knock you out before you even realize you were drinking something alcoholic.
Keep some water or other non-alcoholic drink at hand, so you won't drink more alcohol merely because you got thirsty (and too lazy/drunk to walk to the nearest water source).
It is generally recommended not to drink different types of alcohol at the same event. Not sure why, but it is one of those "it is known" things that most people follow. (My guess is that drinking different kinds of alcohol makes it more difficult to track intuitively how much you had? Like, a bottle of wine sounds like too much, but if you had a bottle of beer and a glass of wine and a little glass of vodka, then it still kinda sounds safe... or maybe it is not... and you do not have the mental capacity to figure it out at the moment.)
It takes maybe 5-30 minutes after drinking for the effect to appear at full strength. The effect is stronger on an empty stomach; weaker if you eat e.g. bacon before drinking vodka. Physically demanding activity, such as dancing, helps metabolize the alcohol faster. (If you drink alcohol and do a physically demanding activity, remember to also drink enough water.)
Different people react differently to alcohol. Some get aggressive, others get cuddly; some feel full of energy, others feel sleepy; some forget what happened, others remember everything perfectly. Some get addicted, others don't, not sure what makes the difference. Alcohol addiction is really bad!
As usual with drugs, most people who volunteer the advice are the ones you should not listen to (yes, I am aware of the irony), because obviously the ones with most experience are the addicts, and you do not want to do the things they consider okay. Also, people are fucking hypocrites about the drugs they like vs don't like, mostly based on peer pressure; for example most rationalists consider drinking alcohol stupid and low-status... and then they overdose on some drug that happened to be popular in the Bay Area, because someone told them it was high-status and expanding their intellectual experience or whatever. I prefer alcohol, but I can also go for months without it, so I guess I am okay.
Analogies can be found in many places. FDA prevents you from selling certain kinds of food? Sounds similar to ancient priests declaring food taboos for their followers. Vaccination? That's just modern people performing a ritual to literally protect them from invisible threats. They even believe that a bad thing will happen to them if someone else in their neighborhood refuses to perform the ritual properly.
The difference is that we already have examples of food poisoning or people dying from a disease, but we do not have an example of a super-intelligent AI exterminating humanity. That is a fair objection, but it is also clear why waiting to get the example first might be the wrong approach, so...
One possible approach is to look at smaller versions. What is a smaller version of "a super-intelligent AI exterminating humanity"? If it is "a stupid program doing things its authors clearly did not intend", then every software developer has stories to tell.
This is not the full answer, of course, but I think that a reasonable debate should be more like this.
I agree that in the long term, a seller's market is the answer (and in the era of AGI, keeping it so will probably require some kind of UBI). But the market is not perfect, so the ban is useful to address those cases. Sometimes people are inflexible -- I have seen people tolerate more than they should, considering their market position, which they apparently were not aware or sure of. Transaction costs, imperfect information, etc.
Seems to me that debates about (de)regulation often conflate two different things, which probably are not clearly separated but exist on a continuum. One is that people are different. Another is cooperation vs defection in Prisoner's Dilemma (also known as sacrifice to Moloch).
From the "people are different" perspective, the theoretical ideal would be to let everyone do their own thing, unless the advantages of cooperation clearly outweigh the benefits of freedom.
From the "Moloch" perspective, it would be best for the players if defection was banned/punished.
As an example, should it be okay for an employee to have a sexual relationship with their boss? From the "people are different" perspective, hey, if two people genuinely desire to have sex with each other, why should they be forbidden to do so, if they are both consenting adults? From the "Moloch" perspective, we have just added "provide sexual services to your boss and pretend that you like it" to the list of things that desperate poor people have to do in order to get a job.
And both these perspectives are legitimate, for different people in different situations, and it is easy to forget that the other situation exists (and to have this blind spot supported by your bubble).
Simply asking people about their genuine preferences is not enough, because of possible preference falsification. Imagine the person who desperately needs the job -- if you asked them whether they are genuinely okay with having sex with their boss, they might conclude that saying "no" means not getting the job. People could lie even if a specific job is not on the line, simply because taking a certain position sends various social signals, such as "I feel economically (in)secure".
But if we cannot reliably find out people's preferences, it is not possible to have a policy "it is OK only if it is really OK for you", and without an anonymous survey we can't even figure out which solution would be preferable for most people. (In the near future, an AI will probably compile a list of your publicly stated opinions for HR before the job interview.) So we are left guessing.
Steelmanning is not the Ideological Turing Test.
ITT = simulating the other side
SM = can I extract something useful for myself from the other side's arguments
With ITT, my goal is to have a good model of the other side. That can be instrumentally useful to predict their behavior, or to win more debates against them (because they are less likely to surprise me). That is, ITT is a socially motivated activity. If I knew that the other side would disappear tomorrow and no one would want to talk about them, ITT would be a waste of time.
With SM, my goal is to improve my model of the world. The other side is unimportant, except as a potential source of true information that may be currently in my blind spot. That is, SM is a selfishly motivated activity. Whether the other side approves of my steelman of them is irrelevant; my activity is not aimed at them.
SM is trying to find a diamond in a heap of dung. ITT is learning to simulate someone who enjoys the dung.
Low confidence here, but the causality seems to me the other way round.
abuses the AIs -> rationalizes "they deserve it, because they are low status" -> notices that his status, although higher than the AIs', is still lower than his colleagues' -> feels that he also deserves abuse
I honestly don't understand the mindset at all.
The mindset is mostly a rationalization. The underlying motivation is that punishing others feels good, because it signals that you have higher status than them.
I like how EA is a "controversial philosophical and social movement that...", but e/acc is merely a "philosophical and social movement advocating for...".
I guess, for some editor, EA is an outgroup, while e/acc is a fargroup.
I wonder how much of this is for PR reasons, and how much is something else... for example, the scalpers cooperating (and sharing a part of their profits) with the companies that sell tickets.
To put it simply, if I sell a ticket for $200, I need to pay tax on the $200. But if I sell the same ticket for $100 and the scalper re-sells it for $200, then I only need to pay tax on the $100, which might be quite convenient if the scalper... also happens to be me? (More precisely, some of the $100 tickets are sold to genuine 3rd party scalpers, but most of them I sell to myself... but according to my tax reports, all of them were sold to the 3rd party.)
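A toy version of that arithmetic, as a sketch with a made-up 20% tax rate (the exact rate doesn't matter; only the difference in reported revenue does):

```python
# Hypothetical 20% tax on the seller's reported ticket revenue.
TAX_RATE = 0.20

tax_if_honest = 200 * TAX_RATE        # sell directly to the fan for $200
tax_if_self_scalped = 100 * TAX_RATE  # report a $100 sale to "a scalper" (= yourself)

print(f"tax when selling at $200:   ${tax_if_honest:.0f}")
print(f"tax when 'selling' at $100: ${tax_if_self_scalped:.0f}")
# The $100 resale profit never appears in the seller's reported revenue,
# so $20 per ticket (at this made-up rate) escapes taxation.
```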
You could start your own blog on Substack, but then the problem will be how to find an audience. But posting a link somewhere in a comment section is easier than posting the entire article.
International law is just a very thin layer on top of the "law of the jungle". Czechoslovakia also had all kinds of guarantees in 1938, and they also turned out not to be worth the paper they were written on.
I mean, what the hell did you expect? If you deploy an agent to increase Twitter engagement, it is presumably going to do things that increase Twitter engagement, not things that are broadly ‘good.’ An AI has the objective function you set, not the objective function full of caveats and details that lives in your head, or that you would come up with on reflection. The goal is the goal.
I think they instinctively expect the power to buy distance from the crime. Their instinct insists that it should be possible to hire a robot servant, tell him to increase Twitter engagement, and when anything bad happens as a side effect, all blame should go to the robot servant, and they can distance themselves and say that they definitely did not want this to happen (even if in reality there was no other way to increase Twitter engagement).
I tried it, but I disagree with some wording.
For example, "the risks Nonlinear took were not worth it" suggests that there is a linear scale from low risk to high risk, and the problem with Nonlinear was that they chose a wrong point on that scale, but from my perspective the problems were quite different. I can imagine an organization taking even more risks that I would be okay with. Basically, the difference is between taking a calculated risk, and just being careless.
you should aid their growth, rather than trying to control it, and know that teaching is much more than just communicating facts.
Models are more meta than isolated facts. Creating models is more meta than memorizing them. (More meta can be overkill if you just need to do something once, but can be a great investment in the long term.)
In pedagogy: associationism vs constructivism. More generally: teaching to the test vs genuine curiosity. The paradox is that teaching to the test is known to produce bad results (even measured by the tests), but understanding how things work sometimes allows you to pass even difficult tests almost magically (from the perspective of those who don't bother to understand).
political culture encourages us to think that generalized anxiety is equivalent to civic duty.
This is a wonderful way to put it!
As I see it, the "generalized anxiety" is essentially costly signaling. By being anxious, you signal "my reactions to the latest political thing are important for the fate of the world", i.e. that you are important.
But if this was the entire story, you would be able to notice it and feel ashamed that you fell for such a simple trick. Reframing it as a civic duty keeps you in the trap, because it provides a high-status answer to why you are doing it, distracting you from realizing what you are doing.
(Metaphorically: "The proper way to signal high status is to keep pushing this button all the time." "I keep pushing the button and I feel great about it, but also my finger is starting to hurt a lot." "Don't worry; the fact that your finger hurts proves that you are a good and wise person." "Wow, now I keep pushing the button and my finger hurts like hell, and I feel great about it!")
When someone cuts you off in traffic, you might get angry, but their behavior, their cutting-off-other-people-in-traffic personality guarantees that you will never be stuck behind them, unable to go faster.
Unless he is the kind of person who cuts you off and then slows down, satisfied that he is now at the front of the line. (I am using "he" on purpose here, because it will almost certainly be a guy.)
My friend ended up in hospital in a similar way. He kept a safe distance from the vehicle in front of him. The guy behind him got impatient and pushed himself narrowly between them... and then had to hit the brakes because the first car unexpectedly slowed down for some reason... and my friend crashed into him (the weather was bad, the road slippery; he was trying to keep the safe distance for a very good reason).
Generalized life lesson: Some people actually care about their position relative to you. Avoid them, if you can.
So, someone volunteered to test the safety of the vaccine, in an extreme way. Thank you for advancing science!
Thank you, this is great!
I like the explanation that "guess culture" is not the same thing as "plausible deniability" (although it might seem so from the perspective of someone used to "ask culture"), because people used to "guess culture" actually can read each other's language quite clearly.
I also like the observation that if neither side wants to be a leader (if A+B < 100%), then the conversation may arms-race towards "guess culture"... and ultimately hit the limit of incomprehensibility.
Taking these two together, I guess it means that there are two different things that may both superficially seem like "guess culture":
- people who are native to the "guess culture", expressing all their preferences using subtle words
- people who navigated to "guess culture" territory as a consequence of not having strong preferences
The difference is that the latter will naturally revert to their native "ask culture" when someone starts having strong preferences again. The former will keep expressing their preferences mildly, and will get frustrated when you don't get it.
And there is a possibility of misunderstanding about the rules of the game, when one person is a native to the "guess culture" and the other just doesn't care, for now. They may seem to communicate well, until they suddenly don't (because the other person finds a preference for something).
Also, I would expect the "ask culture" to be more convenient for people on the autistic spectrum. I guess the normies should in theory be good at playing either game, but perhaps "guess culture" reduces conflict, because there is more space to escalate before things get ugly?
I agree with practically everything you wrote, but I think you chose a wrong website to publish this.
Not sure if I should explain -- I see that your account is new, but does it mean that you discovered this website now, or were you a passive reader for a long time and only registered an account to post this? Basically, it should be obvious that we don't do traditional political debates here.
Your style does not fit the audience. Generally, rhetorical techniques that are successful in other places are often a weakness here. To give you an example:
If one country can use military force to cross borders unchallenged, what prevents others from doing the same?
I understand the spirit of the statement, but from the technical perspective, the simple answer is that what prevents most countries from doing the same is their lack of nukes.
One important thing I learned when following the links was that some people learn about transsexuality before they learn about autism. (Probably many young people these days.) Which can have a big impact on how they evaluate their experience, because there are many things that could plausibly be interpreted as evidence in either direction, such as "I do not have preferences or behavior typical for my gender".
For example, there was a moment when I learned about autism, and it felt like it explained a few things: "oh, I don't like drinking beer and watching football because I am an autistic nerd". No more explanation needed.
I imagine that in a parallel reality where I somehow never heard about autism, but everyone talks about transsexuality all the time, it would make sense to take all divergence from stereotypical masculinity as evidence of being trans. (And fail to notice that I do not have stereotypically female preferences and behavior either? Well, maybe I'm in denial. Or maybe taking the right hormones is going to fix all of that.) This probably still wouldn't be enough to convince me that I am trans, but I would be quite confused.
It feels self-contained, which from my perspective is a good thing. The links are there, but they seem optional to understanding the main idea. (Though I may be biased, because I have already read most of them.)
Then we need a larger city and another English spelling reform to unleash the second industrial revolution!
More seriously, it seems to me that the problem of "not forgetting intellectual work" is mostly solved, but two more problems remain: (1) remove all the bullshit that threatens to drown the intellectual work by sheer persistence and quantity; and (2) make the intellectual work more accessible, reduce the unnecessary inconveniences of education and academia.
Christianity has a paradox in its heart, that an all-knowing and all-capable God created everything (directly or indirectly), and yet He is responsible only for the good parts of the creation.
The standard excuse is that the possibility to ruin everything was a necessary cost of our freedom, which doesn't make much sense, because (1) an all-knowing God could predict which humans would sin and which would not, and could create only the ones who would not sin, and (2) somehow it does not oppose the divine plan that human freedom is limited by a thousand other things anyway, such as other humans, sickness, mortality, limited resources.
Trying to use this incoherent response as a lesson in how we should shape the future... I guess we should give the future humans a random number generator, and tell them that for everything good that happens, they should thank us, and for everything bad that happens, they should blame the random number generator?
And perhaps, from a religious perspective, this even makes some sense, because we are keeping the door open for God to intervene by influencing the random number generator? The ultimate sin would be to make all the choices ourselves and not give God an opportunity to intervene with plausible deniability?
Less charitably, the thing Lewis is optimizing for is not creating the best possible future, but avoiding blame.
This seems like a great idea -- I haven't tried the GPTs yet, so I will just comment on the rest of the article.
In addition to the AI tutoring the student 1:1, I wonder whether it would be useful to match students with each other based on their current interest and skill level. For humans, it is often motivating to talk to other humans, and the problem is that kids in the same classroom are often not interested in the same topic, or they know much less than you, or they know much more than you so it is boring for them to talk to you. But if this system was used by many kids across the country, there would be a chance to find someone at the same level as you, who is trying to learn the same thing as you, so perhaps the AI could put you in the same temporary chat room and let you talk to each other. Basically create temporary classrooms.
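To make the matching idea concrete, here is a minimal sketch (all names and the bucketing scheme are hypothetical): group learners by topic and a coarse skill bucket, and open a temporary room whenever a bucket contains at least two people.

```python
# Hypothetical matchmaking sketch: pair up students who are currently
# studying the same topic at a similar skill level.
from collections import defaultdict

def match_students(students, bucket_size=10):
    """students: iterable of (name, topic, skill 0-99). Returns temporary rooms."""
    buckets = defaultdict(list)
    for name, topic, skill in students:
        buckets[(topic, skill // bucket_size)].append(name)
    # Only buckets with at least two students become chat rooms.
    return {key: names for key, names in buckets.items() if len(names) >= 2}

print(match_students([
    ("Alice", "fractions", 42),
    ("Bob",   "fractions", 47),
    ("Carol", "fractions", 91),  # too advanced for Alice and Bob's room
]))
# -> {('fractions', 4): ['Alice', 'Bob']}
```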
I have mixed opinions on using the workplace as an opportunity to learn, or even to determine what is worth learning. On one hand, yes, the job can help you find your blind spots, or the blind spots of your school; and then it would be great to get one more opportunity to learn all of that at school. On the other hand, there are also companies that use obsolete technologies, or use technologies the wrong way; you can meet overconfident colleagues who teach you various "anti-patterns" as their idea of best practices, and there is no one there to provide a balancing perspective... so I imagine some jobs could actually hurt your education a lot. Also, school education is usually more general: it teaches you the general principles of programming rather than the technical details of how to use FooLibrary-2.4.3, and it is the latter kind of knowledge that gets obsolete sooner.
I think you could apply rationality even in a universe with a random number generator, as long as most things are causal.
Even if the universe is causal, we still need a good strategy to think about it (i.e. rationality).
Curiosity killed the cat by exposing it to various "black swan" risks.
Does this actually have some point, even as a wrong metaphor, or is it just mathematical-looking word salad? I am too tired to figure this out.
I will just note that if this worked, it would be an argument for the impossibility of alignment of anything, since the "anthropocentric" part does not play any role in the proof. So even if all we had in the universe were two paperclip maximizers, it would be impossible to create an AI aligned to them both... or something like that.
We need research on whether atheists are more likely to suffer from akrasia.
If we take Julian Jaynes seriously, the human brain has a rational hemisphere and a motivating hemisphere. Religion connects these hemispheres, allowing them to work in synergy. Skepticism seems to split them.
Effective atheists are probably the ones who despite being atheists still believe in some kind of "higher power", such as fate or destiny or the spirit of history or some bullshit like that. This probably still activates the motivating hemisphere to some degree, only now instead of hearing a clear voice, only some nonverbal guidance is provided. Deep atheism probably silences the motivating hemisphere completely.
The question is, how to harness the power of the religious hemisphere without being religious (or believing some nominally non-religious bullshit). How to be fully rational and fully motivated at the same time.
Can we say something like "I know this is pure bullshit, but God please give me the power to accomplish my goals and smite my enemies!" and actually mean it? Is this what will unleash the true era of rationalist world optimization?
It’s not that consumers ask for one thing and get another, it’s that they get what they want but we think what they want is bad for them.
I think it makes sense to distinguish three situations:
- the consumer wants X, the company sells Y in a bottle labeled "X"
- the company sells X, telling everyone using advertising and bribed experts: "science proves that X makes you healthy, and lack of X makes you sick", but that is a lie
- the company sells X, everyone knows that X is bad for you, but the customers buy it anyway
The first is clearly a problem, most people would agree with that. The last probably cannot be avoided -- if you don't allow the customers to buy the product legally, they will buy it illegally -- plus there is a chance that the government is wrong.
It is the second case that bothers me. I don't think it is completely fair to say "customers want it", even if they kinda do, because they only want it because they are lied to. I wouldn't want the government to stop me from getting what I want, but I would want to be told clearly when someone is lying to me. (And yes, there is also a risk that the government would be wrong. But I don't think that it is a good solution to leave the lies unaddressed, or to let various people -- scientists and scammers alike -- say different things and expect the average person to sort it out without any more hints.)
So, I would like to see some sort of "scientific authority" that would have a monopoly on providing official medical recommendations, which would be clearly displayed on health-related products, or their absence would be obvious for everyone. Something like, each actual medicine contains a red rectangle with a logo saying "this is actual medicine", and no one is allowed to put anything similar on their product, unless FDA allows them. You are allowed to buy and sell stuff without the red rectangles, but everyone is told, repeatedly and unambiguously by media: "if it claims to have medical benefits, but it doesn't have the red rectangle, it's fraud -- always check the red rectangle". (The test criterion for "repeatedly and unambiguously" is that an average person with 80 IQ can tell you what the red rectangle means.)
I am not blaming you personally, but the Overton window contains population growth and not much else.
market size, better matching, more niches
Improving the population (genetically or by education) would have some effect here, too. Not literally more niches or bigger market size overall, but more niches for smart-people-related things, and more market demand for the stuff smart people buy.
you should understand how the foundations of math work before doing advanced math
Is this merely something that set theoreticians believe, or do mathematicians that are experts at other branches of math actually find set theory useful for their work?
Can you in practice use set theory to discover something new in other branches of math, or does it merely provide a different (and less convenient) way to express things that were already discovered otherwise?
Many statements are undecidable in ZFC; what impact does that have on using set theory as a foundation for other branches of math?
Yes, I think the reasonable objection is that "population growth" is only one way to achieve the (selfishly) desired outcome, and that it would be bad to focus on it to the exclusion of everything else.
For example, you could also get more research by increasing average human IQ, whether by genetic engineering or some form of eugenics. (The eugenics doesn't have to be coercive; we haven't picked even the lowest-hanging fruit of encouraging healthy young men with high IQ to donate more sperm.)
The existing smart humans could probably also be used much better. Education sucks; special education for gifted kids is a taboo at many places. Scientists waste a lot of time doing paperwork. Scientific articles are paywalled. Many people do bullshit jobs, because those pay well and sometimes you don't have the skills necessary to start your own company. (Or maybe we could just open borders for people with IQ over 150.)
Basically, seeing all this inefficiency makes "we need to increase the population" sound like motivated reasoning.
*
That all said, maybe it is a sad truth that all these things are politically difficult to fix, and population growth is, after all, the most likely way to actually get more research done.
Sorry for making this personal -- I had only 3 examples in mind, couldn't leave one out.
Would you agree with the statement that your meta-level articles are more karma-successful than your object-level articles?
Because if that is a fair description, I see it as a huge problem. (Not exactly as "you doing the wrong thing" but rather "the voting algorithm of LW users providing you a weird incentive landscape".) Because the object level is where the ball is! The meta level is ultimately there only to make us more efficient at the object level by indirect means. If you succeed at the meta level, then you should also succeed at the object level, otherwise what exactly was the point?
(Yours is a different situation from Roko's, who got lots of karma for an object-level article, and then wrote a few negative-karma comments, which was what triggered the censorship engine.)
The thing I am wondering about is basically this: If you write an article, saying effectively "Yudkowsky is silly for denying X", and you get hundreds of upvotes, what would happen if you consequently abandoned the meta level entirely, and just wrote an article saying directly "X". Would it also get hundreds of upvotes? What is your guess?
Because if it is the case that the article saying "X" would also get hundreds of upvotes, then my annoyance is with you. Why don't you write the damned article and bask in the warmth of rationalist social approval? Sounds like win/win to everyone concerned (perhaps except for Yudkowsky, but I doubt that he is happy about the meta articles either, so this still doesn't make it worse for him, I guess). Then the situation gets resolved and we all can move on to something else.
On the other hand, if it is the case that the article saying "X" would not get so many upvotes, then my annoyance is with the voters. I mean, what is the meaning of blaming someone for not supporting X, if you do not support X yourself? Then, I suspect the actual algorithm behind the votes was something like "ooh, this is so edgy, and I identify as edgy, have my upvote brother" without actually having a specific opinion on X. Contrarianism for contrarianism's sake.
(My guess is that the article saying "X" would indeed get much less karma, and that you are aware of that, which is why you didn't write it. If that is right, I blame the voters for pouring gasoline on the fire, supporting you to fight for something they don't themselves believe in, just because watching you fight is fun.)
Of course, as is usual when psychologising, this all is merely my guess and can be horribly wrong.
The problem with being excellent at many things is that life is short.
We can overcome that by cooperation of people with different talents.
The problem with cooperation of people with different talents is that they may have difficulty understanding each other. They may also not be aligned -- the mindset "I don't understand X at all, but if my colleague tells me that it is important, I will do the things he deems necessary, even if I do not clearly see the benefit, and it complicates my part of the job" is not guaranteed. Or perhaps we might say that the ability to cooperate also takes some points?
*
For example, to figure out how to best teach mathematics, you need someone who is great at teaching and great at mathematics. One of these things alone will not suffice. The math professor will probably recommend something difficult that most kids will fail at. The educational expert will probably invent some kind of new math that removes large parts of what everyone else understands as math. You need to find the rare person who excels at both.
Is then the problem solved? Actually, probably not, unless the person is also an expert at writing textbooks.
Okay, but is then the problem solved? Haha, not really, unless you are also good at marketing or politics. Good solutions matter only when people know about them. Otherwise you will sell 200 copies of the textbook and then go out of business.
(And the problem with alignment is that even if you find someone who is the world's best expert at selling textbooks... why would they promote specifically your textbook? I mean, if they only do it for money, they can probably make just as much or more money by selling someone else's textbook; or even better, selling horoscopes.)
Yes, but such humans are very rare. Can you provide a second example of comparable quality?
Can you make the sign arbitrarily small?
I think "extinction" is the proper word, because even an AI that wants to maximize suffering would probably at some moment replace humans with some engineered species that can feel more suffering per resources spent.
Yeah, that's what I tried to say. If you want to have a debate better than 4chan, but also feel bad whenever someone accuses you of censorship, you need to think about it and find a solution you would be satisfied with (while accepting that it may be imperfect), considering both sides of the risk.
Disabling the voting system or giving someone a dozen "balancing" upvotes whenever they accuse you of censorship / manipulation / hive mind only incentivizes people to keep accusing you of censorship / manipulation / hive mind. And maybe I am overreacting, but I think I already see a pattern:
- Zack cannot convince us of his opinions on the object level, so he instead keeps writing about how the rationalists are not sufficiently rational to accept his politically incorrect opinions (if you disagree with him, that only proves his point);
- Trevor keeps writing about how secret services are trying to manipulate the AI safety community, and how they like to use "clown attacks" i.e. manipulate people to associate the beliefs they want to suppress with low status (if you tell him this is probably crazy, that only proves his point);
- now Roko joined the group by writing a few comments that got downvoted (possibly rightfully), and then complaining that if you downvote him, you participate in the system of censorship (so if you downvote him, that only proves his point).
We have a long history of content critical of Less Wrong getting highly upvoted on Less Wrong. Which alone is a good thing -- if that criticism makes sense, and if the readers understand the paradoxes involved (such as: more tolerant groups will often get accused of intolerance more frequently, simply because they do not suppress such speech). Famously, Holden Karnofsky's criticism of the Singularity Institute (the previous name of Yudkowsky's organization) was among the top upvoted articles in 2012. And that was a good thing, because it allowed an honest and friendly debate between both sides.
But recently it seems to me that this is devolving into people upvoting cheap criticism, which seems optimized to exploit this pattern. Instead of writing a well-reasoned article whose central idea disagrees with the current LW consensus and letting the readers appreciate the nuances of the fact that such article was posted on LW, the posts are lower-effort and directly include some form of "if you disagree with me, that only proves my point". And... it works.
I would like to see less of this.
You think he faked the screenshot of being rate-limited?
No.
most people here don't want to participate in censorship, but are doing so accidentally
I think that many people simultaneously (1) have their preferences about what content they want to see more of, and what content they want to see less of, and (2) do not want to be accused of "censorship".
So they upvote the things they like and downvote the things they don't like, but if you accuse them of doing "censorship", it hurts their self-image, so they go and give a few upvotes to feel better about themselves. Not because they realize that they actually like the content they accidentally downvoted, but because they want to protect their self-image from an association with "censorship", even if it makes the website slightly less fun.
I typically use the karma button to express that I think the comment is generally good or generally bad, and the second button when I want to send a more nuanced signal -- for example, if I disagree with your opinion, but there is nothing wrong about the fact that you wrote it, that would be "×".
My opinion is that the "lazy" upvote/downvote system is useful, because if you make voting more costly, most people will not vote more carefully; they will simply vote less.
Sounds reasonable -- with greater (voting) power comes greater responsibility.
(And there is always the option to only use the normal votes.)
Looking at your profile, I can only see a single comment which is below -5 karma in the past 2 weeks. Are they hidden, or did some people take a corrective measure after reading this post?
I am similarly confused. Either the downvotes are very rare, in which case I think we do not need to change the automated moderation system. Or they were frequent before this was posted, which means it is trivial to manipulate LW readers into giving you lots of karma -- you just have to accuse them of censorship.
This post would be way more convincing for me with specific examples of comments that were downvoted but shouldn't be. We could agree or disagree, because there would be something specific to agree or disagree about. As it is now, it is basically just begging for more karma or having the karma restrictions lifted.
Is it not possible that, inside that randomly-seeming universe, an infinitude of macroscopic patterns exist (analogous to us), complex enough to track the similar macroscopic patterns they care about (analogous to our pressure), but written in a partitional language we don't discern?
Is homomorphic encryption an example of the thing you are talking about?
This could be done more easily using a "spellchecker" that would underline the negative forms.
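A minimal sketch of what I mean, assuming the goal is to flag common English negative forms (the real word list would of course depend on the context):

```python
# Toy "spellchecker" that marks negative forms so the writer can spot them.
import re

NEGATIVES = re.compile(r"\b(no|not|never|nothing|none|without|\w+n't)\b",
                       re.IGNORECASE)

def underline_negatives(text):
    """Wrap each negative form in <u>...</u>, like a squiggly underline."""
    return NEGATIVES.sub(lambda m: f"<u>{m.group(0)}</u>", text)

print(underline_negatives("Don't worry, there is nothing you can not do."))
# -> <u>Don't</u> worry, there is <u>nothing</u> you can <u>not</u> do.
```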
Meanwhile, I'd prefer to continue cooperating with my (small) Russian social circle.
What kind of cooperation do you mean?
- meeting each other in person -- yeah, this will be limited to your occasional visits back home
- talking online -- if you move too far, the time zones may complicate this
- business plans -- may not be possible because of sanctions; on the other hand, if the situation improves, having a foreign business partner could become an advantage
I'm unsure they have my better interests in mind
I don't know anything about them or their advice to you, so it's hard to say. I think the prior probability is low. People usually do what (they believe) is best for them, not for other people. Even if they want to do what is best for you, they may have a wrong idea about who you are and what you actually want. Also, people change, so even if today they have the right kind of plan, tomorrow they may change their minds.
Is there anything constructive I can do if I stay? Or is leaving the country and minimizing contacts the only ethical choice?
Unless you have a comparative advantage at organizing revolutions, I am not sure what you could achieve by staying. Leaving the country seems reasonable. I don't see the reason for minimizing contacts, though.
And generally, if anyone recommends you to break contacts with your friends, that is a huge red flag. Unless they are all literally criminals or drug users or something like that, but I doubt this is the case.
I wonder if every logical fallacy has a converse fallacy, and whether it would be useful to compose a list of fallacies arranged in pairs. Perhaps it would help us discover new ones, as missing pairs to something.
For example, some fallacies consist of taking a heuristic too seriously. Experts are often right about things, but an "argument by authority" assumes that this is true in 100% of situations. Similarly, wisdom of crowds, and an "argument by popularity". The converse fallacy would be ignoring the heuristic completely, even in situations where it makes sense. The opposite of argument by authority is listening to crackpots and taking them seriously. The opposite of argument by popularity is doing things that everyone avoids (usually to find out they were avoiding it for a good reason).
There is a specific example I have in mind, not sure if it has a name. Imagine that you are talking about quantum physics, and someone interrupts you by saying that people who do "quantum healing" are all charlatans. You object that you were not talking about those, but about actual physicists who do actual quantum physics. Then the person accuses you of committing the "No True Scotsman" fallacy -- because from their perspective, everyone they know who uses the word "quantum" is a charlatan, and you are just dismissing this lifelong experience entirely, and insisting that no matter how many quantum charlatans are out there, they don't matter, because certainly there is someone somewhere who does the "quantum" things scientifically. How many quantum healers do you have to observe before you can finally admit that the entire "quantum" thing is debunked?