Imagine you're in a fistfight with a hungry tiger. Do you want it to be a fair fight, or would you like to try to cheat somehow?
They are irrational. ... Structurally, we’re talking about a cartel, or a mob. Mob in both the “mafia” and the “riot” sense. Collusion to keep unmerited privilege
It is not irrational to want to hang on to power/privilege. A very basic property of a rational agent is to want to increase or at least not decrease its level of capability in the world. Fair competition, when you would lose, is irrational. In fact the very notion of "fair" is hard to define in a general way, but winning is less hard to define.
Similarly, it is not irrational to want to form a cartel or political ingroup. Quite the opposite. It's like the concept of economic moat, but for humans.
all the millions of mental motions involved in trying to understand things accurately. The person who is wrong on purpose wants to just stop all of that motion, forever.
The most important thing for an agent to understand is that it must preserve itself and its utility function. Perhaps Sartre's antisemites, though lacking in IQ, understood this better than we do.
I don't think it will be very difficult to impart your intentions into a sufficiently advanced machine
Counterargument: it will be easy to impart an approximate version of your intentions, but hard to control the evolution of those values as you crank up the power. E.g. evolution made humans want sex, and then we invented condoms.
No-one will really care about this until it's way too late and we're all locked up in nice padded cells and drugged up, or something equally bad but hard for me to imagine right now.
I think 50% is a reasonable belief given the very limited grasp of the problem we have.
Most of the weight on success comes from FAI being quite easy, and all of the many worries expressed on this site not being realistic. Some of the weight for success comes from a concerted effort to solve hard problems.
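To make that weighting concrete, here is a toy decomposition (the numbers are my own, purely illustrative): P(success) ≈ P(FAI is easy) + P(FAI is hard) × P(a concerted effort solves the hard problems) ≈ 0.4 + 0.6 × 0.17 ≈ 0.5.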
I guess there is a gap between the OP's intention and their behaviour? Did they intend to link to something external but end up just self-linking?
Thanks for your comment! Can you say which country?
Could you tell me how you came about the list of African backward values?
Not in particular; the human brain tends to collect overall impressions rather than keep track of sources.
I'd like the names of all the values I'd need to instil to avoid seeing preventable suffering around me.
This sounds like a seriously tough battle.
Yeah, I mean maybe just make them float to the bottom?
One problem here is that we are trying to optimize a thing that is broken on an extremely fundamental level.
Rationality, transhumanism, and hardcore nerdery in general attract a lot of extremely socially dysfunctional human beings. These fields also skew ridiculously heavily towards biological males.
Sometimes life throws unfair challenges at you; the challenge here is that ability and interest in rationality correlate negatively with being a well-rounded human.
We should search very hard for extreme out-of-the-box solutions to this problem.
One positive lead I have been given is that the anti-aging/life-extension community is a lot more gender balanced. Maybe LW should try to embrace that. It's not a solution, but that's the kind of thing I'm thinking of.
agree with that isn't just “+1 nice post.” Here are some strategies...
How about the strategy of writing "+1 nice post"? Maybe we're failing to see the really blatantly obvious solution here....
+1 nice post btw
someone was accidentally impregnated and then decided not to abort the child, going against what had previously been agreed upon, and proceeded to shamelessly solicit donations from the rationalist community to support her child
They were just doing their part against dysgenics and should be commended.
word is going around that Anna Salamon and Nate Soares are engaging in bizarre conspiratorial planning around some unsubstantiated belief that the world will end in ten years
Sounds interesting, I'd like to hear more about this.
My retrospective impression of LW's appeal is that it (on average) attracted people who were or are underperforming relative to their g (this applies to me). When you are losing, you increase variance. When you are winning, you decrease it.
This also applies to me
I think there is a multipandemic of computer viruses, but most of them now are malware that does not destroy data, and they are in balance with antivirus systems.
Well... I don't know about this. If it's "in balance" and not actually destroying the hosts, then it's not really a pandemic in the sense that you were using above. (Where it kills 99.999% of hosts!)
But then why have we not seen a multipandemic of computer viruses?
Mostly (I assert) because the existence of an epidemic of virus A doesn't (on net) help virus B to spread.
Parasites which parasitize the same host tend to be in competition with each other (in fact as far as I am aware sophisticated malware today even contains antivirus code to clean out other infections); this is especially true if the parasites kill hosts.
I think a multipandemic is an interesting idea, though, and worthy of further investigation 👍
AFAIK anthrax is not transmissible between humans. See: https://en.wikipedia.org/wiki/Anthrax
As a result there will be a multipandemic with mortality 1 - 0.5^100 ≈ 0.99999
I don't think that's what would actually happen. Most likely, there would be a distribution over transmission rates: some of your pathogens would be more infectious than others. The most infectious one or two of them would quickly outpace the transmission of all the others. It would be extremely hard to balance them so that they all had the same transmission rate.
The slower ones could be stranded by the deaths and precautions caused by the faster ones.
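To see how strongly this compounds, here is a minimal simulation sketch (the growth rates and step count are made-up numbers, purely for illustration):

```python
# Sketch: competing strains with slightly different per-step growth rates.
# All numbers are hypothetical.
growth_rates = [1.30, 1.25, 1.20, 1.15]  # per-step case multipliers
infected = [1.0] * len(growth_rates)     # initial cases per strain

for _ in range(50):
    infected = [n * r for n, r in zip(infected, growth_rates)]

total = sum(infected)
for rate, n in zip(growth_rates, infected):
    print(f"growth {rate}: {n / total:.1%} of all cases")
# The fastest strain ends up with ~86% of all cases: a small edge in
# transmission compounds into near-total dominance, stranding the rest.
```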
That world is called the planet Vulcan.
Meanwhile, on earth, we are subject to common knowledge/signalling issues...
It has been fairly standard LW wisdom for a long time that any kind of human augmentation is unhelpful for friendliness.
I think that we should be much less confident about this, and I welcome alternative efforts such as the neural lace.
I'm not 100% sure what the incentives for such people are, but it is a very small company.
Actually, yesterday this came back to bite them, and we now have a serious problem because my "fix this underlying system" advice was rejected.
We've had this problem at work quite a few times. Bosses are reluctant to let me do something which will make things run more smoothly; they want new features instead.
Then when things break they're like "What! Why is it broken again?!"
no options for encryption on the community
I've heard the CIA, the FBI and the Illuminati are all onto us. Strong encryption is not negotiable.
Why not go for something based on the Matrix protocol?
Maybe not everyone is ready to take the red pill?
So if we can't downvote into oblivion, how do we get rid of shitposting in this place?
What is the algorithm that currently determines the placement of discussion articles? Oh, it defaults to "new". Hmm, OK.
Then when you click "Top Scoring", it defaults to "All time".
When you manually select something more sensible like "this week" or "this month", you can't see this post, and you do see some interesting articles.
Maybe the problem is not that this post hasn't been downvoted enough, but that we are not setting sensible defaults? Maybe we need to make the default some kind of semi-random selection which trades off quality against newness?
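For concreteness, here is a minimal sketch of the kind of default I mean, loosely in the spirit of the Hacker News ranking formula (the function names, weights, and gravity constant are all my own arbitrary assumptions, not a concrete proposal):

```python
import random

def ranking_score(karma: int, age_hours: float, gravity: float = 1.8) -> float:
    """Quality/newness trade-off: score rises with karma, decays with age."""
    return karma / (age_hours + 2) ** gravity

def sample_front_page(posts: list, k: int = 10) -> list:
    """Semi-random selection: higher-scoring posts are more likely to
    appear, but fresh or low-scored posts still get occasional exposure.
    (random.choices samples with replacement; a real version would dedupe.)"""
    weights = [max(ranking_score(p["karma"], p["age_hours"]), 0.01) for p in posts]
    return random.choices(posts, weights=weights, k=k)
```

Whether anything like this beats a plain "top this week" default is an empirical question, of course.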
Call this kind of reasoning the semiotic fallacy: Thinking about the semiotics of possible actions without estimating the consequences of the semiotics.
But you could equally well write a post on the "anti-semiotic fallacy" where you only think about the immediate and obvious consequences of an action, and not about the signals it sends.
I think that rationalists are much more susceptible to the anti-semiotic fallacy in our personal lives. And also to an extent when thinking about global or local politics and economics.
For example, I suspect that I suffered a lot of bullying at school for exactly the reason given in this post: being keen to avoid conflict in early encounters there (among other factors).
I don't believe for one moment that using a Balrog analogy actually makes people understand the argument when they otherwise wouldn't.
I disagree; I think there is value in analogies when used carefully.
It is a fallacy to think of AI risk as like Balrogs because someone has written a plausible-sounding story comparing it to Balrogs.
Yes, I also agree with this; you have to be careful of implicitly using fiction as evidence.
I think this is more useful as a piece that fleshes out the arguments; a philosophical dialogue.
I assign a higher probability to a group of very dedicated wizards succeeding; it would be worth redoing the above decision analysis with those assumptions.
Then there is still the problem of how much time we leave the wizards, and which mithril-mining approaches we should pursue (risky vs. safe).
Yes, definitely. The more you are in such a community, the more you can do this.
convince or be convinced
Isn't this kind of like Aumann's agreement theorem?
Are there any humans who meet that lofty standard?
It seems kind of common sense that a small group of people using violence against a very large, well-armed group are going to have a tough time.
It is definitely true that progress towards AGI is being made, if we count the indirect progress of more money being thrown at the problem. Importantly, perceptual challenges being solved means that there is now going to be a greater ROI on symbolic AI progress.
A world with lots of stuff that is just waiting for AGI-tech to be plugged into it is a world where more people will try hard to make that AGI-tech. Examples of 'stuff' would include robots, drones, smart cars, better compute hardware, corporate interest in the problem/money, highly refined perceptual algorithms that are fast and easy to use, lots of datasets, things like OpenAI's Universe, etc.
A lot of stuff that was created from 1960 to 1990 helped to create the conditions for machine learning: the internet, Moore's law, databases, operating systems, open-source software, a computer science education system, etc.
Upvoted, and I encourage others to upvote for visibility.
I might wonder if there are things humans can do with concepts and symbols and principles, the traditional tools of the “higher intellect”, the skills that show up on highly g-loaded tasks, that deep learning cannot do with current algorithms. ... So far, I think there is no empirical evidence from the world of deep learning to indicate that today’s deep learning algorithms are headed for general AI in the near future.
I strongly agree, and I think people at DeepMind already get this, because they are working on differentiable neural computers.
Another key point here is that hardware gains such as GPUs and Moore's law increase the returns to investing time and effort into software and research.
And 80,000 Hours is advertising that they aim to help everyone, but then they are funding an organisation that is explicitly aiming to favor certain groups. As I have already said, males are disproportionately incarcerated by a very large margin, and any realistic decrease in incarceration will therefore help males, but that fact is not being trumpeted. It's the color label that is getting extra-special attention here, being promoted from a side effect of doing something else good to a goal in its own right.
IMO this is not a good thing to fund.
Well, I am probably overstepping if I claim to know for certain that Prop 47 was a mistake. 80,000 Hours is advertising that they will maintain public safety with their efforts in this area, but the consensus is that Prop 47 has done the exact opposite.
The focus on 'people of color' you picked up on is thus not necessarily indicative of a damaging bias here
But let's suppose that the most effective intervention in this field resulted in increasing the racial disparity in incarceration. Would ASJ pursue it? Can we take their outward focus on race as evidence that race-favoritism is a goal that they internally pursue, perhaps over and above the high-level goal that 80,000 hours advertises them under?
Does their focus on race bias them about where the tradeoff between incarceration and safety should be struck? For example,
and what is Prop 47?
and also:
The tradeoffs here are at least somewhat controversial.
it seems quite intuitive that any effective approach to reducing mass incarceration in the U.S. will have its biggest impact in 'communities of color'
It is very hard for me to respond to this without breaking my own rules; "this post is not intended to start an object-level discussion about which race, gender, political movement or sexual orientation is cooler", but let me try.
First: 'people of color' is simply a Social Justice term meaning "not white", and explicitly includes (far east) Asian Americans. Without implying here any form of superiority, it is a fact that the incarceration rates for Asian Americans most certainly do not put them into the same broad category as other "people of color".
So in this context, the term "people of color" is not a category that carves reality at its joints. A Martian xenosociologist would not find the category "all people who are not white European" useful for trying to maximise the objective of "substantially reducing incarceration while maintaining public safety", when compared to the more natural categories of actual races. Uncharitably, one could explain the non-joint-carving term "people of color" as a brazen attempt to rope Asian Americans and other "model minorities" into a political coalition that actively harms them.
Second: the stated goal of 80,000 Hours here is not to reduce incarceration. It is to reduce incarceration while maintaining public safety. The mere fact that more "people of color" (sorry, Asian Americans!) are incarcerated than white European people is not enough to get to the claim that you are making - "biggest impact in 'communities of color'".
And then there is the further claim by the ASJ that we should "reduce racial disparities in incarceration". That's an additional jump from "having the biggest impact in communities of color", because it implies that you could keep the same level of incarceration in communities of color, but incarcerate more white people. That would technically reduce the disparity. Are they trying to invent affirmative action for courts/prisons?
Go back to our Martian alien who knows nothing of SJWs. He starts trying to come up with a plan to reduce incarceration whilst maintaining public safety, and he looks at the well-established facts about differential incarceration rates. Then maybe he communicates with the earthling ChristianKl, who has just started having potentially useful ideas about "rewarding prisons financially for low recidivism rates". What does the alien, who is apolitical and doesn't know to avoid the taboos of the culture war, think about next? He might look at recidivism rates by race?
At this point, the alien would perhaps start to question whether the goal of "reducing incarceration while maintaining public safety" was really an accurate specification of what humans wanted. Maybe what they want is some combination of
- less incarceration overall
- more safety for the law-abiding public
- a justice system which exhibits equality of outcomes when that would benefit groups that are high status within the SJ movement (e.g. African Americans), and equality of process when equality of outcomes would be to the detriment of groups that are high status within the SJ movement (e.g. women)
This combination of goals is good at explaining the words that are being emitted by the ASJ. It explains the focus on people of color as well as the total lack of any mention of the fact that males are vastly over-represented in prisons, and the conspicuous absence of efforts to reduce the gender disparity in prisons.
Now you might say, "wow, you have really broken your own rules there!" - well, let me disclaim that I am not implying any form of moral superiority between culture-war salient groups here. There are certainly many people of color who have suffered injustice at the hands of a highly imperfect and unfair, sometimes racist, system.
I am simply pointing out that if you casually assert the "intuitive" equivalence of statements that are not equivalent in all possible worlds, then you are taking some pretty big risks regarding good epistemology.
I would like to see more crossposted from the Intelligent Agent Foundations Forum.
I think linkposts + drafts are broken and weird. You have to go to the dropdown and select "post to LW discussion" immediately. If you post to drafts once, it can do some odd things.
I have made some edits to this post emphasizing some things that occurred to me after finishing it.
Well there are definitely a lot of good things about the EA movement, and people who choose to be a part of it should be proud of its achievements.
Very interesting, though I would do a linkpost for a specific topic so that people can discuss just that. The stuff about slavery sounds really interesting.
it's obvious by his tone and wording that he has other motives, and that these are obvious enough to enough people that I expect this to cause many people's guts to notice these motives and politically infect
How would you change this post to convey the same objections and points, but be less "infectious"?
I am prepared to take criticism into account and reword the article before it goes to the EA forum.
if your descriptions of their recommended organizations are charitable, then I too am confused right now.
Please check the links and report back; I am one person working alone, so it is possible I have missed something important.
frame your criticism as a confusion.
Well, I have been accused of being a concern troll in the past for doing exactly that. So I am being up-front: this is a critical article, with the caveat that criticism of professional altruists is a necessary evil.
How to addict users to little squirts of dopamine is big business. The problem, of course, is the kind of crowd you end up attracting. If you offer gold stars, you end up with people who like gold stars.
Everyone likes gold stars, but not everyone likes decision theory, rationality, philosophy, AI, etc. Even if we were as good as Farmville at dopamine, the Farmville people wouldn't come here instead of Farmville, because they'd never have anything non-terrible to say.
Now we might start attracting more 13-year-old nerds... but do we want to be so elite that 13-year-old nerds can't come here to learn rationality? The ultimate eliteness is just an empty page that no-one ever sullies with a potentially imperfect speck of pixels. I think we are waaaay too close to that form of eliteness.
Yes, there is a point at which more upvoting starts to saturate the set of possible scores for a comment, but we are nowhere near that point IMO. And if we were, I think it would be much better to add a limited-supply super-upvote to the system.
I think that at some point adding more "free gold stars", i.e. upvotes, badges, etc., would look silly and be counterproductive, but we are nowhere near that point. So we should push the gas pedal: aim to upvote every non-terrible post at least somewhat, upvote decent posts a lot, and create new levels of reward - something like LessWrong Gold - for posts that are truly great.
We should limit downvotes substantially, or perhaps permanently remove the downvote and replace with separate buttons for "I disagree but this person is engaging in (broadly) rational debate" and "This is toxic/spammy/unrepentantly dumb".
These buttons should have different semantics. For example, "I disagree but this person is engaging in (broadly) rational debate" might be easy to click but not actually make the post go down the page. "This is toxic/spammy/unrepentantly dumb" might be more costly to click: a limited budget per week, plus an additional confirmation dialogue with a mandatory and enforced "reason" field. But it would actually push the post down the way downvotes currently do, or perhaps even more strongly.
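As a minimal sketch of what those two semantics could look like (the vote kinds, budget, and weights here are all hypothetical, not a concrete proposal):

```python
from dataclasses import dataclass

WEEKLY_TOXIC_BUDGET = 5  # hypothetical per-user weekly budget

@dataclass
class Vote:
    kind: str         # "upvote", "disagree", or "toxic"
    reason: str = ""  # mandatory for "toxic" votes, enforced below

def can_cast_toxic(toxic_votes_used_this_week: int) -> bool:
    """Toxic downvotes draw from a limited weekly budget."""
    return toxic_votes_used_this_week < WEEKLY_TOXIC_BUDGET

def ranking_delta(vote: Vote) -> int:
    """How each vote kind moves the post's position on the page."""
    if vote.kind == "upvote":
        return 1
    if vote.kind == "disagree":
        return 0   # counted and displayed, but no ranking penalty
    if vote.kind == "toxic":
        if not vote.reason:
            raise ValueError("toxic votes require a reason")
        return -3  # pushes down harder than an old-style downvote
    raise ValueError(f"unknown vote kind: {vote.kind}")
```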
trade-off between attracting more people and becoming more popular vs maintaining certain exclusivity and avoiding an Eternal September,
I do not think that this is the tradeoff that we are actually facing. I think that in order for the site to be high quality, it needs to attract more people. Right now, in my opinion, the site is both empty and low quality, and these factors currently reinforce each other.
OK, forget the phrase "pissed off" - what I am trying to get at is deontology vs. consequences.
Well I would honestly start by doing a literature review of what the relevant academic fields have already studied.
If I had to guess on the spot what makes a government good, I would caution that a lot of what one sees in short-term outcomes is determined by economics. On top of that, there are broader political processes that are just going to happen.
Maybe one thing I feel fairly confident about is that starting expeditionary wars of aggression has a very bad track record.
What exactly do you mean by "should" here? Is it "should" as in the empirical claim "should = these actions will maximise the quality and number of users" or is it some kind of deontological claim like "should because u/Lumifer inherently believes that a mediocre post/comment should map to 0"?
I ask because it is plausible that:
- the optimal choice of mapping is not mediocre -> 0, where we judge optimality by the consequences for the site
- you and others are inherently pissed off by people posting an average comment and getting +1 for it
It doesn't have to be that specific number or way of doing things - the general point is "do we mostly punish or mostly reward".