Comments
Successful attacks would buy more time though
I don't know! I've certainly seen people say P(doom) is 1, or extremely close. And anyway, bombing an AI lab wouldn't stop progress, but it would slow it down - and if you think there is a chance alignment will be solved, the more time you buy, the better.
I am bringing it up for calibration. As to whether it's the same magnitude of horrific: in some ways, it's higher magnitude, no? Even Nazis weren't going to cause human extinction - of course, the difference is that the Nazis were intentionally doing horrific things, whereas AI researchers, if they cause doom, will do it by accident; but is that a good excuse? You wouldn't easily forgive a drunk driver who runs over a child...
Do you ask the same question of opponents of climate change? Opponents of open borders? Opponents of abortion? Opponents of gun violence?
They're not the same. None of these are extinction events; if preventing the extinction of the human race doesn't legitimise violence, what does? (And if your answer is "nothing", does that mean you don't believe in the enforcement of laws?)
Basically, I can't see a coherent argument against violence that's not predicated either on a God, or on humanity's quest for 'truth' or ideal ethics; and the latter is obviously cut short if humans go extinct, so it wouldn't ban violence to prevent this outcome.
The assassination of Archduke Ferdinand certainly coerced history, and it wasn't state-backed. So did that of Julius Caesar, as would have Hitler's, had it been accomplished.
Well, it's clearly not true that violence would not prevent progress. Either you believe AI labs are making progress towards AGI - in which case, every day they're not working on it, because their servers have been shut down or, more horrifically, because some of their researchers have been incapacitated, is a day that progress is not being made - or you think they're not making progress anyway, so why are you worried?
But AI doomers do think there is a high risk of extinction. I am not saying a call to violence is right: I am saying that not discussing it seems inconsistent with their worldview.
That's not true - we don't make decisions based on perfect knowledge. If you believe the probability of doom is 1, or even not 1 but incredibly high, then any actions that prevent it or slow it down are worth pursuing - it's a matter of expected value.
Except that violence doesn't have to stop the AI labs, it just has to slow them down: if you think that international agreements yada yada have a chance of success, and given this takes time, then things like cyber attacks that disrupt AI research can help, no?
If it's true AI labs aren't likely to be the cause of extinction, why is everyone upset at the arms race they've begun?
You can't have it both ways: either the progress these labs are making is scary - in which case anything that disrupts them (and hence slows them down even if it doesn't stop them) is good - or they're on the wrong track, in which case we're all fine.
Is all non-government-sanctioned violence horrific? Would you say that objectors and resistance fighters against Nazi regimes were horrific?
Here's my objection to this: unless ethics are founded on belief in a deity, they must stem from humanity. So an action that can wipe out humanity makes any discussion of ethics moot; the point is, if you don't sanction violence to prevent human extinction, when do you ever sanction it? (And I don't think it's stretching the definition to suggest that law requires violence.)
But when you say extinction will be more likely, you must believe that the probability of extinction is not 1.
OK, so then AI doomers admit it's likely they're mistaken?
(Re side effects, no matter how negative they are, they're better than the alternative; and it doesn't even have to be likely that violence would work: if doomers really believe P(doom) is 1, then any action with a non-zero probability of success is worth pursuing.)
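To make the expected-value claim here explicit, here is a minimal sketch in notation of my own choosing - p, c and V are illustrative symbols, not anything from the original comments:

```latex
% Sketch, assuming P(doom) = 1 in the absence of any intervention.
% V = value placed on humanity's survival, p = probability the action succeeds,
% c = total cost of the action (including side effects). All symbols illustrative.
\[
  \mathbb{E}[\text{gain}] \;=\; p \cdot V \;-\; c ,
\]
% This is positive exactly when p > c / V; if V is treated as effectively
% unbounded relative to any finite cost, then any p > 0 makes the action look
% worthwhile in expectation, which is the (contested) reasoning in the comments.
```

The whole argument hinges on letting V dwarf every finite cost, which is exactly the premise being contested elsewhere in this thread.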
This is a pedantic comment. So the idea is you should obey the law even when the law is unjust?
Isn't the prevention of the extinction of the human race one of those exceptions?
Er, yes. AI risk worriers think AI will cause human extinction. Unless they believe in God, surely all morality stems from humanity, so the extinction of the species must be the ultimate harm - and preventing it surely justifies violence (if it doesn't, then what does?)
Yes but what I'm saying is that this isn't true - few people are absolute pacifists. So violence in general isn't taboo - I doubt most people object to things like laws (which ultimately rely on the threat of violence).
So why is it that violence in this specific context is taboo?
So, you would have advocated against war with Nazi Germany?
To be fair, I'm not saying it's obviously wrong; I'm saying it's not obviously true, which is what many people seem to believe!
But that's not general intelligence; general intelligence requires considering a wider range of problems holistically, and drawing connections among them.
Not an explicit map; I'm raising the possibility that capability leads to malleable goals.
I don't see how this relates to the Orthogonality Thesis.
It relates to it because it's an explicit component of it, no? The point being that if there is only one way for general cognition to work, perhaps that way by default involves self-reflection, which brings us to the second point...
Do you believe that an agent which terminally values tiny molecular squiggles would "question its goals and motivations" and conclude that creating squiggles is somehow "unethical"?
Yes, that's what I'm suggesting; not saying it's definitely true; but it's not obviously wrong, either. Haven't read the sequence, but I'll try to find the time to do so - but basically I question the wording 'terminally values'. I think that perhaps general intelligence tends to avoid valuing anything terminally (what do we humans value terminally?)
I think reflective stability, as it is usually used on LW, means something more narrow than how you're interpreting it
Possibly, but I'm responding to its definition in the OT post I linked to, in which it's used to mean that agents will avoid making changes that may affect their dedication to their goals.
Of course they are wrong. Because if you examine everything at the meta-level, and forget about being pragmatic, you will starve.
I haven't posted the question there.
For the love of... problem solved = the problem I asked for people to help me solve. I.e. finding metrics. If you don't want to help, fine. But as I said, being inane in an attempt to appear smart is just stupid, counterproductive and frankly annoying.
Look, someone asks for your help with something. There are two legitimate responses: a) you actually help them achieve their goal or b) you say, "sorry, not my problem". Your response is to be pedantic about the question itself. What good does that do?
My metrics are likely to be quite different from yours
And that's fine! If everyone here gave me a list of 5-10 metrics instead of pedantic responses, I'd be able to choose a few I like, and boom, problem solved.
The job was: evaluate a presidency. What metrics would you, as an intelligent person, use to evaluate a presidency? How much simpler can I make it? I didn't ask you to read my mind or anything like that.
It's easy to generate tons of metrics, what's hard is generating a relatively small list that does the job. If you are too lazy to contribute to the discussion, fine. But contributing just pedantic remarks is a waste of everyone's time.
My parents always told me "we only compare ourselves to the best". I am only making these criticisms because rationalists self-define as, well, rational. And to me, rationality also has to do with achieving something. Pedantry, sophistry &c are unwelcome distractions.
I apologize for assuming you meant something semi-reasonable by what you wrote, I will refrain from making that assumption in the future.
Okay, let's go into "talking to a 5yo" mode. We have these facts: a) the vast majority of people use "gender inequality" to refer to the fact that women are disadvantaged; b) terms like this are defined by common usage; c) since common usage means "women are disadvantaged", the reasonable thing to do when a random person utters the phrase is to assume that's what they're referring to. Whether women are in fact disadvantaged doesn't matter. What matters is what information I was trying to convey. I used a common phrase. It's not rocket science.
"And why would this be obviously desirable?" I didn't say it would be. I said it would mean feminists would have to admit Trump did well by women.
So "women are more equal than men" it is. I have not done an extensive analysis to see in which fields men are disadvantaged and in which fields women are, then weighted them by importance to determine what's the fact here. I assume that neither have you. So to be overly aggressive with people who believe in the common knowledge that women are disadvantaged (again, even if that isn't so), is not productive. It's pedantic, juvenile. It doesn't achieve anything. If you just want to shout "MEN ARE OPPRESSED!!!", fine. Don't be surprised when no-one takes you seriously.
I was being facetious, of course I still believe in rationality. But you know, I was reading Slate Star Codex, which basically represents the rationalist community as an amazing group of people committed to truth and honesty and the scientific approach - and though I appreciate how open these discussions are, I am a bit disappointed at how pedantic some of the comments are.
Jesus Christ. This is beyond derailed. For what it's worth, gjm is right, people are either purposefully misrepresenting what I wrote (in which case they are pedantic and juvenile) or they didn't understand what I meant (in which case, you know, go out and interact with people outside your bubble).
And anyway - the reason I want to measure progress towards closing the gap where women have it worse is so that I can fairly evaluate feminist arguments about Trump in 4 years time. If in 4 years time it turns out that women earn more than men across the board, that >50% of governors are women and that women are CEOs of like 80% of the Fortune 500, you will be able to say "rhetoric aside, it looks like Trump actually helped women".
Going for "aha! Trump improved men's lot in these fields where they were disadvantaged" will only increase polarisation. Maybe worth tracking, in the name of truth and science; but again, not what I was going for.
Guys, come on. I am not setting up a formal tribunal for Trump. I want your measured opinions. Don't let's be pedantic.
Unfortunately, I cannot read minds.
But you can read, right? Because I wrote "I'd like to ask for suggestions on proxies for evaluating [...]". I didn't say "I want suggestions on how to go about deciding the suitability of a metric".
And I am not saying that I agree with that majority view. All I am saying is that since you know that, to sort of pretend that it's not the case is a bit strange.
You in particular did provide metrics, so I am not complaining! Although, to be perfectly honest, I do think your delivery is sort of passive-aggressive or disingenuous... you know that nearly everyone, when discussing gender inequality, uses the term to mean that women are disadvantaged. You provide metrics to evaluate improvement in areas where men are disadvantaged - i.e. your underlying assumption/hypothesis is the opposite of everyone else's, but you don't acknowledge it.
Regardless of what I do, I expect the program to provide a response at the end. Like I said in response to another comment - if you want to "debug" my thinking process, absolutely fair enough; but provide the result. What you are doing, to carry on your analogy, is to say "hmm there may be a bug there. But I won't tell you what the program will give as an output even if you fix it".
Even worse, imagine your compsci professor asks you to write code to simulate objects falling from a skyscraper. What you are doing here, then, is telling me "aaah, but you are trying to simulate this using gravity! That is, of course, not a universal solution, so you should try relativity instead".
Of course, you have the right to do whatever you want. But, if someone new to a group of rationalists asks a question with a clear expectation for a response, and gets philosophising as an answer, don't be surprised if people get a perhaps unflattering view of rationalists.
This is actually the correct response.
And this is what I mean when I say rationalists often seem to be missing the point. Fair enough if you want to say "here is the right way to think about it... and here are the metrics this method produces, I think".
But if all I get is "hmmm it might be like this or it might be like that - here are some potential flaws in our logic" and no metrics are given... that doesn't do any good.
Done! Thanks.
because I thought you were saying that you can't find any grounds for moral disapproval of massive defamation campaigns
Yes, I meant I couldn't find grounds for disapproval of defamation under a libertarian system.
On discrimination, your argument is very risky. For example, in a racist society, a person's race will impact how well they do at their job. Besides, on a practical level, it's very hard to determine what characteristics actually correlate with performance.
"Are you quite sure you aren't just saying this because it's something that doesn't fit with the position you're committed to?" That's a bit unfair - I readily admitted the weakness in my whole theory re property rights. The problem with externalities like pollution is that it is difficult to say at what point something hurts someone to a significant extent, because "hurting someone" is not particularly well defined. Similarly for non-physical violence (e.g. bullying), and to an extent, this applies to defamation too.
OK. But if you hold that there's a way of finding out what these values are, then doesn't that call into question the impossibility of getting everyone to agree about them? (Which is a key step in your argument.) It seems as if the argument depends on its own failure!
Not clear on what you mean here... could you paraphrase please?
(And since this is a rationalist forum, let me just point out that...
- Personal opinion, everything else pertains to politics, and is kind of pointless if not;
- Yeah, so? Unless lesswrong.com is specifically designed for you, that's a bizarre comment;
- Again, very specious argument. You can apply it to literally everything ever written anywhere on the internet.
- Anecdotal evidence, inadmissible.)
I am actually looking for criteria to evaluate any president. I only wrote Trump because that's who I had in mind, obviously. Can I edit my own article?
I was exaggerating a bit - but I am sure you agree that your criteria are too few and unimportant to judge a whole presidency...
I will gently suggest that you should maybe see this as a deficiency in the ethical framework you're working in...
All this does is weaken my argument for libertarianism, not my model for evaluating moral theories! Let's not conflate the two.
"the evils of government coercion / starving to death..." To be clear - it's not exactly the government coercion that bothers me. It's that criminalising discrimination is... just a bit random. As an employer, I can show preference for thousands of characteristics, and rationalise them (e.g. for extroverts - "I want people who can close sales") but not gender/race/age? It's a bit bizarre.
"statistically likely to cause physical harm" This is the subject of another post I want to write, and will do when I have time - I think the important thing here is the intent. But let's discuss this in more detail in another post!
"pollution" This is tricky, as many negative externalities are. To be honest, I'd say this falls into the category of "issues we cannot deal with because the tools at our disposal, such as language, are not precise enough", much like abortion. I think no moral theory would ever give you solid guidance on such matters.
"there aren't any objective values" Fair enough. My approach is predicated on the existence of values. If you want to say there is no such thing, absolutely fine by me - as long as you (and by you here I mean "one" - based on this conversation, I don't think this applies to you specifically!) are not sanctimonious about your own morals.
(but note that you can still use my framework to rank theories - even if no theory is actually the correct one, you can have degrees of failure - so a theory that's not even internally consistent is inferior to others that are).
Question - how do you do this thing with the blue line indicating my quote?
For L1: well, I am not sure how to say this - if we agree there are no universal values, by definition there is no value that permits you to infringe on me, right?
On your examples...
1 ==> okay, here you have discovered a major flaw in my theory which I had just taken for granted: property rights. I just (arbitrarily!) assumed their existence, and that to infringe on my property rights is to commit violence. This will take some thinking on my part.
2 ==> I am genuinely ambivalent about this. Don't get me wrong, if someone defamed me in real life, I would take action against them... but in principle, I cannot really come up with a reason why this would be immoral (at least, not a reason that wouldn't have other bad consequences if taken to its logical conclusion - i.e. criterion (c)!)
3 ==> here I am actually quite definitive: while I personally hate discrimination, I don't think it should be illegal. I think people should have the right to hire whomever they please for whatever reason they please. Again, I think the principle behind making discrimination illegal is very hard to justify - and to limit to the workforce.
4 ==> I would call that violence.
As for facts & values: the question for the people in the first camp you mention is, how do we determine what are the objectively right values? That's what I am trying to do through my three criteria. I don't think it's good philosophy to say both "there ARE right values" and "there is NO way of determining what they are".
Let me say again that when it comes to how I live my personal life, I also have values that do not necessarily meet my criteria, especially criterion (b). Sometimes I try to rationalise them by saying, like you, that they will lead me to the best outcomes. But really, they are probably just the result of my particular upbringing.
First, you wrote "Every question of major concern contains some element of evaluation, and therefore cannot be settled as a matter of objective fact" - if this does not mean to say "there are no facts", I am not sure what it is trying to say.
Second, this whole thing pertains to the second criterion. My point is that rejecting this criterion, for whatever reason, is saying that you are willing to admit arbitrary principles - but these are by definition subjective, random, not grounded in anything. So you are then saying that it's okay for a moral theory to be based on what is, at the end of the day, personal preference.
Third, if this isn't your view, why bring it up? I don't think it's conducive to a discussion to say "well, I don't think so, but some people say..." If we all agree that this position is not valid, why bother with it? If you do think it's valid, then saying "it's not my view" is confusing.
OK that's not a well thought out response. So if Trump launches a nuclear war, or tanks the economy, or deports all Muslims &c, that's fine as long as he meets these 3 criteria?!
I am trying to list criteria by which to evaluate any president. I am not trying to set up Trump to fail - else I could have just included "appoint a liberal Justice".
OK, serious response: if you don't want to admit the existence of facts, then the whole conversation is pointless - morality comes down to personal preference. That's fine as a conclusion - but then I don't want to see anyone who holds it calling other people immoral.