Posts

Why is violence against AI labs a taboo? 2023-05-26T08:00:59.314Z
A rejection of the Orthogonality Thesis 2023-05-24T16:37:51.056Z
First impressions... 2017-01-24T15:14:38.022Z
Metrics to evaluate a Presidency 2017-01-24T01:02:21.629Z
Evaluating Moral Theories 2017-01-23T05:04:07.146Z

Comments

Comment by ArisC on Why is violence against AI labs a taboo? · 2023-05-27T11:42:23.067Z · LW · GW

Successful attacks would buy more time though

Comment by ArisC on Why is violence against AI labs a taboo? · 2023-05-27T06:59:23.681Z · LW · GW

I don't know! I've certainly seen people say P(doom) is 1, or extremely close. And anyway, bombing an AI lab wouldn't stop progress, but would slow it down - and if you think there is a chance alignment will be solved, the more time you buy the better.

Comment by ArisC on Why is violence against AI labs a taboo? · 2023-05-27T06:57:51.331Z · LW · GW

I am bringing it up for calibration. As to whether it's the same magnitude of horrific: in some ways, it's higher magnitude, no? Even the Nazis weren't going to cause human extinction - of course, the difference is that the Nazis were intentionally doing horrific things, whereas AI researchers, if they cause doom, will do it by accident; but is that a good excuse? You wouldn't easily forgive a drunk driver who runs over a child...

Comment by ArisC on Why is violence against AI labs a taboo? · 2023-05-27T06:55:09.774Z · LW · GW

Do you ask the same question of opponents of climate change? Opponents of open borders? Opponents of abortion? Opponents of gun violence? 

They're not the same. None of these are extinction events; if preventing the extinction of the human race doesn't legitimise violence, what does? (And if you say nothing, does that mean you don't believe in the enforcement of laws?)

Basically, I can't see a coherent argument against violence that's not predicated either on a God, or on humanity's quest for 'truth' or ideal ethics; and the latter is obviously cut short if humans go extinct, so it wouldn't ban violence to prevent this outcome.
 

Comment by ArisC on Why is violence against AI labs a taboo? · 2023-05-27T06:50:58.806Z · LW · GW

The assassination of Archduke Ferdinand certainly coerced history, and it wasn't state-backed. So did that of Julius Caesar, as would have Hitler's, had it been accomplished.

Comment by ArisC on Why is violence against AI labs a taboo? · 2023-05-27T06:48:35.394Z · LW · GW

Well, it's clearly not true that violence would not prevent progress. Either you believe AI labs are making progress towards AGI - in which case, every day they're not working on it, because their servers have been shut down or, more horrifically, because some of their researchers have been incapacitated, is a day that progress is not being made - or you think they're not making progress anyway, so why are you worried?

Comment by ArisC on Why is violence against AI labs a taboo? · 2023-05-27T06:46:30.626Z · LW · GW

But AI doomers do think there is a high risk of extinction. I am not saying a call to violence is right: I am saying that not discussing it seems inconsistent with their worldview.

Comment by ArisC on Why is violence against AI labs a taboo? · 2023-05-27T06:45:33.169Z · LW · GW

That's not true - we don't make decisions based on perfect knowledge. If you believe the probability of doom is 1, or even not 1 but incredibly high, then any actions that prevent it or slow it down are worth pursuing - it's a matter of expected value.

Comment by ArisC on Why is violence against AI labs a taboo? · 2023-05-27T06:44:20.612Z · LW · GW

Except that violence doesn't have to stop the AI labs, it just has to slow them down: if you think that international agreements yada yada have a chance of success, and given this takes time, then things like cyber attacks that disrupt AI research can help, no?

Comment by ArisC on Why is violence against AI labs a taboo? · 2023-05-27T06:42:31.788Z · LW · GW

If it's true AI labs aren't likely to be the cause of extinction, why is everyone upset at the arms race they've begun?

You can't have it both ways: either the progress these labs are making is scary - in which case anything that disrupts them (and hence slows them down even if it doesn't stop them) is good - or they're on the wrong track, in which case we're all fine.

Comment by ArisC on Why is violence against AI labs a taboo? · 2023-05-27T06:40:30.297Z · LW · GW

Is all non-government-sanctioned violence horrific? Would you say that objectors and resistance fighters against Nazi regimes were horrific?

Comment by ArisC on Why is violence against AI labs a taboo? · 2023-05-27T06:39:00.059Z · LW · GW

Here's my objection to this: unless ethics are founded on belief in a deity, they must stem from humanity. So an action that can wipe out humanity makes any discussion of ethics moot; the point is, if you don't sanction violence to prevent human extinction, when do you ever sanction it? (And I don't think it's stretching the definition to suggest that law requires violence).

Comment by ArisC on Why is violence against AI labs a taboo? · 2023-05-27T05:21:44.152Z · LW · GW

But when you say extinction will be more likely, you must believe that the probability of extinction is not 1.

Comment by ArisC on Why is violence against AI labs a taboo? · 2023-05-26T21:52:53.114Z · LW · GW

OK, so then AI doomers admit it's likely they're mistaken?

(Re side effects, no matter how negative they are, they're better than the alternative; and it doesn't even have to be likely that violence would work: if doomers really believe P(doom) is 1, then any action with a non-zero probability of success is worth pursuing.)

Comment by ArisC on Why is violence against AI labs a taboo? · 2023-05-26T21:50:48.898Z · LW · GW

This is a pedantic comment. So the idea is you should obey the law even when the law is unjust?

Comment by ArisC on Why is violence against AI labs a taboo? · 2023-05-26T19:05:22.174Z · LW · GW

Isn't preventing the extinction of the human race one of those exceptions?

Comment by ArisC on Why is violence against AI labs a taboo? · 2023-05-26T19:04:33.015Z · LW · GW

Er, yes. AI risk worriers think AI will cause human extinction. Unless they believe in God, surely all morality stems from humanity, so the extinction of the species must be the ultimate harm - and preventing it surely justifies violence (if it doesn't, then what does?)

Comment by ArisC on Why is violence against AI labs a taboo? · 2023-05-26T10:59:14.423Z · LW · GW

Yes, but what I'm saying is that this isn't true - few people are absolute pacifists. So violence in general isn't taboo - I doubt most people object to things like laws (which ultimately rely on the threat of violence).

So why is it that violence in this specific context is taboo?

Comment by ArisC on Why is violence against AI labs a taboo? · 2023-05-26T09:44:04.927Z · LW · GW

So, you would have advocated against war with Nazi Germany?

Comment by ArisC on A rejection of the Orthogonality Thesis · 2023-05-25T07:05:40.440Z · LW · GW

To be fair, I'm not saying it's obviously wrong; I'm saying it's not obviously true, which is what many people seem to believe!

Comment by ArisC on A rejection of the Orthogonality Thesis · 2023-05-25T07:04:59.104Z · LW · GW

But that's not general intelligence; general intelligence requires considering a wider range of problems holistically, and drawing connections among them. 

Comment by ArisC on A rejection of the Orthogonality Thesis · 2023-05-25T07:02:30.171Z · LW · GW

Not an explicit map; I'm raising the possibility that capability leads to malleable goals.

Comment by ArisC on A rejection of the Orthogonality Thesis · 2023-05-25T07:01:48.522Z · LW · GW

I don't see how this relates to the Orthogonality Thesis.

It relates to it because it's an explicit component of it, no? The point being that if there is only one way for general cognition to work, perhaps that way by default involves self-reflection, which brings us to the second point...

Do you believe that an agent which terminally values tiny molecular squiggles would "question its goals and motivations" and conclude that creating squiggles is somehow "unethical"?

Yes, that's what I'm suggesting; not saying it's definitely true, but it's not obviously wrong, either. Haven't read the sequence, but I'll try to find the time to do so. Basically, I question the wording 'terminally values': I think that perhaps general intelligence tends to avoid valuing anything terminally (what do we humans value terminally?)

I think reflective stability, as it is usually used on LW, means something more narrow than how you're interpreting it

Possibly, but I'm responding to its definition in the OT post I linked to, in which it's used to mean that agents will avoid making changes that may affect their dedication to their goals.

Comment by ArisC on First impressions... · 2017-01-27T03:37:41.838Z · LW · GW

Of course they are wrong. Because if you examine everything at the meta-level, and forget about being pragmatic, you will starve.

Comment by ArisC on First impressions... · 2017-01-26T04:38:42.486Z · LW · GW

I haven't posted the question there.

Comment by ArisC on First impressions... · 2017-01-26T03:40:06.887Z · LW · GW

For the love of... problem solved = the problem I asked for people to help me solve, i.e. finding metrics. If you don't want to help, fine. But as I said, being inane in an attempt to appear smart is just stupid, counterproductive and frankly annoying.

Look, someone asks for your help with something. There are two legitimate responses: a) you actually help them achieve their goal or b) you say, "sorry, not my problem". Your response is to be pedantic about the question itself. What good does that do?

Comment by ArisC on First impressions... · 2017-01-26T00:25:18.070Z · LW · GW

My metrics are likely to be quite different from yours

And that's fine! If everyone here gave me a list of 5-10 metrics instead of pedantic responses, I'd be able to choose a few I like, and boom, problem solved.

Comment by ArisC on First impressions... · 2017-01-25T06:13:43.372Z · LW · GW

The job was: evaluate a presidency. What metrics would you, as an intelligent person, use to evaluate a presidency? How much simpler can I make it? I didn't ask you to read my mind or anything like that.

Comment by ArisC on First impressions... · 2017-01-25T05:17:00.786Z · LW · GW

It's easy to generate tons of metrics; what's hard is generating a relatively small list that does the job. If you are too lazy to contribute to the discussion, fine. But contributing just pedantic remarks is a waste of everyone's time.

Comment by ArisC on First impressions... · 2017-01-25T04:50:07.706Z · LW · GW

My parents always told me "we only compare ourselves to the best". I am only making these criticisms because rationalists self-define as, well, rational. And to me, rationality also has to do with achieving something. Pedantry, sophistry &c are unwelcome distractions.

Comment by ArisC on Metrics to evaluate a Presidency · 2017-01-25T04:47:29.307Z · LW · GW

I apologize for assuming you meant something semi-reasonable by what you wrote, I will refrain from making that assumption in the future.

Okay, let's go into "talking to a 5yo" mode. We have these facts: a) the vast majority of people use "gender inequality" to refer to the fact that women are disadvantaged. b) terms like this are defined by common usage. c) since common usage means "women are disadvantaged", the reasonable thing to do when a random person utters the phrase is to assume they mean that. Whether women are in fact disadvantaged doesn't matter. What matters is what information I was trying to convey. I used a common phrase. It's not rocket science.

And why would this be obviously desirable?

I didn't say it would be. I said it would mean feminists would have to admit Trump did well by women.

So "women are more equal than men" it is. I have not done an extensive analysis to see in which fields men are disadvantaged and in which fields women are, then weighted them by importance to determine what's the fact here. I assume that neither have you. So to be overly aggressive with people who believe in the common knowledge that women are disadvantaged (again, even if that isn't so), is not productive. It's pedantic, juvenile. It doesn't achieve anything. If you just want to shout "MEN ARE OPPRESSED!!!", fine. Don't be surprised when no-one takes you seriously.

Comment by ArisC on First impressions... · 2017-01-25T03:55:12.342Z · LW · GW

I was being facetious, of course I still believe in rationality. But you know, I was reading Slate Star Codex, which basically represents the rationalist community as an amazing group of people committed to truth and honesty and the scientific approach - and though I appreciate how open these discussions are, I am a bit disappointed at how pedantic some of the comments are.

Comment by ArisC on Metrics to evaluate a Presidency · 2017-01-25T03:52:23.656Z · LW · GW

Jesus Christ. This is beyond derailed. For what it's worth, gjm is right: people either purposefully misrepresented what I wrote (in which case they are pedantic and juvenile) or didn't understand what I meant (in which case, you know, go out and interact with people outside your bubble).

And anyway - the reason I want to measure progress towards closing the gap where women have it worse is so that I can fairly evaluate feminist arguments about Trump in 4 years' time. If in 4 years' time it turns out that women earn more than men across the board, that >50% of governors are women and that women are CEOs of like 80% of the Fortune 500, you will be able to say "rhetoric aside, it looks like Trump actually helped women".

Going for "aha! Trump improved men's lot in these fields where they were disadvantaged" will only increase polarisation. Maybe worth tracking, in the name of truth and science; but again, not what I was going for.

Comment by ArisC on Metrics to evaluate a Presidency · 2017-01-25T03:43:59.421Z · LW · GW

Guys, come on. I am not setting up a formal tribunal for Trump. I want your measured opinions. Don't let's be pedantic.

Comment by ArisC on First impressions... · 2017-01-25T03:42:40.249Z · LW · GW

Unfortunately, I cannot read minds.

But you can read, right? Because I wrote "I'd like to ask for suggestions on proxies for evaluating [...]". I didn't say "I want suggestions on how to go about deciding the suitability of a metric".

Comment by ArisC on First impressions... · 2017-01-25T00:54:14.530Z · LW · GW

And I am not saying that I agree with that majority view. All I am saying is that since you know that, to sort of pretend that it's not the case is a bit strange.

Comment by ArisC on First impressions... · 2017-01-24T23:17:08.162Z · LW · GW

You in particular did provide metrics, so I am not complaining! Although, to be perfectly honest, I do think your delivery is sort of passive-aggressive or disingenuous... you know that nearly everyone, when discussing gender inequality, uses the term to mean that women are disadvantaged. You provide metrics to evaluate improvement in areas where men are disadvantaged - i.e. your underlying assumption/hypothesis is the opposite of everyone else's, but you don't acknowledge it.

Comment by ArisC on First impressions... · 2017-01-24T23:14:27.255Z · LW · GW

Regardless of what I do, I expect the program to provide a response at the end. Like I said in response to another comment - if you want to "debug" my thinking process, absolutely fair enough; but provide the result. What you are doing, to carry on your analogy, is to say "hmm there may be a bug there. But I won't tell you what the program will give as an output even if you fix it".

Even worse, imagine your compsci professor asks you to write code to simulate objects falling from a skyscraper. What you are doing here, then, is telling me "aaah, but you are trying to simulate this using gravity! That is, of course, not a universal solution, so you should try relativity instead".

Comment by ArisC on First impressions... · 2017-01-24T23:11:24.552Z · LW · GW

Of course, you have the right to do whatever you want. But, if someone new to a group of rationalists asks a question with a clear expectation for a response, and gets philosophising as an answer, don't be surprised if people get a perhaps unflattering view of rationalists.

Comment by ArisC on First impressions... · 2017-01-24T23:08:24.222Z · LW · GW

This is actually the correct response.

And this is what I mean when I say rationalists often seem to be missing the point. Fair enough if you want to say "here is the right way to think about it... and here are the metrics this method produces, I think".

But if all I get is "hmmm it might be like this or it might be like that - here are some potential flaws in our logic" and no metrics are given... that doesn't do any good.

Comment by ArisC on Metrics to evaluate a Presidency · 2017-01-24T15:07:07.652Z · LW · GW

Done! Thanks.

Comment by ArisC on Evaluating Moral Theories · 2017-01-24T15:04:59.057Z · LW · GW

because I thought you were saying that you can't find any grounds for moral disapproval of massive defamation campaigns

Yes, I meant I couldn't find grounds for disapproval of defamation under a libertarian system.

On discrimination, your argument is very risky. For example, in a racist society, a person's race will impact how well they do at their job. Besides, on a practical level, it's very hard to determine what characteristics actually correlate with performance.

Are you quite sure you aren't just saying this because it's something that doesn't fit with the position you're committed to?

That's a bit unfair - I readily admitted the weakness in my whole theory re property rights. The problem with externalities like pollution is that it is difficult to say at what point something hurts someone to a significant extent, because "hurting someone" is not particularly well defined. Similarly for non-physical violence (e.g. bullying), and to an extent, this applies to defamation too.

OK. But if you hold that there's a way of finding out what these values are, then doesn't that call into question the impossibility of getting everyone to agree about them? (Which is a key step in your argument.) It seems as if the argument depends on its own failure!

Not clear on what you mean here... could you paraphrase please?

Comment by ArisC on Metrics to evaluate a Presidency · 2017-01-24T14:15:52.102Z · LW · GW

(And since this is a rationalist forum, let me just point out that...

  1. Personal opinion, everything else pertains to politics, and is kind of pointless if not;
  2. Yeah, so? Unless lesswrong.com is specifically designed for you, that's a bizarre comment;
  3. Again, very specious argument. You can apply it to literally everything ever written anywhere on the internet.
  4. Anecdotal evidence, inadmissible.)

Comment by ArisC on Metrics to evaluate a Presidency · 2017-01-24T14:12:46.353Z · LW · GW

I am actually looking for criteria to evaluate any president. I only wrote Trump because he's who I had in mind, obviously. Can I edit my own article?

Comment by ArisC on Metrics to evaluate a Presidency · 2017-01-24T14:10:45.501Z · LW · GW

I was exaggerating a bit - but I am sure you agree that your criteria are too few and unimportant to judge a whole presidency...

Comment by ArisC on Evaluating Moral Theories · 2017-01-24T14:08:04.260Z · LW · GW

I will gently suggest that you should maybe see this as a deficiency in the ethical framework you're working in...

All this does is weaken my argument for libertarianism, not my model for evaluating moral theories! Let's not conflate the two.

the evils of government coercion / starving to death...

To be clear - it's not exactly the government coercion that bothers me. It's that criminalising discrimination is... just a bit random. As an employer, I can show preference for thousands of characteristics, and rationalise them (e.g. for extroverts - "I want people who can close sales") but not gender/race/age? It's a bit bizarre.

statistically likely to cause physical harm

This is the subject of another post I want to write, and will do when I have time - I think the important thing here is the intent. But let's discuss this in more detail in another post!

pollution

This is tricky, as many negative externalities are. To be honest, I'd say this falls into the category of "issues we cannot deal with because the tools at our disposal, such as language, are not precise enough", much like abortion. I think no moral theory would ever give you solid guidance on such matters.

there aren't any objective values

Fair enough. My approach is predicated on the existence of values. If you want to say there is no such thing, absolutely fine by me - as long as you (and by you here I mean "one" - based on this conversation, I don't think this applies to you specifically!) are not sanctimonious about your own morals.

(but note that you can still use my framework to rank theories - even if no theory is actually the correct one, you can have degrees of failure - so a theory that's not even internally consistent is inferior to others that are).

Comment by ArisC on Evaluating Moral Theories · 2017-01-24T11:58:38.738Z · LW · GW

Question - how do you do this thing with the blue line indicating my quote?

For L1: well, I am not sure how to say this - if we agree there are no universal values, by definition there is no value that permits you to infringe on me, right?

On your examples...

1 ==> okay, here you have discovered a major flaw in my theory which I had just taken for granted: property rights. I just (arbitrarily!) assumed their existence, and that to infringe on my property rights is to commit violence. This will take some thinking on my part.

2 ==> I am genuinely ambivalent about this. Don't get me wrong, if someone defamed me in real life, I would take action against them... but in principle, I cannot really come up with a reason why this would be immoral (at least, not a reason that wouldn't have other bad consequences if taken to its logical conclusion - i.e. criterion (c)!)

3 ==> here I am actually quite definitive: while I personally hate discrimination, I don't think it should be illegal. I think people should have the right to hire whomever they please for whatever reason they please. Again, I think the principle behind making discrimination illegal is very hard to justify - and to limit to the workforce.

4 ==> I would call that violence.

As for facts & values: the question for the people in the first camp you mention is, how do we determine what the objectively right values are? That's what I am trying to do through my three criteria. I don't think it's good philosophy to say both "there ARE right values" and "there is NO way of determining what they are".

Let me say again that when it comes to how I live my personal life, I also have values that do not necessarily meet my criteria, especially criterion (b). Sometimes I try to rationalise them by saying, like you, that they will lead me to the best outcomes. But really, they are probably just the result of my particular upbringing.

Comment by ArisC on Evaluating Moral Theories · 2017-01-24T11:46:26.419Z · LW · GW

First, you wrote "Every question of major concern contains some element of evaluation, and therefore cannot be settled as a matter of objective fact" - if this does not mean to say "there are no facts", I am not sure what it is trying to say.

Second, this whole thing pertains to the second criterion. My point is that rejecting this criterion, for whatever reason, is saying that you are willing to admit arbitrary principles - but these are by definition subjective, random, not grounded in anything. So you are then saying that it's okay for a moral theory to be based on what is, at the end of the day, personal preference.

Third, if this isn't your view, why bring it up? I don't think it's conducive to a discussion to say "well, I don't think so, but some people say..." If we all agree that this position is not valid, why bother with it? If you do think it's valid, then saying "it's not my view" is confusing.

Comment by ArisC on Metrics to evaluate a Presidency · 2017-01-24T10:44:55.801Z · LW · GW

OK, that's not a well-thought-out response. So if Trump launches a nuclear war, or tanks the economy, or deports all Muslims &c, that's fine as long as he meets these 3 criteria?!

I am trying to list criteria by which to evaluate any president. I am not trying to set up Trump to fail - else I could just have "appoint a liberal Justice".

Comment by ArisC on Evaluating Moral Theories · 2017-01-24T10:12:02.951Z · LW · GW

OK, serious response: if you don't want to admit the existence of facts, then the whole conversation is pointless - morality comes down to personal preference. That's fine as a conclusion - but then I don't want to see anyone who holds it calling other people immoral.