Comment by danarmak on Meaning and Moral Foundations Theory · 2018-04-08T18:01:48.154Z · score: 3 (1 votes) · LW · GW

Loyalty, authority, and fairness are also about other people. A lone person can't be loyal, authoritative, or fair; you have to be those things to someone else.

And, as I've been saying, Harm/Care is also about the conduct of the individual: do you harm others or care for them?

Comment by danarmak on Meaning and Moral Foundations Theory · 2018-04-08T17:20:04.863Z · score: 3 (1 votes) · LW · GW

"I can't learn the material for you" as opposed to "if you want to climb Mt Everest, you have to do it for yourself rather than for someone else".

I'm not sure I understand the difference, can you make it more explicit?

"I can't learn the material for you": if I learn it, it won't achieve the goal of you having learned it, i.e. you knowing the material.

"I can't climb the mountain for you": if I climb it, the prestige and fun will be mine; I can't give you the experience of climbing the mountain unless you climb it yourself.

The two cases seem the same...

if people care about others being pure, it seems they can just as easily care about others being caring. And that we should think about people trying to observe the norm of caring and making sure others do, rather than trying to care effectively. Is that right?

Yes, that's what I think is happening: people observing norms and judging others on observing them, rather than on achieving goals efficiently or achieving more. As consequentialists, we want to save everyone. Morally, we don't judge people harshly for not saving everyone as long as they're doing their best - and we don't expect them to make an extraordinary effort.

And so, I don't see a significant difference between Harm/Care and the other foundations.

Comment by danarmak on Is Rhetoric Worth Learning? · 2018-04-08T16:39:46.079Z · score: 3 (1 votes) · LW · GW

I did not mean to misrepresent what lawyers do (or are allowed to do). I noted they are restricted by lawyer ethics, but that was in a different comment than the one you replied to. Yes, absolutely, they're not supposed to lie or even deliberately mislead, and a lawyer's reputation would suffer horribly if they were caught in a lie.

I'm not sure I understand people who aren't OK with ethical lawyers, as a concept. Is there something they would like instead of lawyers? (See: my other comment.) Or do they feel that lawyers are immoral by association with injustice - the intuition of "moral contagion" (I forget the correct term) that someone who only partially fixes a moral wrong is worse than someone who doesn't try to fix it at all?

Comment by danarmak on Meaning and Moral Foundations Theory · 2018-04-08T16:30:34.944Z · score: 3 (1 votes) · LW · GW

Harm/Care is unusual among the foundations in that it's other-directed. The goal is to help other people, and it does not especially matter how that occurs. [...] In contrast, the other foundations centre on the moral actor themselves. I cannot be just, loyal, a good follower, or pure for you.

It seems to me that Harm/Care isn't as different as you say. Native (evolved) morality is mostly deontological. The object of moral feelings is the act of helping, not the result of other people being better off. "The goal is to help other people" sounds like a consequentialist reformulation. Helping a second party to help a third party may not be efficient, but morality isn't concerned with efficiency.

In contrast, the other foundations centre on the moral actor themselves. I cannot be just, loyal, a good follower, or pure for you.

I could say: yes, I can be just *to* you, loyal *to* you, a good follower *of* you. And pure *for* you too - think about purity pledges, aka "save it *for* your future spouse".

In all these cases, morality is about performance - deontology - rather than about accomplishing a goal. But each case does have an apparent goal, so our System 2 can apply consequentialist logic to it. Why do you treat Harm/Care differently?

Comment by danarmak on Is Rhetoric Worth Learning? · 2018-04-07T19:44:36.225Z · score: 14 (4 votes) · LW · GW

I think my definition of rhetoric is the same as the OP's: namely, the art of shaping words or a speech to be beautiful, moving, convincing, or otherwise effective; in short, how best to verbally convince others of an idea. I think that's a useful thing to have a term for.

In particular the OP referred to dispositio (concise, addressing the right points) and pronuntiatio (body language and delivery).

I’m not convinced this is true.

I'm not sure what exactly you're not convinced of. That speech is much more effective when its form is liked as well as its object-level claims?

Comment by danarmak on Is Rhetoric Worth Learning? · 2018-04-07T19:34:21.793Z · score: 2 (1 votes) · LW · GW

I don’t think that’s true. Lots of people are bothered by this. Maybe you’re right, maybe a majority is unbothered, but this is interesting only to the extent that it doesn’t embody a larger pattern of what proportion of people care about injustice.

I agree that most people are bothered by anything they perceive as injustice. But if they don't know a way to make things better, or what things being better would look like, then they tend not to blame e.g. lawyers for participating in the system and being good at it.

Is there a better way of doing things, that lots of people would prefer to be the case? Not just "I wish judges applied the law fairly and for Justice" - then you might as well wish for people not to commit crimes in the first place. But a system that would work when being gamed by people desperate not to go to jail?

Alternatively, is there a relevant moral principle that people can follow unilaterally that would make the world a better place (other than deontologically)? If we tell a defendant not to hire a lawyer, or a lawyer not to argue as well as they can (while keeping to lawyer ethics), or the jury not to listen to the lawyers - then the side that doesn't cooperate will win the trial, or the jury will ignore important claims, and justice won't be better served on average.

Comment by danarmak on Is Rhetoric Worth Learning? · 2018-04-07T19:05:36.095Z · score: 4 (2 votes) · LW · GW

Are you sure? I’ve met a lot of people (“average people”, not rationalists) who take the view of “yeah, he can talk real impressively, but it’s all bullshit, no doubt”. Many people like “simple talk”, i.e. speech that simply lays out facts, and are suspicious of impressive/skillful rhetoric.

I've met fewer people like that, but then I'm not a native English speaker, so not all of the speechifying I'm exposed to is in English.

It sounds like the thing being described is in part a desire for the speaker to talk in a particular dialect and style, associated with a social class or background, with the appropriate choice of acrolect/mesolect/basilect, etc.

Do you think such people are unaware that the "plain" speakers they like still harness rhetoric, just one tailored to that audience? Good plain speech still needs to be concise, address the right points, have good body language, good delivery (e.g. not stutter or repeat yourself), and in the end say things that the audience will like as speech as well as on the object level. An untrained speaker will rarely carry an audience, however plainly they speak.

Comment by danarmak on Is Rhetoric Worth Learning? · 2018-04-07T11:47:13.720Z · score: 25 (8 votes) · LW · GW

This seems to depend a lot on social context.

Lawyers are the quintessential speakers-for-hire who apply rhetoric to mercenary causes. Yet lawyers are accepted and high-status in most parts of society. On the playful side, debate clubs are often popular; and at a recent LW meetup we played an Ideological Turing Test game, where we had to convince each other of positions we didn't always hold.

Lawyers don't hide the fact they're biased and mercenary in court. Maybe that helps them disassociate enough from the practice that people don't feel uncomfortable with them in a personal setting. And yet, most people are not bothered by the idea that the justice system runs in part on rhetorical persuasiveness, or that the wider political system runs in part on politicians convincing voters of things. People compliment politicians on their public speaking skills without thinking "dark arts!" every time.

In general, the average person's reaction to "wow what a skilled orator" isn't "therefore I won't listen to them, persuasiveness is orthogonal to truth". How do you reconcile this with your analysis of "betraying cooperative norms", which people are usually good at enforcing?

Note: I'm not very familiar with modern lawyers and exactly how important rhetoric is to them. In classical antiquity it was extremely important; that is relevant insofar as it partially explains why rhetoric is present in today's classical liberal education.

Comment by danarmak on The worst trolley problem in the world · 2018-02-27T18:59:55.392Z · score: 1 (1 votes) · LW · GW

Unlike the regular trolley problem, this one is similar to moral choices made by many people every day.

As you walk down the street, you see a man point a gun at another. Do you interpose your body to save the stranger's life at the cost of your own?

Down another street, a man points a gun at you. There's a bystander nearby. Do you hide behind this human shield to save your life?

Some people will view this question differently from the trolley one because the attacker is a human and can be assigned blame. Substitute a grizzly bear attacking a group of campers. Do you shield another with your body, or conversely pull another's body in front of yours?

Comment by danarmak on The dark arts: Examples from the Harris-Adams conversation · 2017-07-23T11:21:41.302Z · score: 2 (2 votes) · LW · GW

I agree, and I didn't mean to imply otherwise.

Comment by danarmak on The dark arts: Examples from the Harris-Adams conversation · 2017-07-22T15:17:14.592Z · score: 0 (0 votes) · LW · GW

The correlation between moral objectivism and interventionism is probably real, but I think it's historically contingent, not a logical consequence of objectivism. Whether I think of my morality as objective (universal) or subjective (a property of myself) is orthogonal to what I actually think is moral.

I'm a moral relativist. My morality is that torture and murder are wrong and I am justified and, indeed, sometimes enjoined to use force to stop them. I don't think this is an uncommon stand.

Other people are moral objectivists, but their actual morals may tell them to leave others alone except in self-defense.

Comment by danarmak on The dark arts: Examples from the Harris-Adams conversation · 2017-07-22T14:51:41.250Z · score: 8 (8 votes) · LW · GW

I haven't listened to the debate (I'd read it if it was transcribed), but I want to object to a part of your post on a meta level, namely the part where you say:

To me, he is very far from a model for a rationalist

Being able to effectively convince people, to reliably influence their behavior, is perhaps the biggest general-purpose power a human can have. Don't dismiss an effective arguer as "not rationalist". On the contrary, acknowledge them as a scary rationalist more powerful than you are.

The word "rationalist" means something fairly narrow. We shouldn't make it into an applause light, a near synonym of "people we like and admire and are allied with". Being reliably effective, on the other hand, is a near synonym of being rational(ist).

If Adams employed "dark arts" in his debate, the only thing that necessarily means is that he wasn't engaged in an honest effort to discover the truth. But that's not news - it was a public debate staged in order to convince the audience! So Adams used a time-honored technique of achieving this goal - how very rational of him. At least, it's rational if he succeeded, and I assume you think he did succeed in convincing some of the audience, otherwise you wouldn't bother to post a denunciation.

Similarly, the name "Dark Arts" is misleading. They are (if I may channel Professor Quirrell for a moment) extremely powerful Arts everyone should cultivate if they can, and use where appropriate: not when honestly conversing with a fellow rationalist to discover the truth, but when aiming to convince people who are not themselves trained in rationality, and who (in your estimation) will not come by their beliefs rationally, whether or not they end up believing the truth.

This is a near cousin of politics (in the social sense, not the government sense). Politics is a mind-killer and it's important to keep politics-free spaces for various purposes including the pursuit of truth. But we should not say "rationalists should not engage in politics", any more than "rationalists should never try to convince non-rationalists of anything".

ETA: I'm not claiming Adams is a rationalist or is good at being a rationalist; I'm not familiar enough with him to tell. I'm only claiming that the fact that he is or tries to be a good persuader in a debate and uses Dark Arts isn't evidence that he isn't one.

Comment by danarmak on How long has civilisation been going? · 2017-07-22T14:16:27.439Z · score: 1 (1 votes) · LW · GW

I feel the briefness of history is inseparable from its speed of change. Once the agricultural revolution got started, technology kept progressing and we got where we are today quite quickly - and that's despite several continental-scale collapses of civilization. So it's not very surprising that we are now contemplating various X-risks: to an external observer, humanity is a very brief phenomenon going back, and so it's likely to be brief going forward as well. Understanding this on an intuitive level helps when thinking about the Fermi paradox or the Doomsday Argument.

Comment by danarmak on How To Build A Community Full Of Lonely People · 2017-05-20T19:22:48.491Z · score: 0 (0 votes) · LW · GW

I think you're mostly right about that, but not entirely. The two realms are not so clearly separated. There are social hangouts on the Internet. There are social hangouts, of both kinds, where people talk shop. There are programming blogs and forums where social communities emerge. And social capital and professional reputation feed into one another.

Comment by danarmak on The robust beauty of improper linear models · 2017-05-20T16:35:16.923Z · score: 2 (2 votes) · LW · GW

So that's the real role of the expert here

I work in the data science industry - as a programmer, not a data scientist or statistician. From my general understanding of the field, what you're describing is a broadly accepted assumption. But I might be misled by the fact that the company I work for bases its product on this assumption, so I'm not sure whether you're just describing this thing from another angle, whether there's a different point I'm missing, or whether, in fact, many people spend too much effort trying to hand-tune models.

The data scientists I work with make predictive models in two stages. The first one is to invent (or choose) "features", which include natural variables from the original dataset, functions acting on one or more variables, or supplementary datasets that they think are relevant. The data scientist applies their understanding of statistics as well as domain knowledge to tell the computer which things to look for and which are clearly false positives to be ignored. And the second stage is to build the actual models using mostly standard algorithms like Random Forest or XGBoost or whatnot, where the data scientist might tweak arguments but the underlying algorithm is generally given and doesn't allow for as much user choice.

A common toy example is the Titanic dataset. This is a list of passengers on the Titanic, with variables like age, name, ticket class, etc. The task is to build a model that predicts which ones survived when the ship sank. A data scientist would mostly work on feature engineering, e.g. introducing a variable that deduces a passenger's sex from their name, and focus less on model tuning, e.g. determining the exact weight that should be given to the feature in the model (women and children had much higher rates of survival).
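
To make the two stages concrete, here's a minimal sketch in Python with pandas and scikit-learn, assuming the standard Kaggle Titanic columns (Name, Age, Pclass, Survived); the file name, title list, and age threshold are illustrative choices, not a description of any particular team's pipeline:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Load the Titanic training data (file name assumed).
df = pd.read_csv("train.csv")

# Stage 1, feature engineering: deduce sex from the honorific in the
# name ("Braund, Mr. Owen Harris" -> "Mr"), plus a crude is-child flag.
df["Title"] = df["Name"].str.extract(r",\s*([^.]+)\.", expand=False)
female_titles = ["Mrs", "Miss", "Mme", "Mlle", "Ms", "Lady"]  # illustrative
df["IsFemale"] = df["Title"].isin(female_titles).astype(int)
df["IsChild"] = (df["Age"].fillna(30) < 13).astype(int)

# Stage 2, modelling: a standard algorithm with near-default arguments.
X = df[["Pclass", "IsFemale", "IsChild"]]
y = df["Survived"]
model = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(model, X, y, cv=5).mean())  # mean CV accuracy
```

The division of labor is the point: the human invents Title/IsFemale/IsChild; the algorithm decides how much each one matters.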

In a more serious example, a data scientist might work on figuring out which generic datasets are relevant at all. Suppose you're trying to predict where to best open a new Starbucks branch. Should you look at the locations of competing coffee shops? Noise from nearby construction? Public transit stops or parking lots? Nearby tourist attractions or campuses or who knows what else? You can't really afford to look at everything, it would both take too long (and maybe cost too much) and risk false positives. A good domain expert is the one who generates the best hypotheses. But to actually test those hypotheses, you use standard algorithms to build predictive models, and if a simple linear model works, that's a good thing - it shows your chosen features were really powerful predictors.

Comment by danarmak on How To Build A Community Full Of Lonely People · 2017-05-20T16:04:01.041Z · score: 1 (1 votes) · LW · GW

Even computer programmers who spent the majority of their working output working alone can benefit a lot from having good connections when it comes to finding good jobs.

People skills have great value for programmers, and finding jobs is a very small part of it. I write this from personal experience.

Programmers are still people. The amount of great software any one person can write in their lifetime is very limited. Teaching or convincing others (from coworkers to the rest of the world) to agree with you on what makes software great, to write great software themselves, and to use it, are the greatest force multipliers any programmer can have, just like in most other fields.

Sometimes there are exceptions; one may invent a new algorithm or write some new software that everyone agrees is great. But most of the time you have to convince people - not just less-technical managers making purchasing decisions, but other programmers who don't think that global mutable state is a problem ("really, it worked fine in my grandfather's time and it's good enough for me").

Comment by danarmak on Don't Shoot the Messenger · 2017-05-20T15:10:56.499Z · score: 1 (1 votes) · LW · GW

I don't understand that viewpoint for a different reason. Suppose you believe the world will be destroyed soon. Why is that a reason not to have children? Is it worse for the children to live short but presumably good lives than not to live at all?

Comment by danarmak on Stupidity as a mental illness · 2017-02-12T17:52:20.122Z · score: 1 (1 votes) · LW · GW

The post doesn't say that all religion is stupidity. It says that one of the things we call stupidity is subconscious conditioning, and one of the common cases of such conditioning is religion. A subset of religion and a subset of stupidity, intersecting. Do you think that's wrong?

Comment by danarmak on The Alpha Omega Theorem: How to Make an A.I. Friendly with the Fear of God · 2017-02-11T20:37:04.006Z · score: 1 (1 votes) · LW · GW

Why do you assume any of this?

If our universe is test simulation, it is a digital experiment to test something,

That's a tautology. But if you meant "if our universe is a simulation", then why do you think it must be a test simulation in particular? As opposed to a research simulation to see what happens, or a simulation to make qualia because the simulated beings' lives have value to the simulators, or a simulation for entertainment value, or anything else.

if it include AI, it is probably designed to test AI behaviour by putting it in complex moral dilemmas.

Maybe the desired outcome from the simulators' point of view is to develop a paperclipping AI that isn't swayed by human moral arguments. Maybe the simulation is really about the humans, and AIs are just inevitable byproducts of high-tech humans. There are lots of maybes. Do you have any evidence for this, conditional on being a simulation?

Comment by danarmak on Stupidity as a mental illness · 2017-02-11T16:22:17.171Z · score: 2 (2 votes) · LW · GW

I was under the impression that two generations ago, Freudian psychotherapy was all the rage and pretty much universal in certain high-status social circles? Of course, it probably didn't help anyone much. I think that "there's something mentally wrong with many/most people, maybe even everyone by default" has existed for decades as a common belief in some places.

Comment by danarmak on Stupidity as a mental illness · 2017-02-11T16:14:56.573Z · score: 2 (2 votes) · LW · GW

Another reason to find a cure for stupidity first, then.

Comment by danarmak on Stupidity as a mental illness · 2017-02-11T16:14:02.036Z · score: 1 (1 votes) · LW · GW

"Stupidity is a mental illness" is the only scary label. But it is a REALLY SCARY label.

That's the point of this post, I think.

Mental illness is a very scary label - because it's a terrible thing to be. And we should work hard on being able to cure mental illness.

Stupid is an equally terrible thing to be - terrible to yourself and to your friends and to society at large. We should work just as hard on making people not-stupid as we do on making them not-depressed. But we don't actually work hard on that, and that's a real problem.

Comment by danarmak on Stupidity as a mental illness · 2017-02-11T16:11:34.551Z · score: 3 (3 votes) · LW · GW

Do you think it's factually untrue, or normatively wrong, or something?

Comment by danarmak on The Alpha Omega Theorem: How to Make an A.I. Friendly with the Fear of God · 2017-02-11T16:03:37.571Z · score: 2 (2 votes) · LW · GW

I think your argument (if true) would prove too much. If we admit your assumptions:

  1. Clearly, the universe as it is fits A-O's goals, otherwise A-O would have intervened and changed it already.
  2. Anything we (or the new AI) do to change the universe must align with A-O's goals to avoid conflict.
  3. Since we do not assume anything about A-O's goals or values, we can never choose to change the universe in one direction over its opposite. Humans exist, A-O must want it that way, so we will not kill them all. Humans are miserable, A-O must want it that way, so we will not make them happy.

Restating this, you say:

If the superintelligence is actually as powerful as it is, yet chooses to allow humans to exist, chances are that humans serve its purposes in some way. Therefore, in a very basic sense, the Alpha Omega is benevolent or friendly to humans for some reason.

But you might as well have said:

If the superintelligence is actually as powerful as it is, yet chooses to allow humans to keep suffering, dying, and torturing and killing one another, chances are that human misery serves its purposes in some way. Therefore, in a very basic sense, the Alpha Omega is malevolent or unfriendly to humans for some reason.

Comment by danarmak on Open thread, Nov. 14 - Nov. 20, 2016 · 2016-11-21T18:23:03.677Z · score: 1 (1 votes) · LW · GW

Or this: Do you think the fact that a president-elect does this has any harmful effect on other people's behaviour?

That was, in fact, what I meant.

Comment by danarmak on Open thread, Nov. 14 - Nov. 20, 2016 · 2016-11-19T22:41:15.102Z · score: 1 (1 votes) · LW · GW

That doesn't change the fact that he's willing to publically say things that are generally understood as signals for racism.

Do you think the fact he does this is significantly harmful?

Comment by danarmak on Open thread, Nov. 14 - Nov. 20, 2016 · 2016-11-19T16:45:14.598Z · score: 0 (0 votes) · LW · GW

First you misrepresent Scott Alexander's post. Scott didn't write that the media invented the narrative that Trump is racist.

You're right, he didn't. I don't know who invented it - maybe it's always been around. Scott merely said that the media promote it and make it popular. I'll amend my post.

Trump himself came up with it because it was a good way to get attention. Trump purposefully spoke about how Mexico sends rapists to create that narrative. At least that's the version if you think Trump has at least a tiny shred of awareness of the moves he makes.

I didn't follow Trump's campaign. If you're talking about something other than Scott's point 6 (What about Trump’s “drugs and crime” speech about Mexicans?) then I don't know about it. Scott apparently couldn't find anything Trump said during his campaign that would make him out to be clearly racist. Do you think he's just wrong about this?

I don't remember anybody in the rationality community attacking Trump based on the theory that the main problem with him is that he's racist.

Not the main problem, no. I had the impression that many denunciations of Trump included "racist" in the general litany of accusations, but now I'm not so sure. The only thing I could find in five minutes is that Scott Aaronson called Trump a "racist lunatic", and that wasn't even in his main post on Trump, but as an aside. So yes, you're right about this.

On the other hand describing interaction of Trump with the mob isn't profitable.

I was thinking less about concrete past actions like that, and more about the character traits Scott listed that I quoted: "incompetent thin-skinned ignorant boorish fraudulent omnihypocritical demagogue". Most of these seem just fine as attack narratives for the media. Maybe they just didn't catch on, or the "market" tended towards a single simple narrative dominating.

the characterization of Trump's ghostwriter who spend 1 1/2 years with him provides valuable information about his character

That sounds valuable, at least if we can be certain that he's speaking up due to personal convictions and has no hidden interests or biases. I've now read the New Yorker article about him. (Like I said, I tried not to follow the US election cycle.)

Thanks for correcting me about the above.

Comment by danarmak on Open thread, Nov. 14 - Nov. 20, 2016 · 2016-11-17T11:02:00.070Z · score: 7 (9 votes) · LW · GW

Scott Alexander posted You Are Still Crying Wolf, with disabled comments, so I'm asking this here. I would make this a Discussion post but don't want to disrupt too much the LW norm of not discussing US politics.

His thesis is that Trump is not racist any more than any other US president in the last few decades. And that the (anti-Trump) media invented (edit: or at least promoted) this charge, convinced a lot of other people, maybe convinced itself, and never stopped (and probably won't stop in the future) because no-one would want to question or ask for evidence of Trump being Bad. All this sounds to me plausible and supported by the linked evidence.

Scott then says that Trump is "an incompetent thin-skinned ignorant boorish fraudulent omnihypocritical demagogue", etc. He'd like the media to accuse Trump of this and not of being racist.

Question: how certain are you, and why, that these charges are much more true than the one about racism? Do they not come from, or at least via, the same media sources?

I'm taking the outside view. I'm not American and most of what I know about Trump comes from denunciations in the online rationalist community. So when I hear an admission that many of the charges against Trump were lies (and that Scott didn't want to draw attention to this before the election), I must update towards thinking any other given charge is likely to be a lie too.

Comment by danarmak on VARIABILITY OF NUCLEAR DECAY RATES · 2016-11-14T18:24:51.491Z · score: 0 (0 votes) · LW · GW

Seems like a variable like this wouldn't be modeled in a sim...?

I don't know anything about physics, but if that were true, why not equally predict that lots of things that definitely exist wouldn't be included in a sim? Both the laws of physics and the actual universe seem to be a lot more complex than what's needed to simulate a classical Earth in a solar system.

Of course, that assumes you know what is and isn't of interest to the simulators, e.g. because it's an ancestor simulation.

Comment by danarmak on Open thread, October 2011 · 2016-10-18T19:39:39.934Z · score: 1 (1 votes) · LW · GW

Thank you, your point is well taken.

Comment by danarmak on Open thread, October 2011 · 2016-10-17T18:33:21.681Z · score: 0 (0 votes) · LW · GW

I'm not sure what you mean by conflict between individuals.

If you mean actual conflict like arguing or fighting, then choosing between donating to save five hungry people in Africa vs. two hungry people in South America isn't a moral choice, if nobody can observe your online purchases (let alone counterfactual ones) and develop a conflict with you. Someone who secretly invents a cure for cancer doesn't have moral reasons to cure others, because they don't know he can and are not in conflict with him.

If you mean conflict between individuals' own values, where each hungry person wants you to save them, then every single decision is moral, because there are always people who'd prefer you give them your money instead of doing anything else with it, and there are probably people who want you dead as a member of a nationality, ethnicity, or religion. Apart from the unpleasant implications of this variant of utilitarianism, you said you didn't want to label all decisions as moral.

Comment by danarmak on Open thread, October 2011 · 2016-10-14T18:51:54.792Z · score: 0 (0 votes) · LW · GW

Some people think that any value, if it is the only value, naturally tries to consume all available resources. Even if you explicitly make a satisficing, non-maximizing value (e.g. "make 1000 paperclips", not just "make paperclips"), a rational agent pursuing that value may consume infinite resources making more paperclips just in case it's somehow wrong about already having made 1000 of them, or in case some of the ones it has made are destroyed.

On this view, all values need to be able to trade off against one another (which implies a common quantitative utility measure). Even if it seems obvious that the chance you're wrong about having made 1000 paperclips is very small, and that you shouldn't invest more resources in that instead of working on your next value, this needs to be explicit and quantified.

In this case, since all values inherently conflict with one another, all decisions (between actions that would serve different values) are moral decisions in your terms. I think this is a good intuition pump for why some people think all actions and all decisions are necessarily moral.

Comment by danarmak on Barack Obama's opinions on near-future AI [Fixed] · 2016-10-13T23:19:20.108Z · score: 5 (7 votes) · LW · GW

Joi Ito said several things that are unpleasant but are probably believed by most people, and so I am glad for the reminder.

JOI ITO: This may upset some of my students at MIT, but one of my concerns is that it’s been a predominately male gang of kids, mostly white, who are building the core computer science around AI, and they’re more comfortable talking to computers than to human beings. A lot of them feel that if they could just make that science-fiction, generalized AI, we wouldn’t have to worry about all the messy stuff like politics and society. They think machines will just figure it all out for us.

Yes, you would expect non-white, older women who are less comfortable talking to computers to be better suited to dealing with AI friendliness! Their life experience of structural oppression helps them formally encode morals!

ITO: [Temple Grandin] says that Mozart and Einstein and Tesla would all be considered autistic if they were alive today. [...] Even though you probably wouldn’t want Einstein as your kid, saying “OK, I just want a normal kid” is not gonna lead to maximum societal benefit.

I should probably get a daily reminder that most people would not, in fact, want their kid to be as smart, impactful, and successful in life as Einstein, and that they prefer "normal", not-too-much-above-average kids.

Comment by danarmak on An attempt in layman's language to explain the metaethics sequence in a single post. · 2016-10-13T09:47:14.556Z · score: 1 (1 votes) · LW · GW

Technically, you could believe that people are equally allowed to be enslaved.

In a sense, the ancient Romans did believe this. Anyone who ended up in the same situation - either taken as a war captive or unable to pay their debts - was liable to be sold as a slave. So what makes you think your position is objectively better than theirs?

"All men are created equal" emerges from two or more basic principles people are born with. You might say: "Look, you have value, yah? And your loved ones? Would they stop having value if you forgot about them? No? They have value whether or not you know them? How did you conclude they have value? Could that have happened with other people, too? Would you then think they had value? Would they stop having value if you didn't know them? No? Well, you don't know them; do they have value?

This assumes without argument that "value" is something people intrinsically have or can have. If instead you view value as value-to-someone, i.e. I value my loved ones, but someone else might not value them, then there is no problem.

And it turns out that yes, most people did not have an intuition that anyone has intrinsic value just by virtue of being human. Most people throughout history assigned value only to ingroup members, to the rich and powerful, and to personally valued individuals. The idea that people are intrinsically valuable is historically very new, still in the minority today globally, and for both these reasons doesn't seem like an idea everyone should naturally arrive at if they only try to universalize their intuitions a bit.

Comment by danarmak on An attempt in layman's language to explain the metaethics sequence in a single post. · 2016-10-12T22:13:13.881Z · score: 0 (0 votes) · LW · GW

Do you think the idea is sufficiently coherent and non-self-contradictory that the way to find out if it's right or wrong is to look for evidence?

Yes, I think it is coherent.

Ideological Turing test: I think your theory is this: there is some set of values, which we shall call Morals. All humans have somewhat different sets of lower-case morals. When people make moral mistakes, they can be corrected by learning or internalizing some relevant truths (which may of course be different in each case). These truths can convince even actual humans to change their moral values for the better (as opposed to values changing only over generations), as long as these humans honestly and thoroughly consider and internalize the truths. Over historical time, humans have approached closer to true Morals, and we can hope to come yet closer, because we generally collect more and more truths over time.

the way to find out if it's right or wrong is to look for evidence?

If you mean you don't have any evidence for your theory yet, then how or why did you come by this theory? What facts are you trying to explain or predict with it?

Remember that by default, theories with no evidence for them (and no unexplained facts we're looking for a theory about) shouldn't even rise to the level of conscious consideration. It's far, far more likely that if a theory like that comes to mind, it's due to motivated reasoning. For example, wanting to claim your morality is better by some objective measure than that of other people, like slavers.

by the way, understanding slavery might be necessary, but not sufficient to get someone to be against it. They might also need to figure out that people are equal, too.

That's begging the question. Believing that "people are equal" is precisely the moral belief that you hold and ancient Romans didn't. Not holding slaves is merely one of many results of having that belief; it's not a separate moral belief.

But why should Romans come to believe that people are equal? What sort of factual knowledge could lead someone to such a belief, despite the usually accepted idea that should cannot be derived from is?

Comment by danarmak on Barack Obama's opinions on near-future AI · 2016-10-12T21:46:58.189Z · score: 1 (1 votes) · LW · GW

IIRC that's what happened to me as well. I had a working post, then edited the description, and the link was gone and I couldn't bring it back.

Comment by danarmak on Barack Obama's opinions on near-future AI · 2016-10-12T15:50:56.051Z · score: 1 (1 votes) · LW · GW

Do you know what went wrong or what's the difference in making a working link post?

Comment by danarmak on Barack Obama's opinions on near-future AI · 2016-10-12T14:59:06.200Z · score: 1 (1 votes) · LW · GW

I don't see a link. Was it lost like in my link post on a different subject? I still don't know how to post links correctly.

Comment by danarmak on An attempt in layman's language to explain the metaethics sequence in a single post. · 2016-10-12T14:55:20.414Z · score: 4 (4 votes) · LW · GW

Without commenting on whether this presentation matches the original metaethics sequence (with which I disagree), this summary argument seems both unsupported and unfalsifiable.

  1. No evidence is given for the central claim, that humans can and are converging towards a true morality we would all agree about if only we understood more true facts.
  2. We're told that people in the past disagreed with us about some moral questions, but we know more and so we changed our minds and we are right while they were wrong. But no direct evidence is given for us being more right. The only way to judge who's right in a disagreement seems to be "the one who knows more relevant facts is more right" or "the one who more honestly and deeply considered the question". This does not appear to be an objectively measurable criterion (to say the least).
  3. The claim that ancients, like Roman soldiers, thought slavery was morally fine because they didn't understand how much slaves suffer is frankly preposterous. Roman soldiers (and poor Roman citizens in general) were often enslaved, and some of them were later freed (or escaped from foreign captivity). Many Romans were freedmen or their descendants - some estimate that by the late Empire, almost all Roman citizens had at least some slave ancestors. And yet somehow these people, who both knew what slavery was like and were often in personal danger of it, did not think it immoral, while white Americans in no danger of enslavement campaigned for abolition.

Comment by danarmak on Open thread, October 2011 · 2016-10-12T14:02:14.254Z · score: 1 (1 votes) · LW · GW

I've been told that people use the word "morals" to mean different things. Please answer this poll or add comments to help me understand better.

When you see the word "morals" used without further clarification, do you take it to mean something different from "values" or "terminal goals"?

[pollid:1165]

Comment by danarmak on Reasonable Requirements of any Moral Theory · 2016-10-12T13:32:43.903Z · score: 1 (1 votes) · LW · GW

So "morals" is used to mean the same as "values" or "goals" or "preferences". It's not how I'm used to encountering the word, and it's confusing in comparison to how it's used in other contexts. Humans have separate moral and a-moral desires (and beliefs, emotions, judgments, etc) and when discussing human behavior, as opposed to idealized or artificial behavior, the distinction is useful.

Of course every field or community is allowed to redefine existing terminology, and many do. But now, whenever I encounter the word "moral", I'll have to remind myself I may be misunderstanding the intended meaning (in either direction).

Comment by danarmak on Reasonable Requirements of any Moral Theory · 2016-10-11T11:32:50.914Z · score: 0 (0 votes) · LW · GW

I'm confused. Is it normal to regard all possible acts and decisions as morally significant, and to call a universal decision theory a moral theory?

What meaning does the word "moral" even have at that point?

Comment by danarmak on Open thread, Oct. 10 - Oct. 16, 2016 · 2016-10-10T21:56:24.645Z · score: 0 (0 votes) · LW · GW

Of course not. Then you meant simply the success of the goals of the group's creators?

Comment by danarmak on Reasonable Requirements of any Moral Theory · 2016-10-10T21:54:20.036Z · score: 1 (3 votes) · LW · GW

The author says a moral theory should:

  • "Cover how one should act in all situations" (instead of dealing only with 'moral' ones)
  • Contain no contradictions
  • "Cover all situations in which somebody should perform an action, even if this “somebody” isn’t a human being"

In other words, a decision theory, complete with an algorithm (so you can actually use it), and a full set of terminal goals. Not what anyone else means by "moral theory".

Comment by danarmak on Open thread, Oct. 10 - Oct. 16, 2016 · 2016-10-10T21:32:39.038Z · score: 0 (0 votes) · LW · GW

Is the 'success' of a group its number of members, regardless of actual activity?

Comment by danarmak on Open thread, Oct. 10 - Oct. 16, 2016 · 2016-10-10T21:32:01.072Z · score: 0 (0 votes) · LW · GW

I agree there would probably only be one successful AGI, so it's not the first step of many. I meant it would be a step in that direction. Poor phrasing on my part.

Comment by danarmak on Open thread, Oct. 10 - Oct. 16, 2016 · 2016-10-10T16:18:24.382Z · score: 3 (3 votes) · LW · GW

We don't have an AGI that doesn't kill us. Having one would be a significant step towards FAI. In fact, "a human-equivalent-or-better AGI that doesn't do anything greatly harmful to humanity" is a pretty good definition of FAI, or maybe "weak FAI".

Comment by danarmak on Open thread, Oct. 10 - Oct. 16, 2016 · 2016-10-10T14:55:47.801Z · score: 2 (2 votes) · LW · GW

We do know it isn't an AI that kills us. Options b and c still qualify.

Comment by danarmak on Open thread, Oct. 10 - Oct. 16, 2016 · 2016-10-10T14:54:19.378Z · score: 4 (4 votes) · LW · GW

Or possibly they are accurate measurements of the rates of Facebook use among these two groups. Maybe it's a good thing if people who are concerned about existential risk do serious things about it instead of participating in a Facebook group.

Comment by danarmak on Putanumonit - Discarding empathy to save the world · 2016-10-08T22:24:54.608Z · score: 0 (0 votes) · LW · GW

I think I understand your point better now, and I agree with it.

My conscious, deliberative, speaking self definitely wants to be rid of akrasia and to reduce time discounting. If I could self modify to remove akrasia, I definitely would. But I don't want to get rid of emotional empathy, or filial love, or the love of cats that makes me sometimes feed strays. I wouldn't do it if I could. This isn't something I derive from or defend by higher principles, it's just how I am.

I have other emotions I would reduce or even remove, given the chance. Like anger and jealousy. These can be moral emotions no less than empathy - righteous anger, justice and fairness. It stands to reason some people might feel this way about any other emotion or desire, including empathy. When these things already aren't part of the values their conscious self identifies with, they want to reduce or discard them.

And since I can be verbally, rationally convinced to want things, I can be convinced to want to discard emotions I previously didn't.

It's a good thing that we're very bad at actually changing our emotional makeup. The evolution of values over time can lead to some scary attractor states. And I wouldn't want to permanently discard one feeling during a brief period of obsession with something else! Because actual changes take a lot of time and effort, we usually only go through with the ones we're really resolved about, which is a good condition to have. (Also, how can you want to develop an emotion you've never had? Do you just end up with very few emotions?)

80% of data in Chinese clinical trials have been fabricated

2016-10-02T07:38:05.278Z · score: 6 (7 votes)

[LINK] Updating Drake's Equation with values from modern astronomy

2016-04-30T22:08:07.858Z · score: 7 (10 votes)

Meetup : Tel Aviv Meetup: solving anthropic puzzles using UDT

2015-07-20T17:37:37.359Z · score: 1 (2 votes)

Meetup : Tel Aviv Meetup: Social & Board Games

2015-07-01T17:53:21.516Z · score: 1 (2 votes)

When does heritable low fitness need to be explained?

2015-06-10T00:05:10.338Z · score: 17 (17 votes)

Meetup : Tel Aviv Meetup: Social & Board Games

2015-05-05T10:07:51.037Z · score: 1 (2 votes)

Meetup : Less Wrong Israel Meetup: Social and Board Games

2015-04-12T14:43:59.290Z · score: 1 (2 votes)

Meetup : Less Wrong Israel Meetup: Social and Board Games

2015-03-30T08:28:10.122Z · score: 2 (2 votes)

Meetup : Tel Aviv: Slightly Less Hard Problems of Consciousness

2015-03-15T21:07:49.159Z · score: 2 (2 votes)

Meetup : Less Wrong Israel Meetup: social and board games

2015-03-06T10:34:01.202Z · score: 1 (2 votes)

Meetup : Less Wrong Israel Meetup: Social and Board Games

2015-01-10T09:48:33.654Z · score: 2 (3 votes)

Meetup : Israel Less Wrong Meetup - Social, Board Games

2014-11-10T14:00:51.188Z · score: 1 (2 votes)

Meetup : Less Wrong Israel Meetup (Herzliya): Social and Board Games

2014-09-04T13:17:23.800Z · score: 1 (2 votes)

[LINK] Behind the Shock Machine: book reexamining Milgram obedience experiments

2013-09-13T13:20:44.900Z · score: 8 (11 votes)

Meetup : LessWrong Israel September meetup

2013-08-06T12:11:12.797Z · score: 0 (1 votes)

Meetup : Israel LW meetup

2013-06-25T15:44:39.851Z · score: 4 (5 votes)

Does evolution select for mortality?

2013-02-23T19:33:12.534Z · score: 12 (18 votes)

I want to save myself

2011-05-20T10:27:25.788Z · score: 20 (22 votes)

Choose To Be Happy

2011-01-01T22:50:56.697Z · score: 20 (33 votes)

Proposal: Anti-Akrasia Alliance

2011-01-01T21:52:31.760Z · score: 18 (21 votes)