Comments

Comment by Dues on On my AI Fable, and the importance of de re, de dicto, and de se reference for AI alignment · 2024-06-24T06:30:57.386Z · LW · GW

Continuing the thread from here: https://deathisbad.substack.com/p/ea-has-a-pr-problem-in-that-it-cares/comments

I agree with you that an AI programmed exactly as the one you describe is doomed to fail. What I didn't understand is why you think any AI MUST be made that way.

Some confusions of mine:

- There is not a real distinction between instrumental and terminal goals in humans. This seems untrue to me: I seem to have terminal goals/desires, like hunger, and instrumental goals, like going to the store to buy food. Telling me that terminal goals don't exist seems to prove too much. Are you saying that complex goals like "don't let humanity die" are, in human brains, in practice instrumental goals built up from simpler desires?

- Because humans don't 'really' have terminal goals, it's impossible to program them into AIs?

- AIs can't be made to have 'irrational' goals, like caring about humans more than themselves. This also seems to prove that humans don't exist. Can't humans care about their children more than themselves? Couldn't AIs be made to value humans the way humans value their children, or even more?

To choose an inflammatory argument, a gay man could think it's irrational for him to want to date men, because that doesn't lead to him having children. But that won't make him want to date women. I have lots of irrational desires that I nevertheless treasure.

  • Less importantly, I also feel like there's an assumption that we would want to create an AI that is only as good as we are, not better than we are. But if we can't even define our current values, then deciding what superior values would be sounds like an even more impossible challenge. Having a superhuman AI that treated us the way we treat chickens would be pretty bad.
Comment by Dues on What can we learn from traditional societies? · 2022-05-12T02:27:58.557Z · LW · GW

This is a late reply, but in some societies, intentionally not killing your enemy is the point. See counting coup in the Americas. https://en.m.wikipedia.org/wiki/Counting_coup

If both sides are obeying a warrior code that lets you gain prestige but limits lethality, then you don't want to defect by escalating the lethality of your attacks, because then your enemies might escalate too. Warrior codes are often part of a prisoner's dilemma.
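The escalation logic above has the classic prisoner's dilemma shape. A minimal sketch, with hypothetical payoff numbers of my own (the point is only the ordering: mutual restraint beats mutual escalation, but each side is tempted to defect):

```python
# Payoffs (my side's score, their side's score): higher is better.
RESTRAIN, ESCALATE = "restrain", "escalate"
payoffs = {
    (RESTRAIN, RESTRAIN): (3, 3),  # both gain prestige, few deaths
    (RESTRAIN, ESCALATE): (0, 5),  # the escalator wins big, once
    (ESCALATE, RESTRAIN): (5, 0),
    (ESCALATE, ESCALATE): (1, 1),  # lethal war: worse for everyone
}

def best_response(their_move):
    """My side's myopic best reply, ignoring future retaliation."""
    return max((RESTRAIN, ESCALATE),
               key=lambda my_move: payoffs[(my_move, their_move)][0])

# In a one-shot raid, each side is tempted to escalate no matter what
# the other does...
assert best_response(RESTRAIN) == ESCALATE
assert best_response(ESCALATE) == ESCALATE
# ...which is why the code only holds because raids repeat and
# escalation invites escalation in return.
```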

Comment by Dues on Boring Advice Repository · 2017-09-14T03:49:02.054Z · LW · GW

Put a scarf or neck warmer over your mouth. Your throat will thank you. If that's too warm, you can use one of those little medical masks or chew a piece of gum.

Comment by Dues on Intrinsic properties and Eliezer's metaethics · 2017-09-11T03:34:59.600Z · LW · GW

P.S. I stand by everything I said, but I'm pretty sure I only finished the reply so I could make an Alienizer pun.

Comment by Dues on Intrinsic properties and Eliezer's metaethics · 2017-09-11T03:28:18.102Z · LW · GW

To explain my perspective, let me turn your example around by using a fictional alien species called humans. Instead of spending their childhoods contemplating the holy key, these humans would spend some of their childhoods being taught to recognize a simple shape with three sides, called a 'triangle'.

To them, when the blob forms a holy key shape, it would mean nothing, but if it formed a triangle they would recognize it immediately!

Your theory becomes simpler when you have a triangle, a key, a lock that accepts a triangle, and lock that accepts a holy key. The key and the triangle are the theories and therefore feel intrinsic. The locks (our brains) are the reality and therefore feel extrinsic.

We want to have a true theory of morality (a true holy key ;)). But the only tool we have to deduce the shape of the key is our own moral intuitions. Alienizer's theory of metaethics is that you look at the evidence until you have a theory that is satisfying on reflection and at no point requires you to eat babies or anything like that. Some people find this bottom-up approach to ethics unsatisfying, but we've been trying the top-down approach (propose a moral system, then see if it works perfectly in the real world) for a long time without success.

I think this should satisfy your intuitions: our brains seem to accept a morality key because they are locks; morality doesn't change even if your mind changes (because your changed mind would just fit a new key); and our morality was shaped by evolution yet is somehow both abstract and real. It's also why I think calling Alienizer a relativist is silly. He made it his job to figure out whether morality is triangle-shaped, key-shaped, or fourth-dimensional-croissant-shaped, so he could make an AI with ethics of that shape.

Comment by Dues on By Which It May Be Judged · 2015-11-12T04:13:37.491Z · LW · GW

Humans value some things more than others. Survival is the bedrock human value (yourself, your family, your children, your species), followed by things like pleasure, the lives of others, and the lives of animals. Every human weighs these things a little differently, and we're all bad at the math. But on average most humans weigh the important things about the same. There is a reason Eliezer is able to keep going back to the example of saving a child.

Comment by Dues on Open thread, Oct. 26 - Nov. 01, 2015 · 2015-11-07T01:21:41.643Z · LW · GW

Most acne medications work by drying out your skin, not by being antibacterial or antiviral. The infections are a symptom of greasy skin, and I think the claim is that skin produces less grease when it has the right bacteria on it. But that still leaves me only 50% confident that it would work under optimal conditions.

Comment by Dues on Open thread, Oct. 26 - Nov. 01, 2015 · 2015-11-05T03:07:34.656Z · LW · GW

Their ads say that AO spray makes your sweat smell less bad and helps clear up acne. I've had zits since I hit puberty, and a product that cuts down on the amount of caustic chemicals I need to rub all over my body would be great. I also commute to work by bicycle in 100+ degree Fahrenheit weather, and my office has no shower, so if AO actually cuts my BO then it might be a good investment.

Comment by Dues on Open thread, Oct. 26 - Nov. 01, 2015 · 2015-11-02T02:09:00.475Z · LW · GW

I'm not sure if I'm reading you right, but if I am, you are saying that it takes a long time to kick in, so I would need to give up bathing while I wait. That would also mean giving up swimming, since there's no point trying to replace the good bacteria if they just get washed away by a chlorinated pool and the shower that rinses off the chlorine afterward.

I did read one article (after I posted) where the reporter skipped showering for a month then took one shower and washed it all away (according to the bacterial swabs he took).

Comment by Dues on Open thread, Oct. 26 - Nov. 01, 2015 · 2015-10-31T03:24:23.085Z · LW · GW

Does it work as advertised? Does it kind of work, but only a little bit? Is it basically a really expensive placebo? These are the kinds of questions I want answered. I doubt anyone here would know about this product specifically, but maybe someone knows of a site like crazymeds.com for health stuff.

Comment by Dues on Open thread, Oct. 26 - Nov. 01, 2015 · 2015-10-30T02:20:31.236Z · LW · GW

I know a lot of LessWrongers are big fans of nootropics, and y'all could probably recommend some forums for asking about the effectiveness of strange drugs. Does anyone know of forums for strange health products? I was thinking of trying AO+ body spray, but at $50 for a month's supply I want to know if it is effective before I buy it. AO body spray is a new product by an MIT startup that is supposed to replace the good bacteria on your skin that bathing with soap removes. These bacteria are supposed to break down your sweat so you smell better and have healthier skin. https://shop.motherdirt.com/product/ao-mist-2 If this stuff works, I think that lends credence to the historical viewpoint that bathing is unhealthy. But right now we only have a few case studies and no controlled trials.

Comment by Dues on 3 Levels of Rationality Verification · 2015-10-30T01:22:55.757Z · LW · GW

I suspect you are right. But still, lying and tricking people is a skill, and I know where I can go to practice it (debate clubs). Are there courses for the skill of detecting lies and tricks? All I can think of offhand are those FBI courses on microexpressions, and maybe playing lots of poker. It feels like there's a currently unfilled market for defensive techniques.

Comment by Dues on Religion's Claim to be Non-Disprovable · 2015-05-25T02:34:12.090Z · LW · GW

I've heard the story of Elijah and the Priests of Baal told as one of the first experimental swindles, rather than the first experiment. It goes something like this:

Elijah: Pours 'water' on his pyre.
Pyre: Catches on fire.
Priest of Baal: "Wait, was that water or oil? If I pour some of your 'water' on my pyre, maybe it will light too..."
Elijah: "Put them to death before they can invent repeatability testing."

The water being oil part is so obvious that it reads less like a 'God turned water into fire' story and more like a 'look how dumb those Baal worshipers were, we totally tricked them' story. I've heard it told both ways though.

Comment by Dues on [deleted post] 2015-03-21T18:18:39.756Z

Not really. Is there a lot to the book beyond the Wikipedia summary?

Comment by Dues on [deleted post] 2015-03-21T18:15:49.374Z

It's kind of a cross between a job site and a review site. In theory.

Comment by Dues on [deleted post] 2015-03-19T02:17:08.451Z

> AI doesn't seem to be a single problem but a label for a broad field.

I don't really want to debate definitions. But that is exactly why I want the sorter to break down 'big problems' like AI into 'little problems' like neural networks, search, etc.

> How do you know that business thought it was very important?

Because people keep spending money on marginal user interface improvements that have added up to big differences. The easier an interface is to use, the more people are able to use it, and the more people will buy it.

[Here is a guide to graphical interfaces over time](http://toastytech.com/guis/guitimeline.html). Start about 30 years ago, when the Macintosh comes out.

Comment by Dues on [deleted post] 2015-03-18T05:17:46.911Z

I have no idea how much stuff didn't pan out, but we've been steadily chipping away at AI since the '40s, and I can't imagine that AI was considered unimportant. We also made gigantic strides on user friendliness/interfaces. I'm not sure if academia thought that was important, but consumers and businesses did.

Comment by Dues on [deleted post] 2015-03-17T03:19:41.530Z

Good advice. Since I wanted a lot of things to be weighted when determining the search order, I considered just hiding all the complexity 'under the hood'. But if people don't know what they are voting on they might be less inclined to vote at all.

Comment by Dues on [deleted post] 2015-03-17T02:53:44.652Z

haha. Yeah, later, on reflection I understood. I promise to not only show the 'most important' problems. The marginal utility of working on a problem is higher when no one else is doing it. But if there are neglected important problems then I want to find them.

Comment by Dues on [deleted post] 2015-03-16T02:51:25.722Z

That's a good point. I shouldn't just list skills by the goal of all the similar projects but also by the individual projects. If one Linux distribution is way easier to contribute to than the others, users should know that.

Comment by Dues on [deleted post] 2015-03-16T02:46:00.384Z

I totally agree. But in the job market, I have search tools to find the best job close to where I live, within my skills, and in my salary range to maximize my comparative advantage. And don't even get me started on all the tools and advice you can get for the stock market. But there is currently no tool for maximizing the comparative advantage of volunteer work. The good news for me is that there are a lot of similar tools to what I want to do, so I don't have to be terribly creative.

You did give me an idea. Let me edit my post.

Comment by Dues on Pratchett, Rationality, and Winning · 2015-03-15T00:51:07.784Z · LW · GW

If someone came to LessWrong and asked, "I'm an average student, I don't know what to do with my life. What should I do?" then I would probably recommend studying hard, getting a good job, and trying to figure out what they enjoyed and were good at so they could specialize. Good general advice when I don't know the person.

On the other hand, if Young Pratchett had asked the question: "I'm a bad student, but I love writing and I'm obsessed with the news. What should I do?" I would probably recommend concentrating on his writing classes and getting a job that involved the news and writing, like the newspaper job he got. Advice tailored to the person.

You don't win by competing with people who are better than you at something you are bad at. You win by finding what is important to you, what you enjoy, and what you are best at and doing that as well as you can. Giving the same advice to everyone seems like the way to lose at giving good advice.

Comment by Dues on Stupid Questions March 2015 · 2015-03-15T00:25:57.816Z · LW · GW

Is there a reason why we have trouble defining counterfactuals? Does this only apply to defining counterfactuals mathematically?

Intuitively a counterfactual/hypothetical situation seems like a simulation to me. But I've heard a couple times on the site that we don't know how to define counterfactuals in AI, so I feel like I must be missing something.

Comment by Dues on Stupid Questions March 2015 · 2015-03-14T23:48:20.072Z · LW · GW

I'm going to second the thing about acne and add a recommendation: if you have skin problems, see a dermatologist. They might be able to fix your problem, and then you won't need acne makeup.

Comment by Dues on Useful Questions Repository · 2015-03-05T05:28:33.111Z · LW · GW

This reminds me of the times when I have to compile reports for users from our database. I started requiring that everyone give me a reason why they want the report. Most of the users aren't technical people, so half the time I need to give them exactly what they asked for, and half the time I need to give them something completely different. I've also started preemptively adding the reason I want something into my own questions, and I've stopped bothering to guess why people want things: now I go straight to asking.

Communication is hard.

Comment by Dues on The Self-Reinforcing Binary · 2014-09-28T20:53:20.826Z · LW · GW

"Even slavery?" Seems like an amusing comeback until you put it into the context of the societies where it originated. In the ancient world, food was often very scarce. If you went to war with a group of people and you kept them as prisoners, they would starve to death because 9 out of 10 people were involved in food production.

It's easy to say that slavery was a bad tradition now that we have a tradition that says 'slavery is always bad and evil', but let's say you found yourself in a hypothetical post apocalypse. If you were actually making a choice between slaughtering a rival band of survivors and putting them to work (basically slavery), are you sure that you wouldn't start a slavery tradition?

Comment by Dues on Changing Emotions · 2014-08-15T01:30:20.781Z · LW · GW

Eliezer, it seems crazy to me to think that we would need a second brain in order to not throw up. Couldn't we just take advantage of the cognitive dissonance architecture built into our brains? I have personal anecdotal evidence: I remember having disgust reactions that I no longer experience. (You can thank the Internet for most of that.) I agree that men's and women's brains are arranged differently, but saying that you need a whole separate brain to avoid disgust reactions seems like protesting too much.

Comment by Dues on Tell Your Rationalist Origin Story · 2014-08-05T05:02:36.856Z · LW · GW

When I was a child, my parents took me to church a few times. My brother and I always pitched a fit, so eventually our parents gave up. I would love to say that was the start of my journey and that we did it because the things they tried to teach us didn't make enough sense, but that would be a lie. The real sin that the local church made was to be super boring. So with my sanity waterline firmly unraised, I started my own religion. It had aliens, because aliens were cool. I even got a convert. (You are now free to laugh at middle school me.)

Eventually my friend decided that he didn't want to play the game anymore. (This also included an awkward conversation where he asked if I actually believed what we were talking about.) I remember holding firm to my beliefs because admitting that I was wrong would be embarrassing. This was my first taste of my brain really going crazy and rationalizing 'dangerous thoughts' away. My first steps happened when my brain finally calmed down and let rationality take hold: I realized that I never wanted to do something like that again and that I needed to watch my thoughts.

(I also learned that being a cult leader is super fun. If you ever need a priest for the Bayesian Conspiracy, I will be there with a funny robe on.)

Comment by Dues on Ask and Guess · 2014-08-02T15:23:25.047Z · LW · GW

Having different levels of ask cultures makes so much sense to me now that I've heard about it. It explains why I felt creeped out the first couple of times I heard a woman say, "There's nothing less sexy than a man who asks to kiss you."

If rationalists should win, then we should have a secret signal that lets others know whether we want to be asked or guessed at. The older I get, the more I want some of Yvain's Rakiovik status beads to tell people, "Ask me anything, please criticize me, and don't worry about offending me." On the Internet, maybe those 'how to treat me' labels go in my profile?

Comment by Dues on How do you notice when you're rationalizing? · 2014-07-14T04:40:24.428Z · LW · GW

Whenever I start to get angry and defensive, that's a sign that I'm probably rationalizing.

If I notice, I try to remind myself that humans have a hard time changing their minds when angry. Then I try to take myself out of the situation and calm down. Only then do I try to start gathering evidence to see if I was right or wrong.

My source on 'anger makes changing your mind harder' was 'How to Win Friends and Influence People'. I have not been able to find a psychology experiment to back me up on that, but it has seemed to work out for me in real life. It suggests that, if you think someone else is rationalizing, then the first step to changing their mind is to get them to be calm. It also seems to suggest that calming yourself down and distancing yourself from whatever generated the rationalization (a fight, a peer group, your parents, etc.) is what you need to do to work through a possible rationalization.

Comment by Dues on SotW: Be Specific · 2014-07-12T03:36:07.129Z · LW · GW

That depends on the scoring system. If the judges grade better answers exponentially higher, then small increments are a losing choice.

Comment by Dues on Simultaneously Right and Wrong · 2014-07-08T04:58:57.363Z · LW · GW

I wonder if this bias is really a manifestation of holding two contradictory ideas (a la belief in belief). I wonder because, when past me was making this exact mistake, I noticed that it tended to involve a wide range of possible skill estimates coupled with a low desire for accuracy.

If I think that my IQ is between 80 and 100, then I can have it both ways. I don't know it for sure, so I can brag, "Oh, my IQ is somewhere below 100," because there is still a chance it's 100. However, if I'm about to be presented with an IQ test, I'm tempted to be humble and say 80, because the test will probably prove me wrong in a positive direction. That way I get to seem humble and smart, rather than overconfident and dumb.

Why are we surprised that the subjects were still trying to act in high status ways when they weren't being watched? This isn't like an experiment where I'm more likely to steal a candy bar if I'm anonymous. My reward for acting high status when no one is watching is that I get to think of myself as a high status actor even when other people aren't watching. I always have an audience of at least one person: myself.

Comment by Dues on 3 Levels of Rationality Verification · 2014-06-09T03:51:04.724Z · LW · GW

If rhetoric is the dark arts, then rationalists need a defense against the dark arts.

I've always seen debates as a missed opportunity for rationality training/testing. Not for the debaters, but for the audience.

When you have two people cleverly arguing for an answer, that is an opportunity for the audience to see if they can avoid being suckered in. To keep things interesting, you could randomize the debate so that one, both, or neither debater is telling the truth. (Of course, in the toughest debates both debaters are partially right, and the audience needs to work out the real answer.) And if we want to keep the students from compartmentalizing what they have learned, we probably need to make the debates a mix of real-world and abstract topics. We might also have easy, medium, and hard debates, without telling the audience beforehand which is which.

I think this would be a useful thing, because lots of places already have debate clubs and public debates. All we would need is an audience game running in the background.

I think the most helpful part of the lesson would come after the debate, once the audience has been scored on their confidence intervals. If we can get the debaters to explain the rhetorical tricks they used, the audience can recognize them in the future and hopefully not fall for them a second time.
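The scoring step could be as simple as a proper scoring rule on each audience member's stated probability. A minimal sketch, assuming a logarithmic scoring rule (the comment doesn't commit to any particular one):

```python
import math

def log_score(credence, claim_was_true):
    """Logarithmic score for a stated probability that the claim is true.
    0 is a perfect score; more negative is worse. Credence must be
    strictly between 0 and 1, so certainty can't dodge the penalty."""
    assert 0.0 < credence < 1.0
    p = credence if claim_was_true else 1.0 - credence
    return math.log(p)

# An audience member who was 90% sure of the true side outscores one
# who was only 60% sure...
assert log_score(0.9, True) > log_score(0.6, True)
# ...and confident belief in the debater who was lying is punished
# harder than mild belief, which is exactly the incentive we want.
assert log_score(0.9, False) < log_score(0.6, False)
```

A log score also discourages gaming: unlike "percent correct," you can't win by rounding every hunch up to certainty.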