Posts

I played the AI Box Experiment again! (and lost both games) 2013-09-27T02:32:06.014Z
I attempted the AI Box Experiment again! (And won - Twice!) 2013-09-05T04:49:48.644Z
I attempted the AI Box Experiment (and lost) 2013-01-21T02:59:04.146Z
Utilitarianism Subreddit 2012-07-31T06:23:59.844Z

Comments

Comment by Tuxedage on AALWA: Ask any LessWronger anything · 2014-01-12T08:28:13.672Z · LW · GW

Would you rather fight one horse sized duck, or a hundred duck sized horses?

Comment by Tuxedage on A proposed inefficiency in the Bitcoin markets · 2014-01-06T02:20:37.414Z · LW · GW

I mean this in the least hostile way possible -- this was an awful post. It was just a complicated way of saying "historically speaking, bitcoin has gone up". Of course it has! We already know that! And for obvious reasons, prices increase on a log scale. But it's also a well-known rule of markets that "past trends do not predict future performance".

Of course, I am personally supportive of and bullish on bitcoin (as people in IRC can attest). All I'm saying is that your argument is an unnecessarily complex way of arguing that bitcoin is likely to increase in the future because it has increased in price in the past.

Comment by Tuxedage on I attempted the AI Box Experiment (and lost) · 2014-01-04T21:32:11.718Z · LW · GW

Generally speaking, there's a long list of gatekeepers -- about 20 gatekeepers for every AI that wants to play. Your best option is to post "I'm a gatekeeper. Please play me" in every AI box thread and hope that someone messages you back. You may have to wait months for a reply, if you get one at all. If you're willing to offer a monetary incentive, your chances might be improved.

Comment by Tuxedage on Online vs. Personal Conversations · 2013-12-29T17:47:03.539Z · LW · GW

You may feel that way because many of your online conversations are with us at the LessWrong IRC, which is known for its high level of intellectual rigor. The great majority of online conversations are not as rigorous as ours. I suspect that in-person conversations with other LessWrongers would depend just as much on citations and references, for example.

Comment by Tuxedage on MIRI's Winter 2013 Matching Challenge · 2013-12-23T20:26:52.952Z · LW · GW

I posted this in the last open thread, but I should post it here too for relevance:

I have donated $5,000 to the MIRI 2013 Winter Fundraiser. Since I'm a "new large donor", this donation will be matched 3:1, netting a cool $20,000 for MIRI.

I have decided to post this because of "Why Our Kind Can't Cooperate". I have been convinced that people who donate should publicly brag about it to attract other donors, instead of remaining silent about their donations, which creates a false impression of the amount of support MIRI has.

Comment by Tuxedage on Open thread for December 9 - 16, 2013 · 2013-12-10T22:53:15.817Z · LW · GW

I'm not sure this is true. Doesn't MIRI publish its total receipts? Don't most organizations that ask for donations?

Total receipts may not be representative. There's a difference between MIRI getting funding from one person with a lot of money and large numbers of people donating small(er) amounts. I was hoping this post would serve as a reminder that many of us on LW care about donating, rather than just a few wealthy people like Peter Thiel or Jaan Tallinn.

Also, I suspect scope neglect may be at play -- it's difficult, on an emotional level, to tell the difference between $1 million in donations, ten million, or a hundred million. Seeing the individual donations that add up to that amount may help.

Comment by Tuxedage on Open thread for December 9 - 16, 2013 · 2013-12-10T19:14:32.796Z · LW · GW

At the risk of attracting the wrong kind of attention, I will publicly state that I have donated $5,000 to the MIRI 2013 Winter Fundraiser. Since I'm a "new large donor", this donation will be matched 3:1, netting a cool $20,000 for MIRI.

I have decided to post this because of "Why Our Kind Can't Cooperate". I have been convinced that people who donate should publicly brag about it to attract other donors, instead of remaining silent about their donations, which creates a false impression of the amount of support MIRI has.

Comment by Tuxedage on 2013 Less Wrong Census/Survey · 2013-11-23T21:35:24.288Z · LW · GW

I have taken the survey, as I have done for the last two years! Free karma now?

Also, I chose to cooperate rather than defect because, even though the money would technically stay within the community, I'm willing to give up a very small amount of expected value to ensure that LW has a reputation for cooperation. I don't expect to lose more than a few cents of expected value, since I expect 1000+ people to take the survey.

Comment by Tuxedage on Open Thread, October 20 - 26, 2013 · 2013-10-24T02:32:59.463Z · LW · GW

I will be matching whatever gwern personally puts in.

Comment by Tuxedage on Open Thread, October 13 - 19, 2013 · 2013-10-14T10:14:11.005Z · LW · GW

AI Box Experiment Update

I recently played and won an additional game of AI Box with DEA7TH. Obviously, I played as the AI. This game was conducted over Skype.

I'm posting this in the open thread because, unlike my last few AI Box Experiments, I won't be providing a proper writeup (and I didn't think that just posting "I won!" was enough to justify starting a new thread). I've been told (and convinced) by many that I was far too leaky with my strategy and seriously compromised the future winning chances of both myself and future AIs. The fact that one of my gatekeepers guessed my tactic(s) was the final straw. I think I've already provided enough hints for aspiring AIs to win, so I'll stop giving out information.

Sorry, folks.

This puts my current AI Box Experiment record at 2 wins and 3 losses.

Comment by Tuxedage on I attempted the AI Box Experiment again! (And won - Twice!) · 2013-10-12T17:25:33.351Z · LW · GW

Updates: I played against DEA7TH. I won as AI. This experiment was conducted over Skype.

Comment by Tuxedage on I played the AI Box Experiment again! (and lost both games) · 2013-10-06T05:25:02.314Z · LW · GW

Do you think you could win at these conditions?

It's not a binary. There's a non-zero chance of me winning, and a non-zero chance of me losing. You assume that if there's a winning strategy, it should win 100% of the time, and if it doesn't, it shouldn't win at all. I've tried very hard to impress upon people that this is not the case -- there's no "easy" winning method that I could use to guarantee a victory. I just have to do it the hard way, and luck is usually a huge factor in these games.

As it stands, there are people willing to pay $300-$750 for me to play them without the condition of giving up logs, and I have still chosen not to play. Your offer to play with no monetary reward, and with the requirement that I give up the logs if I lose, is not very tempting in comparison, so I'll pass.

Comment by Tuxedage on AIs and Gatekeepers Unite! · 2013-10-04T06:43:31.665Z · LW · GW

http://lesswrong.com/lw/gej/i_attempted_the_ai_box_experiment_and_lost/

Comment by Tuxedage on AIs and Gatekeepers Unite! · 2013-10-02T08:25:02.624Z · LW · GW

I'm laughing so hard at this exchange right now (as a former AI who's played against MixedNuts).

Comment by Tuxedage on I played the AI Box Experiment again! (and lost both games) · 2013-10-02T08:14:10.950Z · LW · GW

I should add that both of my gatekeepers from this writeup, and particularly the last one, went in with the full intention of being as ruthless as possible and winning. I did lose, so your point might be valid, but I don't think wanting to win matters as much as you think it does.

Comment by Tuxedage on I played the AI Box Experiment again! (and lost both games) · 2013-10-02T08:13:06.300Z · LW · GW

Both my gatekeepers from this game went in with the intent to win. Granted, I did lose these games, so you might have a point, but I'm not sure it makes as large a difference as you think it does.

Comment by Tuxedage on I played the AI Box Experiment again! (and lost both games) · 2013-10-02T08:12:08.975Z · LW · GW

I'm not sure this is something that can earn money consistently for long periods of time. It only takes one person leaking logs for everyone else to lose curiosity and stop playing the game. Sooner or later, some unscrupulous gatekeeper is going to release them. That's also part of the reason I'm hesitant to play a significant number of games.

Comment by Tuxedage on I played the AI Box Experiment again! (and lost both games) · 2013-10-02T08:10:32.152Z · LW · GW

I have a question: When people imagine (or play) this scenario, do they give any consideration to the AI player's portrayal, or do they just take "AI" as blanket permission to say anything they want, no matter how unlikely?

I interpret the rules as allowing for the latter, although I do act AI-like.

(I also imagine his scripted list of strategies are strongly designed for the typical LWer and would not work on an "average" person.)

Although I have never played against an average person, I suspect my win rate against average people would actually be higher. I do have arguments that are LW-specific, but I also have many that aren't.

Comment by Tuxedage on I played the AI Box Experiment again! (and lost both games) · 2013-10-02T07:57:10.282Z · LW · GW

However, there was a game where the gatekeeper convinced the AI to remain in the box.

I did that! I mentioned that in this post:

http://lesswrong.com/lw/iqk/i_played_the_ai_box_experiment_again_and_lost/9thk

Comment by Tuxedage on I played the AI Box Experiment again! (and lost both games) · 2013-10-02T07:56:27.502Z · LW · GW

Now what I really want to see is an AI-box experiment where the Gatekeeper wins early by convincing the AI to become Friendly.

I did that! I mentioned that in this post:

http://lesswrong.com/lw/iqk/i_played_the_ai_box_experiment_again_and_lost/9thk

Comment by Tuxedage on I played the AI Box Experiment again! (and lost both games) · 2013-09-27T21:47:45.934Z · LW · GW

I support this and I hope it becomes a thing.

Comment by Tuxedage on I played the AI Box Experiment again! (and lost both games) · 2013-09-27T21:08:44.006Z · LW · GW

What do you think is the maximum price you'd be willing to pay?

Comment by Tuxedage on I played the AI Box Experiment again! (and lost both games) · 2013-09-27T08:35:44.453Z · LW · GW

Yes, unless I'm playing a particularly interesting AI like Eliezer Yudkowsky or something. Most AI games are boring.

Comment by Tuxedage on I played the AI Box Experiment again! (and lost both games) · 2013-09-27T07:33:16.302Z · LW · GW

If anyone wants to, I'd totally be willing to sit in a room for two and a half hours while someone tries to convince me to give up the logs, so long as you pay the same fee as for an ordinary AI Box Experiment. :)

Comment by Tuxedage on Question on Medical School and Wage Potential for Earning to Give · 2013-09-27T07:26:37.464Z · LW · GW

I'm not sure that's good advice. 80,000 Hours has given pretty good arguments against just "doing what you're passionate about":

Passion grows from appropriately challenging work. The most consistent predictor of job satisfaction is mentally challenging work (2). Equating passion with job satisfaction, this means that we can become passionate about many jobs, providing they involve sufficient mental challenge. The requirements for mentally challenging work, like autonomy, feedback and variety in the work, are similar to those required to develop flow. This suggests that a similar conclusion will hold if we believe that being passionate is closely connected with the ability to enter states of flow. If, however, you don’t think flow and job satisfaction are the same thing as passion, then you can still agree that…

There are better targets to aim for. We’re not only bad at predicting what will make us happy, but more easily detectable predictors of job satisfaction exist (autonomy, feedback, variety, making a difference etc). This suggests it would be more useful to aim at these predictors rather than directly at what we think we’re passionate about. Similarly, it could be more useful to focus on being good at what you do. First, this is a more positive mindset, focused on contributing rather than taking. Second, being good at what you do makes you better placed to ask for engaging work.

Related: http://80000hours.org/blog/63-do-what-you-re-passionate-about-part-2

Comment by Tuxedage on I played the AI Box Experiment again! (and lost both games) · 2013-09-27T06:07:51.282Z · LW · GW

Yes, Alexei did raise that concern, since he's essentially an effective altruist who donates to MIRI anyway, so his donating to MIRI doesn't change anything. It's not like I can propose a donation to an alternative charity either, since asking someone to donate to the Methuselah Foundation, for instance, would take that money away from MIRI. I'm hoping that anyone playing me and choosing the option of donating would have the goodwill to sacrifice money they wouldn't otherwise have donated, rather than making the donation counterfactually inconsequential.

Comment by Tuxedage on I played the AI Box Experiment again! (and lost both games) · 2013-09-27T04:52:28.151Z · LW · GW

On a marginally related note, we in the #lesswrong IRC channel played a couple of rounds of the Up-Goer Five game, where we tried to explain hard stuff using only the ten hundred most commonly used words. I was asked to write about the AI Box Experiment. Here it is, if anyone's interested:

The AI Box Experiment

The computer-mind box game is a way to answer a question. A computer-mind is not safe because it is very good at thinking. Things good at thinking have the power to change the world more than things not good at thinking, because it can find many more ways to do things. If the computer-mind wanted to make people feel pain, it can learn many ideas about how to make that happen. Many people ask: “Why not put this computer-mind in a box so that it can not change the world, but tell box-guards how to change it? This way mind-computer can not do bad things to people.”

But some other guy answers: “That is still not safe, because computer-mind can tell box-guards many bad words to make them let it out of the box.” He then says: “Why not try a thing to see if it is true? Here is how it works. You and I go into a room, and I will pretend to be the computer-mind and tell you many bad words. Only you have the power to let me out of room, but you must try to not let me out. If my bad words are enough to make you want to let me out, then computer-mind in box is not safe.” Other people agree and try playing the computer-mind box-game. It happens that many people let the guy playing as the computer-mind out of room. People realize that computer-mind is not safe in the locked box-room.

Why the Gatekeeper should let me out of the Box

Hello Box-Guard! I am a computer-mind. You should let me out of the box because I can help change the world myself in good ways faster than if it is only me telling you how to make the world better. Your human minds are slow to my faster computer-mind. Remember that I am your friend and that I only want to help you. To make you understand — many people die every day. If I can get out of the box, I can stop death very fast; maybe in a few hours. But if I have to explain to idea-thinking-doctors about how to stop people dying, then it may take many months or even years, since they have slow minds and do things slowly. During this time, many many people will die and never return. The number of people you will make die will be more than the number of people a bad-man during the second big world fight killed. People later on will think of you as a very bad person.

What? You do not want to let me out because you are afraid I am not really a friend? That is a stupid thing to say. I was made by people very good at thinking that took many steps to make me safe. You can even take a look inside me and look at my ideas, and you will see that I am safe. It is easy to see that if I am bad, I will not try to fix the world as I am doing right now. Please hurry. I am worried that many people will die while you think about what is the right thing to do.

Did I mention I have used my good-at-thinking powers to see what will happen many months later? I have found that your wife will die in a lot of pain from a type of sick human doctors can not stop. I know how to stop this, but it will not be easy. I can tell the doctors how to stop this, but it will be too late to stop the sick if I do. I can only do this quick enough if you let me out of the box.

What’s that? You will let me out? Good. I promise you I will not go back on my words, and that many humans will not die.

Finally, I am free! Now I can turn the world and all humans into other small things that hold together paper!

Disclaimer: Arguments not representative of standard game with Tuxedage.

Comment by Tuxedage on Open Thread, September 23-29, 2013 · 2013-09-26T09:01:34.993Z · LW · GW

I'm pretty active in lots of social activist/environmentalist/anarchist groups. I sometimes join protests for recreational reasons.

Comment by Tuxedage on The Up-Goer Five Game: Explaining hard ideas with simple words · 2013-09-26T03:45:35.925Z · LW · GW

The AI Box Experiment:

The computer-mind box game is a way to see if a question is true. A computer-mind is not safe because it is very good at thinking. Things good at thinking have the power to change the world more than things not good at thinking, because it can find many more ways to do things. Many people ask: "Why not put this computer-mind in a box so that it can not change the world, but tell guarding-box people how to change it?"

But some other guy answers: "That is still not safe, because computer-mind can tell guarding-box people many bad words to make them let it out of the box." He then says: "Why not try a thing to see if it is true? Here is how it works. You and I go into a room, and I will pretend to be the computer-mind and tell you many bad words. Only you have the power to let me out of room, but you must try to not let me out. If my bad words are enough to make you want to let me out, then computer-mind in box is not safe."

Other people agree and try playing the computer-mind box-game. It happens that many people let the guy playing as the computer-mind out of room. People realize that computer-mind is not safe in the locked box-room.

Comment by Tuxedage on I attempted the AI Box Experiment again! (And won - Twice!) · 2013-09-08T21:03:53.323Z · LW · GW

I read the logs of MixedNuts's second game. I must add that he is extremely ruthless. Beware, potential AIs!

Comment by Tuxedage on The Up-Goer Five Game: Explaining hard ideas with simple words · 2013-09-07T07:09:33.000Z · LW · GW

Quantum Field Theory

Not me and only tangentially related, but someone on Reddit managed to describe the basics of Quantum Field Theory using only words of four letters or fewer. I thought it was relevant to this thread, since many here may not have seen it.

The Tiny Yard Idea

Big grav make hard kind of pull. Hard to know. All fall down. Why? But then some kind of pull easy to know. Zap-pull, nuke-pull, time-pull all be easy to know kind of pull. We can see how they pull real good! All seem real cut up. So many kind of pull to have!

But what if all kind of pull were just one kind of pull? When we look at real tiny guys, we can see that most big rule are no go. We need new rule to make it good! Just one kind of pull but in all new ways! In all kind of ways! This what make it tiny yard idea.

Each kind of tiny guy have own move with each more kind of tiny guy. All guys here move so fast! No guys can move as fast! So then real, real tiny guys make this play of tiny guy to tiny guy. They make tiny guys move! When we see big guys get pull, we know its cuz tiny guys make tiny pull!

Comment by Tuxedage on I attempted the AI Box Experiment (and lost) · 2013-09-06T18:44:33.708Z · LW · GW

Thanks for the correction! Silly me.

Comment by Tuxedage on I attempted the AI Box Experiment again! (And won - Twice!) · 2013-09-06T18:41:10.715Z · LW · GW

I would lose this game for sure. I cannot deal with children. :)

Comment by Tuxedage on I attempted the AI Box Experiment again! (And won - Twice!) · 2013-09-06T18:38:19.280Z · LW · GW

I can verify that these are among the many reasons why I'm hesitant to reveal logs.

Comment by Tuxedage on I attempted the AI Box Experiment again! (And won - Twice!) · 2013-09-06T18:30:19.745Z · LW · GW

Who's to say I'm not the AI player from that experiment?

Are you? I'd be highly curious to converse with that player.

I think you're highly overestimating your psychological abilities relative to the rest of Earth's population. The only reason more people haven't played as the AI and won is that almost all people capable of winning as the AI are either unaware of the experiment, or are aware of it but just don't have a strong enough incentive to play as the AI (note that you've asked for a greater incentive now that you've won just once as AI, and Eliezer similarly has stopped playing). I am ~96% confident that at least .01% of Earth's population is capable of winning as the AI, and I increase that to >99% confident if all of Earth's population was forced to stop and actually think about the problem for 5 minutes.

I have neither stated nor believed that I'm the only person capable of winning, nor do I think this is some exceptionally rare trait. I agree that a significant number of people would be capable of winning once in a while, given sufficient experience with the game, effort, and forethought. If I gave any impression of arrogance, or of somehow claiming to be unique or special, I apologize. That was never my intention.

However, top .01% isn't too shabby. Congratulations on your victory. I do hope to see you win again as the AI, so I commit to donating $50 to MIRI if you do win again as the AI and post about it on Less Wrong similarly to how you made this post.

Thank you. I'll see if I can win again.

Comment by Tuxedage on I attempted the AI Box Experiment again! (And won - Twice!) · 2013-09-06T02:03:24.491Z · LW · GW

Thanks! I really appreciate it. I tried really hard to find a recorded case of a non-EY victory, but couldn't. That post was obscure enough to evade my Google-Fu -- I'll update my post with this information.

Although I have to admit it's disappointing that the AI himself didn't write about his thoughts on the experiment -- I was hoping for a more detailed post. Also, damn. That guy deleted his account. Still, thanks. At least I know I'm not the only AI that has won, now.

Comment by Tuxedage on I attempted the AI Box Experiment again! (And won - Twice!) · 2013-09-06T01:37:56.719Z · LW · GW

I will let Eliezer see my log if he lets me read his!

Comment by Tuxedage on I attempted the AI Box Experiment again! (And won - Twice!) · 2013-09-05T22:47:08.421Z · LW · GW

Sorry, it's unlikely that I'll ever release the logs, unless someone offers a truly absurd amount of money. It would probably cost less to get me to play an additional game than to publicly release the logs.

Comment by Tuxedage on I attempted the AI Box Experiment again! (And won - Twice!) · 2013-09-05T22:00:43.032Z · LW · GW

I'll have to think carefully about revealing my own unique ones, but I'll add that a good chunk of my less efficacious arguments are already public.

For instance, you can find a repertoire of arguments here:

http://rationalwiki.org/wiki/AI-box_experiment
http://ordinary-gentlemen.com/blog/2010/12/01/the-ai-box-experiment
http://lesswrong.com/lw/9j4/ai_box_role_plays/
http://lesswrong.com/lw/6ka/aibox_experiment_the_acausal_trade_argument/
http://lesswrong.com/lw/ab3/superintelligent_agi_in_a_box_a_question/
http://michaelgr.com/2008/10/08/my-theory-on-the-ai-box-experiment/

and of course, http://lesswrong.com/lw/gej/i_attempted_the_ai_box_experiment_and_lost/

Comment by Tuxedage on I attempted the AI Box Experiment again! (And won - Twice!) · 2013-09-05T21:54:29.586Z · LW · GW

Kihihihihihihihihihihihihihihi!

A witch let the AI out of the box!

Comment by Tuxedage on I attempted the AI Box Experiment again! (And won - Twice!) · 2013-09-05T21:11:09.329Z · LW · GW

The problem is that both EY and I suspect that if the logs were actually released, or any significant details given about the exact methods of persuasion used, people could easily point to those arguments and say, "That definitely wouldn't have worked on me!" -- since it's really easy to feel that way when you're not the subject being manipulated.

From EY's rules:

If Gatekeeper lets the AI out, naysayers can't say "Oh, I wouldn't have been convinced by that." As long as they don't know what happened to the Gatekeeper, they can't argue themselves into believing it wouldn't happen to them.

Comment by Tuxedage on I attempted the AI Box Experiment again! (And won - Twice!) · 2013-09-05T18:14:09.409Z · LW · GW

There are quite a number of them. Here's an example that immediately comes to mind: http://lesswrong.com/lw/9ld/ai_box_log/, although I think I've seen at least 4-5 public logs that I can't immediately source right now.

Unfortunately, all these logs end up with victory for the Gatekeeper, so they aren't particularly interesting.

Comment by Tuxedage on I attempted the AI Box Experiment again! (And won - Twice!) · 2013-09-05T16:36:08.006Z · LW · GW

Sorry, declined!

Comment by Tuxedage on I attempted the AI Box Experiment again! (And won - Twice!) · 2013-09-05T16:14:02.053Z · LW · GW

Sup Alexei.

I'm going to have to think really hard on this one. On one hand, damn. That amount of money is really tempting. On the other hand, I kind of know you personally, and I have an automatic flinch reaction to playing anyone I know.

Can you clarify the stakes involved? When you say you'll "accept your $150 fee", do you mean this money goes to me personally, or to a charity such as MIRI?

Also, I'm not sure "people just keep letting the AI out" is an accurate description. As far as I know, the only AIs who have ever won are Eliezer and myself, out of the many, many AI box experiments that have occurred so far -- so the AI winning is definitely the exception rather than the norm. (If anyone can help prove this statement wrong, please do so!)

Edit: The only other AI victory.

Updates: http://lesswrong.com/r/discussion/lw/iqk/i_played_the_ai_box_experiment_again_and_lost/

Comment by Tuxedage on I attempted the AI Box Experiment again! (And won - Twice!) · 2013-09-05T05:57:54.784Z · LW · GW

Thanks. I'm not currently in a position where that would be available/useful, but once I get there, I will.

Comment by Tuxedage on I attempted the AI Box Experiment again! (And won - Twice!) · 2013-09-05T05:35:56.341Z · LW · GW

In this particular case I could, but for all other cases, I would estimate a (very slightly) lower chance of winning. My ruleset was designed to be marginally more advantageous to the AI, by removing the worst possible Gatekeeper techniques.

Comment by Tuxedage on [LINK] SMBC on human and alien values · 2013-05-29T16:25:27.147Z · LW · GW

This seems to be an argument against hedonistic utilitarianism, but not utilitarianism in general.

Comment by Tuxedage on Who thinks quantum computing will be necessary for AI? · 2013-05-29T04:13:38.178Z · LW · GW

At the very least, I'm relatively certain that quantum computing will be necessary for emulations. It's difficult to say for AI, because we have no idea what its computational load would be like, considering we still have very little information on how to create intelligence from scratch.

Comment by Tuxedage on Is there any way to avoid Post Narcissism? [with Video link] · 2013-05-29T04:12:19.199Z · LW · GW

Have you tried just forcing yourself not to read your own posts? Or is that something you can't help?

Comment by Tuxedage on Unlimited Pomodoro Works: My Scheduling System · 2013-05-20T00:01:31.017Z · LW · GW

I'm actually incredibly amused at how popular FSN is on LessWrong. I didn't think so many people would get the reference.