Posts

Future of Humanity? 2011-05-24T21:46:53.234Z

Comments

Comment by RickJS on Paul's research agenda FAQ · 2023-03-27T03:20:25.049Z · LW · GW

That's a terrible focus on punishment. Read "Don't Shoot the Dog" by Karen Pryor and learn about behavior shaping through positive rewards.

Comment by RickJS on Attempted Gears Analysis of AGI Intervention Discussion With Eliezer · 2021-11-21T03:49:13.983Z · LW · GW

I agree that "think for yourself" is important. That includes updating on the words of the smart thinkers who read a lot of the relevant material. In which category I include Zvi, Eliezer, Nate Soares, Stuart Armstrong, Anders Sandberg, Stuart Russell, Rohin Shah, Paul Christiano, and on and on.

Comment by RickJS on What Would You Like To Read? A Quick Poll · 2012-06-21T01:26:17.164Z · LW · GW

I will say that .PDF format is end-user hostile.

Comment by RickJS on Purchase Fuzzies and Utilons Separately · 2010-07-14T19:07:02.482Z · LW · GW

Thanks, Eliezer!

This one was actually news to me. Separately is more efficient, eh? Hmmm... now I get to rethink my actions.

I had deliberately terminated my donations to charities that seemed closer to "rescuing lost puppies". I had also given up personal volunteering (I figured out {work - earn - donate} before I heard it here.) And now I'm really struggling with akrasia / procrastination / laziness / rebellion / escapism.

"You could, of course, reply that you don't trust selfish acts that are supposed to be other-benefiting as an "ulterior motive" ". That's a poisonous meme that runs in my brain. But I consciously declare that to be nonsense. I don't ever want to discuss "pure altruism" ever again! I applaud ulterior motives, "Just so long as people get helped." If you can figure out your ulterior motives, use them! Put them in harness. You might as well, they aren't going away.

Comment by RickJS on So You Think You're a Bayesian? The Natural Mode of Probabilistic Reasoning · 2010-07-14T18:23:56.293Z · LW · GW

Thanks, Matt!

That's a nice educational post.

I want to pick a nit, not with you, but with Gigerenzer and " ... the conjunction fallacy can be mitigated by changing the wording of the question ... " Unfortunately, in real life, the problems come at you the way they do, and you need to learn to deal with it.

I say that rational thinking looks like this: pencil applied to paper. Or a spreadsheet or other decision support program in use. We can't do this stuff in our heads. At least I can't. Evolution didn't deliver arithmetic, much less rationality. We teach arithmetic to kids, slowly and painstakingly. We had better start teaching them rationality. Slowly and painstakingly, not as a one-hour afterthought.

And, since I have my spreadsheet program open, I will indeed convert probabilities into frequencies and look at the world both ways, so my automatic processors can participate. But, I only trust the answers on the screen. My brain lies to me too often.
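That probability-to-frequency conversion is easy to sketch. The numbers below (a made-up diagnostic-test problem) are purely illustrative, not from Gigerenzer's actual experiments:

```python
# Gigerenzer-style natural frequencies: restate probabilities as counts
# out of an imagined population. All numbers are invented.
population = 1000          # imagine 1000 comparable cases
p_disease = 0.01           # 1% base rate
p_pos_if_sick = 0.9        # test sensitivity
p_pos_if_healthy = 0.05    # false-positive rate

sick = population * p_disease                 # 10 people
healthy = population - sick                   # 990 people
true_pos = sick * p_pos_if_sick               # 9 people
false_pos = healthy * p_pos_if_healthy        # 49.5 people

# Of everyone who tests positive, what fraction is actually sick?
p_sick_given_pos = true_pos / (true_pos + false_pos)
print(round(p_sick_given_pos, 3))  # 0.154
```

Stated as counts ("of 1000 people, 10 are sick; 9 of them test positive, along with about 50 healthy people"), the answer is much harder to get wrong than the same calculation stated as conditional probabilities.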

Once again, thanks Matt. Well done!

Comment by RickJS on Money: The Unit of Caring · 2010-07-14T17:50:21.119Z · LW · GW

Thanks, Eliezer!

That's good stuff. I really relate to " ... the poisonous meme saying that someone who gives mere money must not care enough to get personally involved." That one runs on automatic in my head. It's just one of many ways my brain lies to me.

“Every time I spend money I feel like I'm losing hit points.” Now, I don’t know your personal situation, but I can certainly relate. My mother is a child of the Great Depression and lived her life out of a fear of poverty. She taught me to worship Bargain and Sale and to abhor “unnecessary” spending.

I suspect that most people make it worse by not saving enough. I suspect most people have only a few months' salary in savings. But in many highly-skilled (specialized) professions, it can take years to find your next career job.

Anyway, I made it a life practice, starting in college, to make saving a high priority, and then don’t look at my net wealth often. That was so I didn’t get stress-related diseases from worrying myself sick over money. Once I got “rich” (I retired at age 51), I got a financial planner, set up lots of disparate investments, and I look at my net worth only once a year.

Now it’s a different world, now that I have discovered existential risks. Now I fight with my financial planner to raise my outflow rate, and that goes to the Future of Humanity Institute, Dr. Martin Hellman (see nuclearrisk.org), and mostly to The Institute Which Must Not Be Named. My goal used to be to outlive my money. Now it is Saving Humanity from Homo Sapiens™.

I say that not to brag, but to invite you each to take on an extraordinary mission for your life. This optimal philanthropy thread gives a lot of the practical steps. In the Landmark Education Curriculum for Living™ you will create yourself as an extraordinary person, living your extraordinary commitment. Now that’s cool!

Comment by RickJS on Rationality: Common Interest of Many Causes · 2010-07-08T00:35:15.030Z · LW · GW

Thanks, Eliezer!

As one of your supporters, I have been sometimes concerned that you are doing blog posts instead of working out the Friendly AI theory. Much more concerned than I show. I do try to hold it down to an occasional straight question, and hold myself back from telling you what to do. The hypothesis that I know better than you is at least -50dB.

This post is yet another glimpse into the Grand Strategy behind the strategy, and helps me dispel the fear from my less-than-rational mind.

I find it unsettling that " ... after years of bogging down I threw up my hands and explicitly recursed on the job of creating rationalists."

You learned that, “The human brain can't grasp large stakes and people are not anything remotely like expected utility maximizers, and we are generally altruistic akrasics.” Evolution didn’t deliver rationality any more than it delivered arithmetic. They have to be taught and executed as procedures. They aren’t natural. And I wonder if they can be impressed into System 1 through practice, to become semi-automatic. Right now, my rational side isn’t being successful at getting me to put in 8-hour work days to save humanity.

You learned that, "Dollars come out of different mental accounts, cost different amounts of willpower (the true limiting resource) under different circumstances ... " That makes some of MY screwy behavior start to make sense! It's much more explanatory than, "I'm cheap!" or lazy, or insensitive, or rebellious, or contrary. That looks to me like a major, practical breakthrough. I will take that to my coach and my therapist, we will use it if we can.

I don't think my psychologist ever said it. I doubt it is taught in undergraduate Psychology classes. Am I just out of touch? Has this principle been put into school curricula? That you had to learn it the hard way, that it isn't just common knowledge about people, " ... paints a disturbing picture."

You’ve done it again. In a little over one thousand words, you have given me a conceptual tool that makes the world (and myself) more intelligible and perhaps a little more manageable. Perhaps even a LOT more manageable. We shall see, in real practice.

I would appreciate any links to information on the mental accounts and amounts of willpower.

Thank you, Eliezer.

--RickJS

Comment by RickJS on What is Evidence? · 2010-05-24T02:29:46.799Z · LW · GW

“If you don't believe that the outputs of your thought processes are entangled with reality, why do you believe the outputs of your thought processes? ”

I don’t. Well not like Believe. Some few of them I will give 40 or even 60 deciBels.

But I’m clear that my brain lies to me. Even my visual processor lies. (Have you ever been looking for your keys, looked right at them, and gone on looking?)

I hold my beliefs loosely. I’m coachable. Maybe even gullible. You can get me to believe some untruth, but I’ll let go of that easily when evidence appears.

Comment by RickJS on Why Truth? · 2010-05-24T01:24:32.374Z · LW · GW

Thanks, Eliezer!

“Are there motives for seeking truth besides curiosity and pragmatism?”

I can think of several that have shown up in my life. I’m offering these for consideration, but not claiming these are good or bad, pure or impure etc. Some will doubtless overlap somewhat with each other and the ones stated.

  1. As a weapon. Use it to win arguments (sometimes the point of an argument is to WIN, never mind learning the truth. I've got automatic competitiveness I need to keep on a short leash). Use it to win bar room bets. Acquire knowledge about the “buttons” people have, and use it to manipulate them. Use it to thwart opposition to my plans, however sleazy. (“What are we going to do tonight, Brain?” ... )
  2. As evidence that I deserve an A in school. Even if I never have a pragmatic use for the knowledge, there is (briefly) value in demonstrably having the knowledge.
  3. As culture. I don’t think I have ever found a practical use for the facts of history ( of science, of politics, or of art ), but they participated in shaping my whole world view. Out of that, I came out of retirement and dedicated myself to saving humanity. Go figure.
  4. As a contact, as in, “I know Nick Bostrom.” (OK, that’s a bit of a stretch, but it is partly informational.)
  5. As pleasure & procreation, as in, “Cain knew his wife.” ;-)

“To make rationality into a moral duty is to give it all the dreadful degrees of freedom of an arbitrary tribal custom. People arrive at the wrong answer, and then indignantly protest that they acted with propriety, rather than learning from their mistake.” Yes. I say, “Morality is for agents that can’t figure out the probable consequences of their actions.” Which includes me, of course. However, whenever I can make a good estimate, I pretty much become a consequentialist.

Seeking knowledge has, for me, an indirect but huge value. I say: Humanity needs help to survive this century, needs a LOT of help. I think Friendly AI is our best shot at getting it. And we’re missing pieces of knowledge. There may be whole fields of knowledge that we’re missing and we don’t know what they are.

I would not recommend avoiding lines of research that might enable making terribly powerful weapons. We’ve already got that problem, there’s no avoiding it. But there’s no telling what investigations will produce bits of information that will trigger some human mind into a century-class breakthrough that we had no idea we needed.

Comment by RickJS on Priors as Mathematical Objects · 2010-04-26T18:06:52.832Z · LW · GW

My reason for writing this is not to correct Eliezer. Rather, I want to expand on his distinction between prior information and prior probability. Pages 87-89 of Probability Theory: the Logic of Science by E. T. Jaynes (2004 reprint with corrections, ISBN 0 521 59271 2) are dense with important definitions and principles. The quotes below are from there, unless otherwise indicated.

Jaynes writes the fundamental law of inference as

  P(H|DX) = P(H|X) P(D|HX) / P(D|X)         (4.3)

Which the reader may be more used to seeing as

 P(H|D) = P(H) P(D|H) / P(D)

Where

 H = some hypothesis to be tested
 D = the data under immediate consideration
 X = all other information known

X is the misleadingly-named ‘prior information’, which represents all the information available other than the specific data D that we are considering at the moment. “This includes, at the very least, all its past experiences, from the time it left the factory to the time it received its current problem.” --Jaynes p.87, referring to a hypothetical problem-solving robot. It seems to me that in practice, X ends up being a representation of a subset of all prior experience, attempting to discard only what is irrelevant to the problem. In real human practice, that representation may be wrong and may need to be corrected.

“ ... to our robot, there is no such thing as an ‘absolute’ probability; all probabilities are necessarily conditional on X at the least.” “Any probability P(A|X) which is conditional on X alone is called a prior probability. But we caution that ‘prior’ ... does not necessarily mean ‘earlier in time’ ... the distinction is purely a logical one; any information beyond the immediate data D of the current problem is by definition ‘prior information’.”
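A numeric sketch of (4.3) may make the notation concrete. The probabilities below are invented for illustration, and P(D|X) is expanded by the law of total probability over H and not-H:

```python
# P(H|DX) = P(H|X) P(D|HX) / P(D|X), with every term conditional on the
# background information X. All numbers here are invented.
p_H = 0.3          # P(H|X): prior probability of the hypothesis
p_D_if_H = 0.8     # P(D|HX): likelihood of the data if H is true
p_D_if_notH = 0.2  # P(D|~HX): likelihood of the data if H is false

# P(D|X) by the law of total probability
p_D = p_D_if_H * p_H + p_D_if_notH * (1 - p_H)

p_H_given_D = p_H * p_D_if_H / p_D
print(round(p_H_given_D, 3))  # 0.632
```

Note that changing the background information X changes p_H (and possibly the likelihoods too), so the posterior is conditional on X all the way through, exactly as Jaynes's notation insists.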

“Indeed, the separation of the totality of the evidence into two components called ‘data’ and ‘prior information’ is an arbitrary choice made by us, only for our convenience in organizing a chain of inferences.” Please note his use of the word ‘evidence’.

Sampling theory, which is the basis of many treatments of probability, “ ... did not need to take any particular note of the prior information X, because all probabilities were conditional on H, and so we could suppose implicitly that the general verbal prior information defining the problem was included in H. This is the habit of notation that we have slipped into, which has obscured the unified nature of all inference.”

“From the start, it has seemed clear how one determines numerical values of sampling probabilities¹ [e.g. P(D|H) ], but not what determines prior probabilities [AKA ‘priors’, e.g. P(H|X)]. In the present work we shall see that this is only an artifact of the unsymmetrical way of formulating problems, which left them ill-posed. One could see clearly how to assign sampling probabilities because the hypothesis H was stated very specifically; had the prior information X been specified equally well, it would have been equally clear how to assign prior probabilities.”

Jaynes never gives up on that X notation (though the letter may differ); he never drops it for convenience.

“When we look at these problems on a sufficiently fundamental level and realize how careful one must be to specify prior information before we have a well-posed problem, it becomes clear that ... exactly the same principles are needed to assign either sampling probabilities or prior probabilities ...” That is, P(H|X) should be calculated. Keep your copy of Kendall and Stuart handy.

I think priors should not be cheaply set from an opinion, whim, or wish. “ ... it would be a big mistake to think of X as standing for some hidden major premise, or some universally valid proposition about Nature.”

The prior information has impact beyond setting prior probabilities (priors). It informs the formulation of the hypotheses, of the model, and of “alternative hypotheses” that come to mind when the data seem to be showing something really strange. For example, data that seems to strongly support psychokinesis may cause a skeptic to bring up a hypothesis of fraud, whereas a career psychic researcher may not do so. (see Jaynes pp.122-125)

I say, be alert for misinformation, biases, and wishful thinking in your X. Discard everything that is not evidence.

I’m pretty sure the free version of Probability Theory: The Logic of Science is offline. You can preview the book here: http://books.google.com/books?id=tTN4HuUNXjgC&printsec=frontcover&dq=Probability+Theory:+The+Logic+of+Science&cd=1#v=onepage&q&f=false .

Also see the Unofficial Errata and Commentary for E. T. Jaynes’s Probability Theory: The Logic of Science

SEE ALSO

FOOTNOTES

  1. There are massive compendiums of methods for sampling distributions, such as
    • Feller (An Introduction to Probability Theory and its Applications, Vol. 1, J. Wiley & Sons, New York, 3rd edn 1968; and Vol. 2, J. Wiley & Sons, New York, 2nd edn 1971) and
    • Kendall and Stuart (The Advanced Theory of Statistics: Volume 1, Distribution Theory, Macmillan, New York 1977).
      Be familiar with what is in them.

Edited 05/05/2010 to put in the actual references.

Edited 05/19/2010 to put in SEE ALSO

Comment by RickJS on Recommended Rationalist Reading · 2010-04-23T04:16:24.359Z · LW · GW

E.T. Jaynes, Probability Theory: The Logic of Science

and make sure you get the "unofficial errata"

Comment by RickJS on Let There Be Light · 2010-04-10T00:47:19.725Z · LW · GW

personality tests

Another test set is Gallup / Clifton StrengthsFinder 2.0 (http://www.strengthsfinder.com/113647/Homepage.aspx).

For me, the results were far more useful than the various "personality profiles" I have taken, sometimes at considerable cost to my employer.

"The CSF is an online measure of personal talent that identifies areas where an individual’s greatest potential for building strengths exists. ... The primary application of the CSF is as an evaluation that initiates a strengths-based development process in work and academic settings. As an omnibus assessment based on positive psychology, its main application has been in the work domain, but it has been used for understanding individuals in a variety of settings — employees, executive teams, students, families, and personal development. ... Given that CSF feedback is provided to foster intrapersonal development, comparisons across profiles of individuals are discouraged."

"When educational psychologist Donald O. Clifton first designed the interviews that subsequently became the basis for the CSF, he began by asking, “What would happen if we studied what is right with people?” Thus emerged a philosophy of using talents as the basis for consistent achievement of excellence (strength). Specifically, the strengths philosophy is the assertion that individuals are able to gain far more when they expend effort to build on their greatest talents than when they spend a comparable amount of effort to remediate their weaknesses (Clifton & Harter, 2003)."

The above two paragraphs are from Gallup's research report, available at http://strengths.gallup.com/110389/Research-Behind-StrengthsFinder-20.aspx . (Suggestion: download the file and open it in Adobe Reader. I've had trouble reading it inside Firefox.)

There is a small financial cost for the test: buy the book ($13 at Amazon http://www.amazon.com/dp/159562015X/ref=nosim/?tag=thegalluporganiz) to get access to the test and support tools.

My strengths are: Intellection, Analytical, Input, Restorative, Learner

Comment by RickJS on Ingredients of Timeless Decision Theory · 2009-09-25T03:37:46.572Z · LW · GW

Yes, I read about " ... disappears in a puff of smoke." I wasn't coming back for a measly $1K, I was coming back for another million! I'll see if they'll let me play again. Omega already KNOWS I'm greedy, this won't come as a shock. He'll probably have told his team what to say when I try it.

" ... and come back for more." was meant to be funny.

Anyway, this still doesn't answer my questions about "Omega has been correct on each of 100 observed occasions so far - everyone who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars."

Someone please answer my questions! Thanks!

Comment by RickJS on Ingredients of Timeless Decision Theory · 2009-09-24T17:10:02.002Z · LW · GW

Well, I mulled that over for a while, and I can't see any way that contributes to answering my questions.

As to " ... what does your choice effect and when?", I suppose there are common causes starting before Omega loaded the boxes, that affect both Omega's choices and mine. For example, the machinery of my brain. No backwards-in-time is required.

Comment by RickJS on Ingredients of Timeless Decision Theory · 2009-09-22T00:28:53.472Z · LW · GW

In Eliezer's article on Newcomb's problem, he says, "Omega has been correct on each of 100 observed occasions so far - everyone who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars. " Such evidence from previous players fails to appear in some problem descriptions, including Wikipedia's.

For me this is a "no-brainer". Take box B, deposit it, and come back for more. That's what the physical evidence says. Any philosopher who says "Taking BOTH boxes is the rational action," occurs to me as an absolute fool in the face of the evidence. (But I've never understood non-mathematical philosophy anyway, so I may be a poor judge.)
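The evidence-based reasoning can be made explicit as an expected-value calculation. The accuracy figure below is my assumption; the article only reports 100 observed successes out of 100:

```python
# Expected dollar value of one-boxing vs. two-boxing, treating Omega's
# track record as evidence of predictive accuracy p. The value p = 0.99
# is an assumed estimate, not something given in the problem.
p = 0.99  # probability Omega predicted your actual choice correctly

# One-box: $1,000,000 if predicted correctly, $0 if not.
ev_one_box = p * 1_000_000
# Two-box: $1,000 if predicted correctly (B empty), $1,001,000 if not.
ev_two_box = p * 1_000 + (1 - p) * 1_001_000

print(round(ev_one_box))  # 990000
print(round(ev_two_box))  # 11000
```

On these numbers one-boxing wins by a huge margin; it keeps winning until you believe Omega's accuracy is barely better than chance (the break-even point is p ≈ 0.5005).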

Clarifying (NOT rhetorical) questions:

Have I just cheated, so that "it's not the Newcomb Problem anymore?"

When you fellows say a certain decision theory "two-boxes", are those theory-calculations including the previous play evidence or not?

Thanks for your time and attention.

Comment by RickJS on Issues, Bugs, and Requested Features · 2009-09-21T00:03:49.363Z · LW · GW

LessWrong.com sends the user's password in the clear (as reported by ZoneAlarm Extreme Security 8).

Please consider warning people that this is so.

Comment by RickJS on Why Our Kind Can't Cooperate · 2009-09-19T23:41:34.403Z · LW · GW

Oh. My mistake. When you wrote, "Plus wishing for all people to be under the rule of a god-like totalitarian sounds to me like the best way to destroy humanity.", I read:

  • [Totalitarian rule... ] ... [is] ... the best way to destroy humanity, (as in cause and effect.)
  • OR maybe you meant: wishing ... [is] ... the best way to destroy humanity

It just never occurred to me you meant, "a god-like totalitarian pretty much comes out where extinction does in my utility function".

Are you willing to consider that totalitarian rule by a machine might be a whole new thing, and quite unlike totalitarian rule by people?

Comment by RickJS on Why Our Kind Can't Cooperate · 2009-09-19T23:01:55.152Z · LW · GW

OK.

Actually, I'm going to restrain myself to just clarifying questions while I try to learn the assumed, shared, no-need-to-mention-it body of knowledge you fellows share.

Thanks.

Comment by RickJS on Dissolving the Question · 2009-09-12T03:47:36.312Z · LW · GW

HOMEWORK REPORT

With some trepidation! I'm intensely aware I don't know enough.

"Why do I believe I have free will? It's the simplest explanation!" (Nothing in neurobiology is simple. I replace Occam's Razor with a metaphysical growth restriction: Root causes should not be increased without dire necessity).

OK, that was flip. To be more serious:

Considering just one side of the debate, I ask: "What cognitive architecture would give me an experience of uncaused, doing-whatever-I-want, free-as-a-bird Capricious Action that is so strong that I just can't experience (be present to) being a fairly deterministic machine?"

Cutting it down to a bare minimum: I imagine that I have a Decision Module (DM) that receives input from sensory-processing modules and suggested-action modules at its "boundary", so those inputs are distinguishable from the neuron-firings inside the boundary: the ones that make up the DM itself. IMO, there is no way for those internal neuron firings to be presented to the input ports. I guess that there is no provision for the DM to sense anything about its own machinery.

By dubious analogy, a Turing machine looks at its own tapes; it doesn't look at the action table that determines its next action, nor can it modify that table.

To a first approximation, no matter what notion of cause and effect I get, I just can't see any cause for my own decisions. Even if somebody asks, "Why did you stay and fight?", I'm just stuck with "It seemed like a good idea at the time!"

And these days, it seems to me that culture, the environment a child grows up within, is just full of the accouterments of free will: make the right choice, reward & punishment, shame, blame, accountability, "Why did you write on the wall? How could you be so STUPID!!?!!", "God won't tempt you beyond your ability to resist." etc.

Being a machine, I'm not well equipped to overcome all that on the strength of mere evidence and reason.

Now I'll start reading The Solution, and see if I was in the right ball park, or even the right continent.

Thanks for listening.

Comment by RickJS on Righting a Wrong Question · 2009-09-12T02:36:37.189Z · LW · GW

META: thread parser failed?

It sounds like these posts should have been a sub-thread instead of all being attached to the original article?:

09 March 2008 11:05:11PM
09 March 2008 11:33:14PM
10 March 2008 01:14:45AM

Also, see the mitchell porter2 - Z. M. Davis - Frank Hirsch - James Blair - Unknown discussion below.

Comment by RickJS on Why Our Kind Can't Cooperate · 2009-09-11T19:11:08.977Z · LW · GW

Vladimir_Nesov wrote on 11 September 2009 08:34:32AM:

This only makes it worse, because you can't excuse a signal.

This only makes what worse? Does it make me sound more fanatical?

Please say more about "you can't excuse a signal". Did you mean I can't reverse the first impression the signal inspired in somebody's mind? Or something else?

Also: just because you believe you are not fanatical, doesn't mean you are not. People can be caught in affective death spirals even around correct beliefs.

OK I'll start with a prior = 10% that I am fanatical and / or caught in an affective death spiral.

What do you recommend I do about my preachy style?

I appreciate your writings on LessWrong. I'm learning a lot.

Thank you for your time and attention.

With respect and high regard,
Rick Schwall, Ph.D.
Saving Humanity from Homo Sapiens (seizing responsibility, even if I NEVER get on the field)

Comment by RickJS on Why Our Kind Can't Cooperate · 2009-09-11T18:32:45.957Z · LW · GW

Jack wrote on 09 September 2009 05:54:25PM:

Plus wishing for all people to be under the rule of a god-like totalitarian sounds to me like the best way to destroy humanity.

I don't wish for it. That part was inside parentheses with a question mark. I merely suspect it MAY be needed.

Please explain to me how the destruction follows from the rule of a god-like totalitarian.

Thank you for your time and attention.

With respect and high regard,
Rick Schwall, Ph.D.
Saving Humanity from Homo Sapiens (seizing responsibility, even if I NEVER get on the field)

Comment by RickJS on Why Our Kind Can't Cooperate · 2009-09-11T18:06:05.522Z · LW · GW

Jack wrote on 09 September 2009 05:54:25PM :

I can't help but think that those activities aren't going to do much to save humanity.

I hear that. I wasn't clear. I apologise.

I DON'T KNOW what I can do to turn humanity's course. And, I decline to be one more person who uses that as an excuse to go back to the television set. Those activities are part of my search for a place where I can make a difference.

"Saving Humanity from Homo Sapiens™" is maybe acceptable for Superman.

... but not acceptable from a mere man who cares, eh?

(Oh, all right, I admit, the ™ was tongue-in-cheek!)

Skip down to END BOILERPLATE if and only if you've read version v44m

First, please read this caveat: Please do not accept anything I say as True.

Ever.

I do write a lot of propositions, without saying, "In My Opinion" before each one. It can sound preachy, like I think I've got the Absolute Truth, Without Error. I don't completely trust anything I have to say, and I suggest you don't, either.

Second, I invite you to listen (read) in an unusual way. "Consider it": think WITH this idea for a while. There will be plenty of time to refute it later. I find that, if I START with, "That's so wrong!", I really weaken my ability to "pan for the gold".

If you have a reaction (e.g. "That's WRONG!"), please gently save it aside for later. For just a while, please try on the concept, test drive it, use the idea in your life. Perhaps you'll see something even beyond what I offered.

There will be plenty of time to criticize, attack, and destroy it AFTER you've "panned for the gold". You won't be missing an opportunity.

Third, I want you to "get" what I offered. When you "get it", you have it. You can pick it up and use it, and you can put it down. You don't need to believe it or understand it to do that. Anything you BELIEVE is "glued to your hand"; you can't put it down.

-=-= END BOILERPLATE version 44m

I think we may have different connotations. I'm going to reluctantly use an analogy, but it's just a temporary crutch. Please drop it as soon as you get how I'm using the word 'saving'.

If I said, "I'm playing football," I wouldn't be implying that I'm a one-man team, or that I'm the star, or that the team always loses when I'm not there. Rigorously, it only means that I'm playing football.

However, it is possible to play football for the camaraderie, or the exercise, or to look good, or to avoid losing. A person can play football to win. Regardless of the position played. It's about attitude, commitment, and responsibility SEIZED rather than reluctantly accepted.

I DECLARE that I am saving humanity from Homo Sapiens. That's a declaration, a promise, not a description subject to True / probability / False. I'm playing to win.

Maybe I'll never be allowed to get on the field. I remember the movie Rudy, about Dan Ruettiger. THAT is what it is to be playing football in the face of being a little guy. That points toward what it is to be Saving Humanity from Homo Sapiens in the face of no evidence and no agreement.

You could give me a low probability of ever making a difference. But before you do, ask yourself, "What will this cause?"

It occurs to me that this little sub-thread beginning with "Mostly, I study." illustrates what Eliezer was pointing out in "Why Our Kind Can't Cooperate."

  • "Some things are worth dying for. Yes, really! And if we can't get comfortable with admitting it and hearing others say it, then we're going to have trouble caring enough - as well as coordinating enough - to put some effort into group projects. You've got to teach both sides of it, "That which can be destroyed by the truth should be," and "That which the truth nourishes should thrive." "

You, too, can be Saving Humanity from Homo Sapiens. You start by saying so.

The clock is ticking.

With respect and high regard,
Rick Schwall, Ph.D.
Saving Humanity from Homo Sapiens (seizing responsibility, even if I NEVER get on the field)

Comment by RickJS on Why Our Kind Can't Cooperate · 2009-09-11T00:01:38.393Z · LW · GW

I've been told that my writing sounds preachy or even religious-fanatical. I do write a lot of propositions without saying "In my opinion" in front of each one. I do have a standard boilerplate that I am to put at the beginning of each missive:

First, please read this caveat: Please do not accept anything I say as True.

Ever.

I do write a lot of propositions, without saying, "In My Opinion" before each one. It can sound preachy, like I think I've got the Absolute Truth, Without Error. I don't completely trust anything I have to say, and I suggest you don't, either.

Second, I invite you to listen (read) in an unusual way. "Consider it": think WITH this idea for a while. There will be plenty of time to refute it later. I find that, if I START with, "That's so wrong!", I really weaken my ability to "pan for the gold".

If you have a reaction (e.g. "That's WRONG!"), please gently save it aside for later. For just a while, please try on the concept, test drive it, use the idea in your life. Perhaps you'll see something even beyond what I offered.

There will be plenty of time to criticize, attack, and destroy it AFTER you've "panned for the gold". You won't be missing an opportunity.

Third, I want you to "get" what I offered. When you "get it", you have it. You can pick it up and use it, and you can put it down. You don't need to believe it or understand it to do that. Anything you BELIEVE is "glued to your hand"; you can't put it down.

-=-= END Boilerplate

In that post, I got lazy and just threw in the tag line at the end. My mistake. I apologize. I won't do that again.

With respect and high regard,
Rick Schwall
Saving Humanity from Homo Sapiens (playing the game to win, but not claiming I am the star of the team)

Comment by RickJS on Ingredients of Timeless Decision Theory · 2009-09-10T23:23:12.742Z · LW · GW

Ah. Thanks! I think I get that.

But maybe I just think I do. I thought I understood that narrow part of Wei Dai's post on a problem that maybe defeats TDT. I had no idea that compassion had already been considered and compensated out of consideration. And that's such common shared knowledge here in the LessWrong community that it need not be mentioned.

I have a lot to learn. I now see I was very arrogant to think I could contribute here. I should read the archives & wiki before I post. I apologize.

<<Begins to compute an estimated time to de-lurk. They collectively write several times faster than I can read, even if I don't slow down to mull it over. Hmmm... >>

Comment by RickJS on Open Thread: May 2009 · 2009-09-10T23:08:08.259Z · LW · GW

Thanks for the clarification.

I guess I won't be posting articles to LessWrong, as I have no clue what I'm doing wrong such that I get more downvotes than upvotes.

Comment by RickJS on Open Thread: May 2009 · 2009-09-10T03:56:26.421Z · LW · GW

I would like some clarification on "LW doesn't register negative karma right now." Does that mean

  • my negative points are GONE, or
  • they are hiding and still need to be paid off before I can get a positive score?

Thanks

Comment by RickJS on Ingredients of Timeless Decision Theory · 2009-09-10T02:05:11.790Z · LW · GW

Inorite? What is that?

I suspect I'm not smart enough to play on this site. I'm quite unsure I can even parse your sentence correctly, and I can't imagine a reason to adjust the external payoff matrices (they were given by Wei Dai; that is the original problem I'm discussing) so the internal payoff matrices match something. I'm baffled.

Comment by RickJS on Ingredients of Timeless Decision Theory · 2009-09-09T17:23:42.842Z · LW · GW

Eliezer_Yudkowsky wrote on 19 August 2009 03:24:46PM:

Formal cooperation in the one-shot PD, now that should be interesting.

Tversky demonstrated this: one experiment based on the simple dilemma found that approximately 40% of participants played "cooperate" (i.e., stayed silent). Hmmm...

Compassion (in a certain sense) may be part of your answer.

If I (as Prisoner A) have a term in my utility function such that an injury to Prisoner B is an injury to me (discounted), it can make 'Cooperate' much more attractive.

I might have enough compassion to be willing to do 6 months in jail if it will spare Prisoner B a 2-year prison term (or more).

For example, given the external payoff matrix given by Wei Dai (http://lesswrong.com/lw/15z/ingredients_of_timeless_decision_theory/11w9) (19 August 2009 07:08:23AM):

 * Cooperate    5/5    0/6
 * Defect       6/0    1/1

My INTERNAL payoff matrix becomes:

 * Cooperate    6.25/5    1.5/6
 * Defect       6/0       1.25/1

And 'Cooperate' now strictly dominates using elementary game theory.
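The transformation above can be sketched in a few lines of Python. The compassion weight κ = 0.25 is my inference (it is the value that reproduces the internal matrix given above; the comment itself never states it explicitly):

```python
# Transform an external Prisoner's Dilemma payoff matrix into an
# "internal" matrix by adding kappa times the partner's payoff.
# kappa = 0.25 is an assumed value that reproduces the numbers above.

external = {                       # (my payoff, partner's payoff)
    ("C", "C"): (5, 5),
    ("C", "D"): (0, 6),
    ("D", "C"): (6, 0),
    ("D", "D"): (1, 1),
}

def internal(matrix, kappa):
    """My internal payoff: own payoff plus kappa times the partner's."""
    return {moves: (mine + kappa * theirs, theirs)
            for moves, (mine, theirs) in matrix.items()}

def cooperate_strictly_dominates(matrix):
    """True if 'C' beats 'D' against every possible partner move."""
    return all(matrix[("C", b)][0] > matrix[("D", b)][0] for b in ("C", "D"))

ipm = internal(external, kappa=0.25)
print(ipm[("C", "C")][0])                   # 6.25
print(cooperate_strictly_dominates(external))  # False
print(cooperate_strictly_dominates(ipm))       # True
```

In the external matrix, defection dominates; after folding in the partner's payoff at κ = 0.25, cooperation dominates instead, matching the claim above.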

Thank you for your time and consideration.
RickJS

Comment by RickJS on Why Our Kind Can't Cooperate · 2009-09-09T16:56:38.618Z · LW · GW

Mostly, I study. I also go to a few conferences (I'll be at the Singularity Summit) and listen. I even occasionally speak on key issues (IMO), such as (please try thinking WITH these before attacking them. Try agreeing for at least a while.):

  • "There is no safety in assuring we have a power switch on a super-intelligence. That would be power at a whole new level. That's pretty much Absolute Power and would bring out the innate corruption / corruptibility / self-interest in just about anybody."
  • "We need Somebody to take the dangerous toys (arsenals) away."
  • "Just what is Humanity up to that requires 6 Billion individuals?"

All of that is IN MY OPINION. <-- OK, the comments to this post showed me the error of my ways. I'm leaving this here because comments refer to it.

Edited 07/14/2010 because I've learned since 2009-09 that I said a lot of nonsense.

Comment by RickJS on Why Our Kind Can't Cooperate · 2009-09-08T18:18:43.125Z · LW · GW

BRAVO, Eliezer! Huuzah! It's about time!

I don't know if you have succeeded in becoming a full rationalist, but I know I haven't! I keep being surprised / appalled / amused at my own behavior. Intelligence is way overrated! Rationalism is my goal, but I'm built on evolved wet ware that is often in control. Sometimes my conscious, chooses-to-be-rationalist mind is found to be in the kiddy seat with the toy steering wheel.

I haven't been publicly talking about my contributions to the Singularity Institute and others fighting to save us from ourselves. Part of that originates in my father's attitude that it is improper to brag.

I now publicly announce that I have donated at least $11,000 to the Singularity Institute and its projects over the last year. I spend ~25 hours per week on saving humanity from Homo Sapiens.

I say that to invite others to JOIN IN. Give humanity a BIG term in your utility function. Extinction is Forever. Extinction is for ... us?

Thank you, Eliezer! Once again, you've shown me a blind spot, a bias, an area where I can now be less wrong than I was.

With respect and high regard,
Rick Schwall, Ph.D.
Saving Humanity from Homo Sapiens™ :-|

Comment by RickJS on Ingredients of Timeless Decision Theory · 2009-08-29T03:04:59.439Z · LW · GW

First of all, congratulations, Eliezer! That's great work. When I read your 3-line description, I thought it would never be computable. I'm glad to see you can actually test it.

Eliezer_Yudkowsky wrote on 19 August 2009 03:05:15PM

... Moving second is a disadvantage (at least it seems to always work out that way, counterexamples requested if you can find them)

Rock-paper-scissors ?
Negotiating to buy a car?

I would like to begin by saying that I don't believe my own statements are True, and I suggest you don't either. I do request that you try thinking WITH them before attacking them. It's really hard to think with an idea AFTER you've attacked it. I've been told my writing sounds preachy or even fanatical. I don't say "In My Opinion" enough. Please imagine "IMO" in front of every one of my statements. Thanks!

Having more information (not incorrect "information") on the opponent's decisions is beneficial.

Let's distinguish Secret Commit & Simultaneous Effect (SCSE) from Commit First & Simultaneous Effect (CFSE) and from Act & Effect First (AEF). That's just a few categories from a coarse categorization of board war games.

The classic gunfight at high noon is AEF (to a first approximation, not counting watching his face & guessing when his reaction time will be lengthened). The fighter who draws first has a serious advantage, the fighter who hits first has a tremendous advantage, but not certain victory. (Hollywood notwithstanding, people sometimes keep fighting after taking handgun hits, even a dozen of them.) I contend that all AEFs give advantage to the first actor. Chess is AEF.

My understanding of the Prisoner's Dilemma is that it is SCSE as presented. On this thread, it seems to have mutated into a CFSE (otherwise, there just isn't any "first", in the ordinary, inside-the-Box-Universe, timeful sense). If Prisoner A has managed to get information on Prisoner B's commitment before he commits, this has to be useful. Even if PA is a near-Omega, it can be a reality check on his Visualization of the Cosmic All. In realistic July 2009 circumstances, it identifies PB as one of the 40% of humans who choose 'cooperate' in one-shot PD. PA now has a choice whether to be an economist or a friend.

And now we get down to something fundamental. Some humans are better people than the economic definition of rationality, which " ... assume that each player cares only about minimizing his or her own time in jail". " ... cooperating is strictly dominated) by defecting ... " even with leaked information.

"I don't care what happens to my partner in crime. I don't and I won't. You can't make me care. On the advice of my economist... " That gets both prisoners a 5-year sentence when they could have had 6 months.

That is NOT wisdom! That will make us extinct. (In My Opinion)

Now try on "an injury to one is an injury to all". Or maybe "an injury to one is an (discounted) injury to ME". We just might be able to see that the big nuclear arsenals are a BAD IDEA!

Taking that on, the payoff matrix offered by Wei Dai's Omega (19 August 2009 07:08:23AM)

* cooperate    5/5    0/6
* defect       6/0    1/1

is now transformed into PA's Internal Payoff Matrix (IPM)

* cooperate    5+5κ/5    0+6κ/6
* defect       6+0κ/0    1+1κ/1

In other words, his utility function has a term for the freedom of Prisoner B. (Economists be damned! Some of us do, sometimes.)

"I'll set κ=0.3 ," Says PA (well, he is a thief). Now PA's IPM is:

* cooperate    6.5/5    1.8/6
* defect       6/0      1.3/1

Lo and behold! 'cooperate' now strictly dominates!

When over 6 billion people are affected, it doesn't take much of a κ to swing my decisions around. If I'm not working to save humanity, I must have a very low κ for each distant person unknown to me.
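As a sketch of just how little κ it takes with this particular matrix: the dominance conditions can be solved directly, and any κ above 1/5 flips the game. The threshold of 0.2 is my own arithmetic from the matrices above, not a figure stated in the text:

```python
# For the external matrix above (C/C: 5, C/D: 0, D/C: 6, D/D: 1),
# find when 'cooperate' strictly dominates in the internal matrix.
# Internal payoff = own payoff + kappa * partner's payoff.
#
# vs. a cooperating partner: 5 + 5k > 6 + 0k  =>  k > 1/5
# vs. a defecting partner:   0 + 6k > 1 + 1k  =>  k > 1/5

def dominates(kappa):
    cc, cd = 5 + 5 * kappa, 0 + 6 * kappa   # internal payoffs for 'cooperate'
    dc, dd = 6 + 0 * kappa, 1 + 1 * kappa   # internal payoffs for 'defect'
    return cc > dc and cd > dd

print(dominates(0.19))  # False
print(dominates(0.21))  # True
print(dominates(0.3))   # True  (the kappa chosen in the text)
```

So κ = 0.3 is comfortably past the threshold; even a modest weight on the other prisoner's freedom is enough to make 'cooperate' the strictly dominant move here.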

People say, "Human life is precious!" Show it to me in results. Show it to me in how people budget their time and money. THAT is why Friendly AI is our only hope. We will 'defect' our way into thwarting any plan that requires a lot of people to change their beliefs or actions. That sub-microscopic κ for unknown strangers is evolved-in, it's not going away. We need a program that can be carried out by a tiny number of people.

.

.

.

IMO.

---=

Maybe I missed the point. Maybe the whole point of TDT is to derive some sort of reduced-selfishness decision norm without an ad-hoc utility function adjustment (is that what "rained down from heaven" means?). I can derive the κ needed in order to save humanity, if there were a way to propagate it through the population. I cannot derive The One True κ from absolute principles, nor have I shown a derivation of "we should save humanity". I certainly fell short of " ... looking at which agents walk away with huge heaps of money and then working out how to do it systematically ... ". I would RATHER look at which agents get their species through their singularity alive. Then, and only then, can we look at something grander than survival. I don't grok in fullness "reflective consistency", but from extinction we won't be doing a lot of reflecting on what went wrong.

IMO.

Now, back to one-shot PD and "going first". For some values of κ and some external payoff matrices (not this one), the resulting IPM is not strictly dominated, and having knowledge of PB's commitment actually determines whether 'cooperate' or 'defect' produces a better world in PA's internal not-quite-so-selfish world-view. Is that a disadvantage? (That's a serious, non-rhetorical question. I'm a neophyte and I may not see some things in the depths where Eliezer & Wei think.)

Now let's look at that game of chicken. Was "throw out the steering wheel" in the definition of the thought experiment? If not, that player just changed the universe-under-consideration, which is a fairly impressive effect in an AEF, not a CFSE.

If re-engineering was included, then Driver A may complete his wheel-throwing (while in motion!) only to look up and see Driver B's steering gear on a ballistic trajectory. Each will have a few moments to reflect on "always get away with it."

If Driver A successfully defenestrates first, is Driver B at a disadvantage? Among humans, the game may be determined more by autonomic systems than by conscious computation, and B now knows that A won't be flinching away. However, B now has information and choices. One that occurs to me is to stop the car and get out. "Your move, A." A truly intelligent player (in which category I do not, alas, qualify) would think up better, or funnier, choices.

Hmmm... to even play Chicken you have to either be irrational or have a damned strange IPM. We should establish that before proceeding further.

I challenge anyone to show me a CFSE game that gives a disadvantage to the second player.

I'm not too proud to beg: I request your votes. I've got an article I'd like to post, and I need the karma.

Thanks for your time and attention.

RickJS
Saving Humanity from Homo Sapiens

08/28/2009 ~20:10 Edit: formatting ... learning formatting ... grumble ... GDSOB tab-deleter ... Fine. I'll create the HTML for tables, but this is a LOT of work for 3 simple tables ... COMMENT TOO LONG!?!? ... one last try ... now I can't quit, I'm hooked! ... NAILED that sucker! ... ~22:40 : added one more example *YAWN*

Comment by RickJS on Ingredients of Timeless Decision Theory · 2009-08-28T03:38:11.878Z · LW · GW

Wei_Dai wrote on 19 August 2009 07:08:23AM :

... Omega's AIs will reason as follows: "I have 1/2 chance of playing against a TDT, and 1/2 chance of playing against a CDT. If I play C, then my opponent will play C if it's a TDT, and D if it's a CDT ...

That seems to violate the secrecy assumptions of the Prisoner's Dilemma problem! I thought each prisoner has to commit to his action before learning what the other one did. What am I missing?

Thanks!

Comment by RickJS on Issues, Bugs, and Requested Features · 2009-04-22T18:33:38.944Z · LW · GW

About that report link (http://lesswrong.com/ ???): It doesn't say what it's going to do, what it is for (hate speech, strong language, advocating the overthrow, trolling, disagreeing with me...), nor does it give me a chance to explain.

Comment by RickJS on Issues, Bugs, and Requested Features · 2009-04-22T18:11:35.817Z · LW · GW

I don't see a way to send my new article to the mods. When I'm done editing in my drafts folder, then what?

Comment by RickJS on Issues, Bugs, and Requested Features · 2009-04-22T17:09:05.481Z · LW · GW

Terminology. Try to be consistent. "Liked" and "Vote Up": pick one and stick with it. IMHO

Comment by RickJS on Issues, Bugs, and Requested Features · 2009-04-22T16:49:26.242Z · LW · GW

How about a basic Users' Guide, and include a link to it right in the top links bar?

Comment by RickJS on Bayesians vs. Barbarians · 2009-04-22T16:30:23.969Z · LW · GW

Consider (think WITH this idea for a while. There will be plenty of time to refute it later. I find that, if I START with, "That's so wrong!", I really weaken my ability to "pan for the gold".)

Consider that you are using "we" and "self" as a pointer that jumps from one set to another moment by moment. Here is a list of some sets that may be confounded together here; see how many others you can think of.

  • These United States (see the Constitution)
  • the people residing in that set
  • citizens who vote
  • citizens with a peculiar attitude
  • the President
  • Congress
  • organizations (corporations, NGOs, political parties, movements, e-communities, etc.)
  • the wealthy and powerful
  • the particular wealthy and powerful who see an opportunity to benefit from an invasion

Multiple Edits: trying to get this site to respect line/ paragraph breaks, formatting. Does this thing have any formatting codes?