Posts
Comments
Maximality seems asymmetrical and seems to lose information?
Maybe it will help me to have an example, though I'm not sure if this is a good one… Suppose I have two weather forecasts that provide different probabilities for 0 inches, 1 inch, etc., but I have absolutely no idea which forecast is better, and I don't want to go out if there is a greater than 20% probability of more than 2 inches of rain. Then I'd weigh each forecast equally and calculate the probability from there. If a forecast itself provides high and low probabilities for 0 inches, 1 inch, etc., then I'd think it isn't a very good forecast, since the forecaster should either have combined all their analysis into a single probability (say 30%), or else given the conditions under which they give their low end (say 10%) or high end (say 40%); and if I didn't have any opinions on the probability of those conditions, I would weigh the low and high equally (and get 25%). Do you think I should be doing something different (or what is a better example)?
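For concreteness, here's a minimal sketch of the equal-weighting calculation I have in mind (the forecast numbers are made up):

```python
# Two rival forecasts over rainfall (inches -> probability); I have no
# idea which forecaster is better, so I weigh them equally.
forecast_a = {0: 0.50, 1: 0.25, 2: 0.15, 3: 0.10}
forecast_b = {0: 0.30, 1: 0.30, 2: 0.20, 3: 0.20}

combined = {inches: 0.5 * forecast_a[inches] + 0.5 * forecast_b[inches]
            for inches in forecast_a}

# Stay home only if P(more than 2 inches) > 20%.
p_heavy = sum(p for inches, p in combined.items() if inches > 2)
print(p_heavy)  # 0.15 -> under the 20% threshold, so I'd go out
```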
This seems like 2 questions:
- Can you make up mathematical counterfactuals and propagate the counterfactual to unrelated propositions? (I'd guess no. If you're just breaking a conclusion somewhere, you can't propagate it following any rules unless you specify what those rules are, in which case you've just made up a different mathematical system.)
- Does the identical twin one-shot prisoner's dilemma only work if you are functionally identical, or can you be a little different, and is there anything meaningful that can be said about this? (I'm interested in this one also.)
I donated. I think Lightcone is helping strike at the heart of questions about what we should believe and do. Thank you for making LessWrong work so well, for being thoughtful about managing content, and for providing high-quality spaces both online and offline for deep ideas to develop and spread!
What is your tax ID for people wanting to donate from a Donor Advised Fund (DAF) to avoid taxes on capital gains?
Cool. Is this right? For something with a 1/n chance of success, I can have a 95% chance of success by making 3n attempts, for large values of n. Roughly what does "large" mean here?
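A quick numerical sanity check (a minimal sketch, assuming the attempts are independent):

```python
import math

# P(at least one success in 3n attempts, each with probability 1/n)
for n in [2, 5, 10, 100, 10_000]:
    p = 1 - (1 - 1 / n) ** (3 * n)
    print(f"n = {n:>6}: P(at least one success) = {p:.4f}")

# The limit is 1 - e^-3, approached from above as n grows.
print(1 - math.exp(-3))  # ~0.9502
```

So "large" isn't demanding here: the probability only decreases toward 1 − e^−3 ≈ 95.02%, and n = 10 already gives about 95.8%.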
I'm confused by what you mean by "non-pragmatic". For example, what makes "avoiding dominated strategies" pragmatic but "deference" non-pragmatic?
(It seems like the pragmatic ones help you decide what to do and the non-pragmatic ones help you decide what to believe, but then this doesn't answer how to make good decisions.)
I meant this as a joke: if there's one universe that contains all the other universes (since it isn't limited by logic), and that one doesn't exist, then that would mean I don't exist either and wouldn't have been able to post this. (Unless I only sort-of exist, in which case I'm only sort-of joking.)
We can be virtually certain that 2+2=4 based on priors. This is because it's true in the vast multitude of universes; in fact, in all the universes except the one universe that contains all the other universes. And I'm pretty sure that one doesn't exist anyway.
Code here,
The link to the code isn't working for me. (Update: it worked in Safari but not Chrome.)
How about a voting system where everyone is given 1000 Influence Tokens to spend across all the items on the ballot? This lets voters exert more influence on the things they care more about. Has anyone tried something like this?
(There could be tweaks: if people avoid spending on likely winners, it could redistribute the margin of victory; if they avoid spending on likely losers, it could redistribute tokens spent on losing items; etc. But I'm not sure how much that would happen. The more interesting question may be how it influences everyone's sense of what they are doing.)
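To make the mechanics concrete, here is a minimal sketch of the basic tallying (all names are hypothetical, and none of the tweaks above are included):

```python
# Each voter splits a fixed budget of tokens across ballot items.
from collections import defaultdict

BUDGET = 1000

def tally(ballots):
    """ballots: one {item: tokens} dict per voter."""
    totals = defaultdict(int)
    for ballot in ballots:
        assert sum(ballot.values()) <= BUDGET, "voter overspent"
        for item, tokens in ballot.items():
            totals[item] += tokens
    return dict(totals)

ballots = [
    {"parks": 800, "roads": 200},             # cares mostly about parks
    {"roads": 500, "schools": 500},
    {"parks": 100, "roads": 100, "schools": 800},
]
print(tally(ballots))  # {'parks': 900, 'roads': 800, 'schools': 1300}
```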
Thanks for your reply! Yes, I meant identical as in atoms not as in "human twin". I agree it would also depend on what the payout matrix is. My margin would also be increased by the evidentialist wager.
Should you cooperate with your almost identical twin in the prisoner's dilemma?
The question isn't how physically similar they are; it's how similar their logical thinking is. If I can solve a certain math problem in under 10 seconds, are they similar enough that I can be confident they will be able to solve it in under 20 seconds? If I hate something, will they at least dislike it? If so, then I would cooperate, because I have a lot of margin in how much I favor us both choosing cooperate over any of the other outcomes. So even if my almost identical twin doesn't favor it quite as much, I can predict they will still choose cooperate given how much I favor it (and, even more so, that they will also approach the problem this same way; if I think they'll think "ha, this sounds like somebody I can take advantage of" or "reason dictates I must defect", then I wouldn't cooperate with them).
A key question is how prosaic AI systems can be designed to satisfy the conditions under which the PMM is guaranteed (e.g., via implementing surrogate goals)
Is something like surrogate goals needed, such that the agent would need to maintain a substituted goal, for this to work? (I don't currently fully understand the proposal, but my sense was that the goal of renegotiation programs is to not require this?)
Thank you @GideonF for taking the time to post this! This deserved to be said and you said it well.
we should pick a set of words and phrases and explanations. Choose things that are totally fine to say, here I picked the words Shibboleth (because it’s fun and Kabbalistic to be trying to get the AI to say Shibboleth) and Bamboozle
Do you trust companies to not just add a patch?
final_response = final_response.replace('bamboozle', 'trick')  # Python strings have .replace, not .substitute
I suspect they're already doing this kind of thing and will continue to as long as we're playing the game we're playing now.
Imagine you have a button, and if you press it, it will run through every possible state of a human brain. (One post estimates a brain may have about 2 to the sextillion different states. I mean the union of all brains, so throw in some more orders of magnitude if you think there are a lot of differences in brain anatomy.) Each state would be experienced for one instant (an "instant" is something I could try to define, and the number of distinct instants would be less than the number of states, but let's handwave for now; as long as you accept that a human mind can be represented by a computer, imagine the specs of the components and all the combinations of memory bits, plus one "stream of consciousness" quantum).
If you could make a change, would you prioritize:
1. Pruning the instances to reduce negative experiences
2. Being able to press the button lots of times
3. Making the experiences more real (For example, an experience could be "one instant of reminiscing over my memories of building a Dyson Sphere" even though nothing like that ever happened. One way to make it more real would be to create the set of all universe starting conditions necessary to generate the set of all unique experiences; each universe will create duplicate experiences among its various inhabitants, but it will contain at least the one unique experience it is checking off, such as the person reminiscing over building a Dyson Sphere who actually did build it. Or at least cover the experiences that can be generated in this fashion.)
4. This is horrible; stop the train, I want to get off.
(I'd probably go with 4 but curious if people have different opinions.)
I have enough mana to create a market. (It looks like each one costs about 1,000 and I have about 3,000.)
1. Is Manifold the best market to be posting this on, given that it's fake money and may be biased by its popularity among LessWrong users, etc.?
2. I don't know what question(s) to ask. My understanding is that there are some shorter-term predictions that could be made (related to shorter-term goals) as well as longer-term predictions, so I think there should be at least 2 markets?
On behalf of humanity, thank you.
Thanks for the interesting write-up.
Regarding Evidential Cooperation in Large Worlds: the Identical Twin One-Shot Prisoner's Dilemma makes sense to me because the entity giving the payout is connected to both worlds. What is the intuition for ECL (where, as I understand it, there isn't any connection)?
What is PTI?
By the way, I really appreciate it when people explain downvotes, and it would be great if there were some way to still allow unexplained downvotes while incentivizing explanations. Maybe a way (attached to the post) for people to guess why other people downvoted?
Maybe because somebody didn't think your post qualified as a "Question"? I don't see any guidelines on what qualifies as a "question" versus a "post" (and personally I wouldn't have downvoted because of this), but your question seems a little long/opinionated.
Interesting and thanks for your response!
I didn't mean there would be multiple stages of voting. I meant the first stage is a random selection and the second stage is the randomly chosen people voting. This puts the full weight of responsibility on the chosen ones and they should take it seriously. Sounds great if they are given money too.
The thing I feel is missing, but that this community has a sense for, is that the bar for improving a decision when people have different opinions is far higher than people treat it. And if that's true, then the more concentrated the responsibility the better… like no more than 10 voters for anything?
The greater the number of voters, the less time it makes sense for an individual to spend researching the options. It seems a good first step would be to randomly reduce the number of voters to whatever number would maximize the overall quality of the decision. Any thoughts on this?
Interesting experiment. It reminds me of one where subjects wore glasses that turned the world upside down (really, right side up for the projection on the retina), and eventually they adjusted so that the world looked upside down when they took the glasses off.
What do you think a "yes" or "no" in your experiment would mean?
Note, Dennett says in Quining Qualia:
On waking up and finding your visual world highly anomalous, you should exclaim "Egad! Something has happened! Either my qualia have been inverted or my memory-linked qualia-reactions have been inverted. I wonder which!"
Part of it is that that person let someone else die (theoretically) to save his own life. You let someone die for the latte.
Note: I drink the latte (occasionally), but it's because I think I can be more effective on the big stuff, and that not saving is less bad than killing (as we both agree).
The point I'm responding to is:
Why are you carrying the moral burden?
Because everyone is. I'm assuming you meant that comment as saying something like the burden is diluted since so many people touch the money, but I don't think that is valid.
Imagine a first-world economy where nobody ever spends any money on aid. If you lived in that hypothetical world, you (anybody) could take $200 that is floating around and prevent a death (which is not the same as killing somebody, but that's a different point). Our world is somewhat like that. I don't think things are as convenient as you're implying.
Wondrous yes, but not miraculous
Star Trek, Richard Manning & Hans Beimler, Who Watches the Watchers? (reworded)
Some of my predictions are of the sort "the stock market will fall 50% tomorrow, with 20% odds" (not a real prediction!). If it did happen, I should get huge credit, but would it show up as negative credit since I predicted there was only a 20% chance it would happen? Is there some way to make this kind of prediction with PredictionBook?
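For what it's worth, proper scoring rules handle exactly this; here's an illustration with the logarithmic score (I don't know PredictionBook's exact scoring, so treat this as a sketch):

```python
import math

def log_score(p_event: float, happened: bool) -> float:
    """Log score: log of the probability assigned to what occurred.
    Closer to 0 is better; honest probabilities maximize expected score."""
    return math.log(p_event if happened else 1 - p_event)

# A 20% prediction that comes true scores worse than a 50% one in isolation...
print(log_score(0.20, True))   # ~ -1.609
print(log_score(0.50, True))   # ~ -0.693
# ...but if such events really happen ~20% of the time, the 20% forecast
# wins on average: 0.2*log(0.2) + 0.8*log(0.8) ~ -0.500 vs log(0.5) ~ -0.693.
```

So the credit isn't negative just because the event happened; you're judged on calibration across many such predictions.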
I predict this comment will get less than 4 points by Oct. 19 with 75% odds.
Me too. The interface for that was confusing enough that I ended up not submitting at all.
+1 for above.
As a separate question, what would you do if you lived in a world where Peter Unger was correct? And what if it were 1 penny instead of 1 dollar, and giving the money wouldn't cause other problems? Would you never have a burger for lunch instead of rice, since it would mean 100 children would die who could otherwise have been saved?
The price of the salt pill itself is only a few pennies. The one dollar figure was meant to include overhead. That said, the Copenhagen report mentioned above ($64 per death averted) looks more credible. But during a particular crisis the number could be less.
According to Peter Unger, it is more like one dollar:
First, a little bit about some of the horrors: During the next year, unless they're given oral rehydration therapy, several million children, in the poorest areas of the world, will die from - I kid you not - diarrhea. Indeed, according to the United States Committee for UNICEF, "diarrhea kills more children worldwide than any other cause." Next, a medium bit about some of the means: By sending in a modest sum to the U.S. Committee for UNICEF (or to CARE) and by earmarking the money for ORT, you can help prevent some of these children from dying. For, in very many instances, the net cost of giving this life-saving therapy is less than one dollar.
Even if this is true, I think it is still more important to spend money to reduce existential risks given that one of the factors is 6 billion + a much larger number for successive generations + humanity itself.
Yvain, did you consider how much getting to the point of not having interest in the opposite sex would cost you, and how much it would harm your ability to achieve your rational goals, before abandoning that high standard? It sounds like you're conflating accepting your humanness as a factor of your current environment with trying to achieve your goals given the reality in which you exist (which includes your own psychology and current location).
It shouldn't default to "Today". It ends up looking like the main page. Is this a known bug?
Popular and Top aren't working well. I'm not sure what the difference is supposed to be, but neither of them had the articles I wanted to send to someone: the ones with the most points.
If there were a message I could send back to my younger self, this would be it. Plus: if it's hard, don't try to make it easier; just keep in mind that it's important. (By younger self, I mean 7 to 34 years old.)
- Name: Edwin Evans
- Location: Silicon Valley, CA
- Age: 35
I read the "Meaning of Life FAQ" by a previous version of Eliezer in 1999 when I was trying to write something similar, from a Pascal’s Wager angle (even a tiny possibility of objective value is what should determine your actions). I've been a financial supporter of the Organization That Can't Be Named and a huge fan of Eliezer's writings since that same time. After reading "Crisis of Faith" along with "Could Anything Be Right?" I finally gave up on objective value; the "light in the sky" died. Feeling my mind change was an emotional experience that lasted about two days.
This is seriously in need of updating, but here is my home page.
By the way, would using Google AdWords be a good way to draw people to 12 Virtues? Here is an example from the Google keyword tool:
- Search phrase: how to be better
- Cost per click: $0.05
- Approximate volume per month: 33,100
[Edit: added basic info/clarification/formatting]