1%? Shouldn't your basic uncertainty over models and paradigms be great enough to increase that substantially?
I think it's about a 0.75 probability, conditional upon smarter-than-human AI being developed. Guess I'm kind of an optimist. TL;DR I don't think it will be very difficult to impart your intentions to a sufficiently advanced machine.
I haven't seen any parts of GiveWell's analyses that involve looking for the right buzzwords. Of course, it's possible that certain buzzwords subconsciously manipulate people at GiveWell in certain ways, but the same can be said for any group, because every group has some sort of values.
Why do you expect that to be true?
Because they generally emphasize these values and practices when others don't, and because they are part of a common tribe.
How strongly? ("Ceteris paribus" could be consistent with an extremely weak effect.) Under what criterion for classifying people as EAs or non-EAs?
Somewhat weakly, but not extremely weakly. Obviously there is no single clear criterion; it's just about people's philosophical values and individual commitment. At most, I think that being a solid EA is about as important as having a couple of additional years of relevant experience or schooling.
I do think that if you had a research-focused organization where everyone was an EA, it would be better to hire outsiders at the margin, because of the problems associated with homogeneity. (This wouldn't be the case for community-focused organizations.) I guess it just depends on where they are right now, which I'm not too sure about. If you're only going to have 1 person doing the work, e.g. with an EA fund, then it's better for it to be done by an EA.
I bet that most of the people who donated to GiveWell's top charities were, for all intents and purposes, assuming their effectiveness in the first place. From the donor end, there were assumptions being made either way (and there must be; it's impractical to do all kinds of evaluation on one's own).
I think EA is something very distinct in itself. I do think that, ceteris paribus, it would be better to have a fund run by an EA than a fund not run by an EA. Firstly, I have a greater expectation for EAs to trust each other, engage in moral trades, be rational and charitable about each other's points of view, and maintain civil and constructive dialogue than I do for other people. And secondly, EA simply has the right values. It's a good culture to spread, which involves more individual responsibility and more philosophical clarity. Right now it's embryonic enough that everything is tied closely together. I tentatively agree that that is not desirable. But ideally, growth of thoroughly EA institutions should lead to specialization and independence. This will lead to a much more interesting ecosystem than if the intellectual work is largely outsourced.
It seems to me that GiveWell has already acknowledged perfectly well that VillageReach is not a top effective charity. It also seems to me that there are lots of reasons one might take GiveWell's recommendations seriously, and that getting "particularly horrified" about their decision not to research exactly how much impact their wrong choice didn't have is a rather poor way to conduct any sort of inquiry into the accuracy of organizations' decisions.
In fact, it seems to me that the less intelligent an organism is, the easier its behavior can be approximated with a model that has a utility function!
Only because those organisms have fewer behaviors in general. If you put humans in an environment where their options and sensory inputs were as simple as those experienced by apes and cats, they would probably look like equally simple utility maximizers.
Kantian ethics: do not violate the categorical imperative. It's derived logically from the status of humans as rational autonomous moral agents. It leads to a society where people's rights and interests are respected.
Utilitarianism: maximize utility. It's derived logically from the goodness of pleasure and the badness of pain. It leads to a society where people suffer little and are very happy.
Virtue ethics: be a virtuous person. It's derived logically from the nature of the human being. It leads to a society where people act in accordance with moral ideals.
Etc.
pigs strike a balance between the lower suffering, higher ecological impact of beef and the higher suffering, lower ecological impact of chicken.
This was my thinking for coming to the same conclusion. But I am not confident in it. Just because something minimaxes between two criteria doesn't mean that it minimizes overall expected harm.
All of the architectures assumed by people who promote these scenarios have a core set of fundamental weaknesses (spelled out in my 2014 AAAI Spring Symposium paper).
The idea of superintelligence at stake isn't "good at inferring what people want and then decides to do what people want," it's "competent at changing the environment". And if you program an explicit definition of 'happiness' into a machine, its definition of what it wants - human happiness - is not going to change no matter how competent it becomes. And there is no reason to expect that increases in competency lead to changes in values. Sure, it might be pretty easy to teach it the difference between actual human happiness and smiley faces, but it's a simplified example to demonstrate a broader point. You can rephrase it as "fulfill the intentions of programmers", but then you just kick things back a level with what you mean by "intentions", another concept which can be hacked, and so on.
Your argument for "swarm relaxation intelligence" is strange, as there is only one example of intelligence evolving to approximate the format you describe (not seven billion - human brains are conditionally dependent, obviously), and it's not even clear that human intelligence isn't equally well described as goal-directed agency which optimizes for a premodern environment. The arguments in Basic AI Drives and other places don't say anything about how an AI will be engineered, so they don't say anything about whether it is driven by logic; they only describe how it will behave, and all sorts of agents behave in generally logical ways without having explicit functions to do so. You can optimize without having any particular arrangement of machinery (humans do as well).
Anyway, in the future when making claims like this, it would be helpful to make it clear early on that you're not really responding to the arguments that AI safety research relies upon - you're responding to an alleged set of responses to the particular responses that you have given to AI safety research.
That is why I said what I said. We discussed it at the 2014 Symposium. If I recall correctly, Steve used that strategy (although to be fair I do not know how long he stuck it out). I know for sure that Daniel Dewey used the Resort-to-RL maneuver, because that was the last thing he was saying as I had to leave the meeting.
So you had two conversations. I suppose I'm just not convinced that there is an issue here: I think most people would probably reject the claims in your paper in the first place, rather than accepting them and trying a different route.
I came here to write exactly what gjm said, and your response is only to repeat the assertion "Scenarios in which the AI Danger comes from an AGI that is assumed to be an RL system are so ubiquitous that it is almost impossible to find a scenario that does not, when push comes to shove, make that assumption."
What? What about all the scenarios in IEM or Superintelligence? Omohundro's paper on instrumental drives? I can't think of anything which even mentions RL, and I can't see how any of it relies upon such an assumption.
So you're alleging that deep down people are implicitly assuming RL even though they don't say it, but I don't see why they would need to do this for their claims to work nor have I seen any examples of it.
In Bostrom's dissertation he says it's not clear whether the number of observers or the number of observer-moments is the appropriate reference class for anthropic reasoning.
I don't see how you are jumping to the fourth disjunct though. Like, maybe they run lots of simulations which are very short? But surely they would run enough to outweigh humanity's real history whichever way you measure it. Assuming they have posthuman levels of computational power.
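To illustrate why I think the conclusion holds under either measure (all of the numbers below are made up purely for illustration): suppose real history contains about $10^{11}$ people who each live $L$ observer-moments, and a posthuman civilization runs $10^{6}$ short simulations of $10^{8}$ people whose simulated lives each last only $0.01\,L$. Then

$$\text{simulated observers} = 10^{6} \times 10^{8} = 10^{14} \gg 10^{11}, \qquad \text{simulated observer-moments} = 10^{14} \times 0.01\,L = 10^{12}\,L \gg 10^{11}\,L,$$

so the simulations dominate whichever reference class you pick, as long as enough of them are run.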
In other words, a decision theory, complete with an algorithm (so you can actually use it), and a full set of terminal goals. Not what anyone else means by "moral theory".
When people talk about moral theories they refer to systems which describe the way that one ought to act or the type of person that one ought to be. Sure, some moral theories can be called "a decision theory, complete with an algorithm (so you can actually use it), and a full set of terminal goals," but I don't see how that changes anything about the definition of a moral theory.
To say that you may choose any one of two actions when it doesn’t matter which one you choose since they have the same value, isn’t to give “no guidance”.
Proves my point. That's no different from how most moral theories respond to questions like "which shirt do I wear". So this 'completeness criterion' has to be made so weak as to be uninteresting.
Among hedonistic utilitarians it's quite normal to demand both completeness
Utilitarianism provides no guidance on many decisions: any decision where both actions produce the same utility.
Even if it is a complete theory, I don't think that completeness is demanded of the theory; rather it's merely a tenet of it. I can't think of any good a priori reasons to expect a theory to be complete in the first place.
The question needs to cover how one should act in all situations, simply because we want to answer the question. Otherwise we’re left without guidance and with uncertainty.
Well first, we normally don't think of questions like which clothes to wear as being moral. Secondly, we're not left without guidance when morality leaves these issues alone: we have pragmatic reasons, for instance. Thirdly, we will always have to deal with uncertainty due to empirical uncertainty, so it must be acceptable anyway.
There is one additional issue I would like to highlight, an issue which rarely is mentioned or discussed. Commonly, normative ethics only concerns itself with human actions. The subspecies homo sapiens sapiens has understandably had a special place in philosophical discussions, but the question is not inherently only about one subspecies in the universe. The completeness criterion covers all situations in which somebody should perform an action, even if this “somebody” isn’t a human being. Human successors, alien life in other solar systems, and other species on Earth shouldn’t be arbitrarily excluded.
I'd agree, but accounts of normativity which are mind- or society-dependent, such as constructivism, would have reason to make accounts of ethics for humanity different from accounts of ethics for nonhumans.
It seems like an impossible task for any moral theory based on virtue or deontology to ever be able to fulfil the criteria of completeness and consistency
I'm not sure I agree there. Usually these theories don't because the people who construct them disagree with some of the criteria, especially #1. But it doesn't seem difficult to make a complete and demanding form of virtue ethics or deontology.
See Omohundro's paper on convergent instrumental drives
It seems like hedging is the sort of thing which tends to make the writer sound more educated and intelligent, if possibly more pretentious.
It's unjustified in the same way that vitalism was an unjustified explanation of life: it's purely a product of our ignorance.
It's not. Suppose that the ignorance went away: a complete physical explanation of each of our qualia - "the redness of red comes from these neurons in this part of the brain, the sound of birds flapping their wings is determined by the structure of electric signals in this region," and so on - would do nothing to remove our intuitions about consciousness. But a complete mechanistic explanation of how organ systems work would (and did) remove the intuitions behind vitalism.
I disagree. You've said that epiphenominalists hold that having first-hand knowledge is not causally related to our conception and discussion of first-hand knowledge. This premise has no firm justification.
Well... that's just what is implied by epiphenomenalism, so the justification for it is whatever reasons we have to believe epiphenomenalism in the first place. (Though most people who gravitate towards epiphenomenalism seem to do so out of the conviction that none of the alternatives work.)
Denying it yields my original argument of inconceivability via the p-zombie world.
As I've said already, your argument can't show that zombies are inconceivable. It only attempts to show that an epiphenomenalist world is probabilistically implausible. These are very different things.
Accepting it requires multiplying entities unnecessarily, for if such knowledge is not causally efficacious
Well the purpose of rational inquiry is to determine which theories are true, not which theories have the fewest entities. Anyone who rejects solipsism is multiplying entities unnecessarily.
I previously asked for any example of knowledge that was not a permutation of properties previously observed.
I don't see why this should matter for the zombie argument or for epiphenomenalism. In the post where you originally asked this, you were confused about the contextual usage and meaning behind the term 'knowledge.'
You should take a look at the last comment he made in reply to me, where he explicitly ascribed to me and then attacked (at length) a claim which I clearly stated that I didn't hold in the parent comment. It's amazing how difficult it is for the naive-eliminativist crowd to express cogent arguments or understand the positions they attack; it's a common pattern I've noticed across this forum as well as others.
Not too long ago, it would also have been quite easy to conceive of a world in which heat and motion were two separate things. Today, this is no longer conceivable.
But it is conceivable for thermodynamics to be caused by molecular motion. No part of that is (or ever was, really) inconceivable. It is inconceivable for the sense qualia of heat to be reducible to motion, but that's just another reason to believe that physicalism is wrong. The blog post you linked doesn't actually address the idea of inconceivability.
If something seems conceivable to you now, that might just be because you don't yet understand how it's actually impossible.
No, it's because there is no possible physical explanation for consciousness (whereas there are possible kinetic explanations for heat, as well as possible sonic explanations, possible magnetic explanations, and so on; all of these nonexistent explanations are conceivable in ways that a physical description of sense data is not).
By stipulation, you would have typed the above sentence regardless of whether or not you were actually conscious, and hence your statement does not provide evidence either for or against the existence of consciousness.
And I do not claim that my statement is evidence that I have qualia.
This exact statement could have been emitted by a p-zombie.
See above. No one is claiming that claims of qualia prove the existence of qualia. People are claiming that the experience of qualia proves the existence of qualia.
In particular, for a piece of knowledge to have epistemic value to me (or anyone else, for that matter), I need to have some way of acquiring that knowledge.
We're not talking about whether a statement has "epistemic value to [you]" or not. We're talking about whether it's epistemically justified or not - whether it's true or not.
There exists a mysterious substance called "consciousness" that does not causally interact with anything in the physical universe.
Neither I nor Chalmers describe consciousness as a substance.
Since this substance does not causally interact with anything in the physical universe, and you are part of the physical universe, said substance does not causally interact with you.
Only if you mean "you" in the reductive physicalist sense, which I don't.
This means, among other things, that when you use your physical fingers to type on your physical keyboard the words, "we are conscious, and know this fact through direct experience of consciousness", the cause of that series of physical actions cannot be the mysterious substance called "consciousness", since (again) that substance is causally inactive. Instead, some other mysterious process in your physical brain is occurring and causing you to type those words, operating completely independently of this mysterious substance.
Of course. Physicalists also believe that the exact same "some other mysterious process in your physical brain" causes us to type; they just add the assertion that consciousness is identical to that other process.
Nevertheless, for some reason you appear to expect me to treat the words you type as evidence of this mysterious, causally inactive substance's existence.
As I have stated repeatedly, I don't, and if you'd taken the time to read Chalmers you'd have known this instead of writing an entirely impotent attack on his ideas. Or you could have even read what I wrote. I literally said in the parent comment,
The confusion in your post is grounded in the idea that Chalmers or I would claim that the proof for consciousness is people's claims that they are conscious. We don't (although it could be evidence for it, if we had prior expectations against p-zombie universes which talked about consciousness). The claim is that we know consciousness is real due to our experience of it.
Honestly. How deliberately obtuse could you be, to write an entire attack on an idea which I explicitly rejected in the comment to which you replied? Do not waste my time like this in the future.
I claim that it is "conceivable" for there to be a universe whose psychophysical laws are such that only the collection of physical states comprising my brainstates are conscious, and the rest of you are all p-zombies.
Yes. I agree that it is conceivable.
Now then: I claim that by sheer miraculous coincidence, this universe that we are living in possesses the exact psychophysical laws described above (even though there is no way for my body typing this right now to know that), and hence I am the only one in the universe who actually experiences qualia. Also, I would say this even if we didn't live in such a universe.
Sure, and I claim that there is a teapot orbiting the sun. You're just being silly.
Well, first off, I personally think the Zombie World is logically impossible, since I treat consciousness as an emergent phenomenon rather than a mysterious epiphenomenal substance; in other words, I reject the argument's premise: that the Zombie World's existence is "conceivable".
And yet it seems really quite easy to conceive of a p-zombie. Merely claiming that consciousness is emergent doesn't change our ability to imagine the presence or absence of the phenomenon.
That being said, if you do accept the Zombie World argument, then there's no reason to believe we live in a universe with any conscious beings.
But clearly we do have such a reason: that we are conscious, and know this fact through direct experience of consciousness.
The confusion in your post is grounded in the idea that Chalmers or I would claim that the proof for consciousness is people's claims that they are conscious. We don't (although it could be evidence for it, if we had prior expectations against p-zombie universes which talked about consciousness). The claim is that we know consciousness is real due to our experience of it. The fact that this knowledge is causally inefficacious does not change its epistemic value.
4 is not a correct summary, because consciousness being extra-physical doesn't imply epiphenomenalism; the argument is specifically against physicalism, so it leaves other forms of dualism and panpsychism on the table.
5 and onwards are not correct; Chalmers does not believe that. Consciousness being nonphysical does not imply a lack of knowledge of it, even if our experience of consciousness is not causally efficacious (though again I note that the p-zombie argument doesn't show that consciousness is not causally efficacious; Chalmers just happens to believe that for other reasons).
No part of the zombie argument really makes the claim that people or philosophers are conscious or not, so your analogous reasoning along 5-7 is not a reflection of the argument.
Which seems to suggest that epiphenomenalism either begs the question,
Well, they do have arguments for their positions.
or multiplies entities unnecessarily by accepting unjustified intuitions.
It actually seems very intuitive to most people that subjective qualia are different from neurophysical responses. It is the key issue at stake with zombie and knowledge arguments and has made life extremely difficult for physicalists. I'm not sure in what way it's unjustified for me to have an intuition that qualia are different from physical structures, and rather than epiphenomenalism multiplying entities unnecessarily, it sure seems to me like physicalism is equivocating entities unnecessarily.
So my original argument disproving p-zombies would seem to be on just as solid footing as the original p-zombie argument itself, modulo our disagreements over wording.
Nothing you said indicates that p-zombies are inconceivable or even impossible. What you and EY seem to be saying is that our discussion of consciousness is a posteriori evidence that our consciousness is not epiphenomenal.
In what ways, and for what reasons, did people think that cybersecurity had failed?
Mostly that it's just so hard to keep things secure. Organizations have been trying for decades to ensure security but there are continuous failures and exploits. One person mentioned that one third of exploits take advantage of security systems themselves.
What techniques from cybersecurity were thought to be relevant?
Don't really remember any specifics, but I think formal methods were part of it.
Any idea what Mallah meant by “non-self-centered ontologies”? I am imagining things like CIRL (https://arxiv.org/abs/1606.03137)
I didn't know, to be honest.
Can you briefly define (any of) the following terms (or give your best guess at what was meant by them)?
* meta-machine-learning
* reflective analysis
* knowledge-level redundancy
I remember that knowledge-level redundancy involves giving multiple representations of concepts and things to avoid misspecification/misrepresentation of human ideas. So you can define a concept or an object in multiple ways, and then check that a given object fits all those definitions before being certain about its identity.
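As a rough sketch of how I understood it (this is purely my own illustration, not anything presented at the workshop, and all of the predicates and field names below are hypothetical):

```python
# Illustrative sketch of "knowledge-level redundancy": a concept is specified
# in several independent ways, and an object only counts as an instance of the
# concept if every one of those specifications agrees.

def happy_by_facial_expression(obs):
    # hypothetical check on one representation of "happy human"
    return obs.get("smiling", False)

def happy_by_self_report(obs):
    return obs.get("reports_wellbeing", False)

def happy_by_physiology(obs):
    return obs.get("relaxed_vital_signs", False)

HAPPY_DEFINITIONS = [happy_by_facial_expression,
                     happy_by_self_report,
                     happy_by_physiology]

def matches_concept(obs, definitions):
    """Commit to the classification only if all redundant definitions agree."""
    return all(definition(obs) for definition in definitions)

# A smiley-face sticker satisfies the shallow facial-expression definition,
# but fails the other two, so it is not misidentified as a happy human.
sticker = {"smiling": True}
person = {"smiling": True, "reports_wellbeing": True, "relaxed_vital_signs": True}
print(matches_concept(sticker, HAPPY_DEFINITIONS))  # False
print(matches_concept(person, HAPPY_DEFINITIONS))   # True
```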
Flavor is distinctly a phenomenal property and a type of qualia.
It is metaphysically impossible for distinctly physical properties to differ between two objects which are physically identical. We can't properly conceive of a cookie that is physically identical to an Oreo yet contains different chemicals, is more massive, or possesses locomotive powers. Somewhere in our mental model of such an item, there is a contradiction.
Chalmers does believe that consciousness is a direct product of physical states. The dispute is about whether consciousness is identical to physical states.
Chalmers does not believe that p-zombies are possible in the sense that you could make one in the universe. He only believes it's possible that under a different set of psychophysical laws, they could exist.
Yes, this is called qualia inversion and is another common argument against physicalism. There's a detailed discussion of it here: http://plato.stanford.edu/entries/qualia-inverted/
Unlike the other points which I raised above, this one is semantic. When we talk about "knowledge," we are talking about neurophysical responses, or we are talking about subjective qualia, or we are implicitly combining the two together. Epiphenomenalists, like physicalists, believe that sensory data causes the neurophysical responses in the brain which we identify with knowledge. They disagree with physicalists because they say that our subjective qualia are epiphenomenal shadows of those neurophysical responses, rather than being identical to them. There is no real world example that would prove or disprove this theory because it is a philosophical dispute. One of the main arguments for it is, well, the zombie argument.
is why or if the p-zombie philosopher postulates that other persons have consciousness.
Because consciousness supervenes upon physical states, and other brains have similar physical states.
This argument is not going to win over their heads and hearts. It's clearly written for a reductionist reader, who accepts concepts such as Occam's Razor and knowing-what-a-correct-theory-looks-like.
I would suggest that people who have already studied this issue in depth would have other reasons for rejecting the above blog post. However, you are right that philosophers in general don't use Occam's Razor as a common tool and they don't seem to make assumptions about what a correct theory "looks like."
If conceivability does not imply logical possibility, then even if you can imagine a Zombie world, it does not mean that the Zombie world is logically possible.
Chalmers does not claim that p-zombies are logically possible; he claims that they are metaphysically possible. Chalmers already believes that certain atomic configurations necessarily imply consciousness, by dint of psychophysical laws.
The claim that certain atomic configurations just are consciousness is what the physicalist claims, but that is what is contested by knowledge arguments: we can't really conceive of a way for consciousness to be identical with physical states.
I don't believe that I experience qualia.
Wait, what?
3 doesn't follow from 2, it follows from a contradiction between 1+2.
Well, first of all, 3 isn't a statement, it's saying "consider a world where..." and then asking a question about whether philosophers would talk about consciousness. So I'm not sure what you mean by suggesting that it follows or that it is true.
1 and 2 are not contradictory. On the contrary, 1 and 2 are basically saying the exact same thing.
1 states that consciousness has no effect upon matter, and yet it's clear from observation that the concept of subjectivity only follows if consciousness can affect matter,
This is essentially what epiphenomenalists deny, and I'm inclined to say that everyone else should deny it too. Regardless of what the truth of the matter is, surely the mere concept of subjectivity does not rely upon epiphenomenalism being false.
we only have knowledge of subjectivity because we observe it first-hand.
This is confusing the issue; like I said: under the epiphenomenalist viewpoint, the cause of our discussions of consciousness (physical) is different from the justification for our belief in consciousness (subjective). Epiphenomenalists do not deny that we have first-hand experience of subjectivity; they deny that those experiences are causally responsible for our statements about consciousness.
and epiphenomenalism can be discarded using Occam's razor.
There are many criteria by which theories are judged in philosophy, and parsimony is only one of them.
Except the zombie world wouldn't have feelings and consciousness, so your rebuttal doesn't apply.
Nothing in my rebuttal relies on the idea that zombies would have feelings and consciousness. My rebuttal points out that zombies would be motivated by the idea of feelings and consciousness, which is trivially true: humans are motivated by the idea of feelings and consciousness, and zombies behave in the same way that humans do, by definition.
That's an assertion, not an argument.
But it's quite obviously true, because we talk about rich inner lives as the grounding for almost all of our moral thought, and then act accordingly, and because empathy relies on being able to infer rich inner lives among other people. And as noted earlier, whatever behaviorally motivates humans also behaviorally motivates p-zombies.
Indeed. The condensed argument against p-zombies:
I would hope not. 3 is entirely conceivable if we grant 2, so 4 is unsupported, and nothing that EY said supports 4. 5 does not follow from 3 or 4, though it's bundled up in the definition of a p-zombie and follows from 1 and 2 anyway. In any case, 6 does not follow from 5.
What EY is saying is that it's highly implausible for all of our ideas and talk of consciousness to have come to be if subjective consciousness does not play a causal role in our thinking.
Except such discussions would have no motivational impact.
Of course they would - our considerations of other people's feelings and consciousness changes our behavior all the time. And if you knew every detail about the brain, you could give an atomic-level causal account as to why and how.
A "rich inner life" has no relation to any fact in a p-zombies' brain, and so in what way could this term influence their decision process?
The concept of a rich inner life influences decision processes.
Well that's answered by what I said about psychophysical laws and the evolutionary origins of consciousness. What caused us to believe in consciousness is not (necessarily) the same issue as what reasons we have to believe it.
This was longer than it needed to be, and in my opinion, somewhat mistaken.
The zombie argument is not an argument for epiphenomenalism, it's an argument against physicalism. It doesn't assume that interactionist dualism is false, regardless of the fact that Chalmers happens to be an epiphenomenalist.
Chalmers furthermore specifies that this true stuff of consciousness is epiphenomenal, without causal potency—but why say that?
Maybe because interactionism violates the laws of physics and is somewhat at odds with everything we (think we) know about cognition. There may be other arguments as well. It has mostly fallen out of favor. I don't know the specific reasons why Chalmers rejects it.
Once you see the collision between the general rule that consciousness has no effect, to the specific implication that consciousness has no effect on how you think about consciousness (in any way that affects your internal narrative that you could choose to say out loud), zombie-ism stops being intuitive. It starts requiring you to postulate strange things.
In the epiphenomenalist view, for whatever evolutionary reason, we developed to have discussions and beliefs in rich inner lives. Maybe those thoughts and discussions help us with being altruistic, or maybe they're a necessary part of our own activity. Maybe the illusion of interactionism is necessary for us to have complex cognition and decisionmaking.
Also in the epiphenomenalist view, psychophysical laws relate mental states to neurophysical aspects of our cognition. So for some reason there is a relation between acting/thinking of pain, and mental states which are painful. It's not arbitrary or coincidental because the mental reaction to pain (dislike/avoid) is a mirror of the physical reaction to pain (express dislike/do things to avoid it).
But Chalmers just wrote all that stuff down, in his very physical book, and so did the zombie-Chalmers.
Chalmers isn't denying that the zombie Chalmers would write that stuff down. He's denying that its beliefs would be justified. Maybe there's a version of me in a parallel universe that doesn't know anything about philosophy but is forced to type certain combinations of letters at gunpoint - that doesn't mean that I don't have reasons to believe the same things about philosophy in this universe.
In fairness, I didn't directly ask any of them about it, and it wasn't really discussed. There could have been some who had read the relevant work, and many who believed it to be reasonable, but just didn't happen to speak up during the presentations or in any of the conversations I was in.
There is no objective absolute morality that exists in a vacuum.
No, that's highly contentious, and even if it's true, it doesn't grant a license to promote any odd utility rule as ideal. The anti-realist also may have reason to prefer a simpler version of morality.
Utility theory, prisoner's dilemma, Occam's razor, and many other mathematical structures put constraints on what a self-consistent, formalized morality has to be like. But they can't and won't pinpoint a single formula in the huge hypothesis space of morality, so we'll always have to rely heavily on our intuitive morality at the end. And this one isn't simple, and can't be made that simple.
There are much more relevant factors in building and choosing moral systems than those mathematical structures, whose relevance to moral epistemology is dubious in the first place.
That's the whole point of the CEV, finding a "better morality", that we would follow if we knew more, were more what we wished we were, but that remains rooted in intuitive morality.
It's not obvious that we would be more likely to believe anything in particular if we knew more and were more what we wished we were. CEV is a nice way of making different people's values and goals fit together, but it makes no sense to propose it as a method of actual moral epistemology.
Would you accept a lottery where there was 1 ticket to maintain your life as a satisfied cookie utility monster and hundreds of trillions of tickets to become a miserable enslaved cookie maker?
Or, after rational reflection and experiencing the alternate possibilities, would you rather prefer a guaranteed life of threshold satisfaction?
The problem is that by doing that you are making your position that much more arbitrary and contrived. It would be better if we could find a moral theory that has a solid, parsimonious basis, and it would be surprising if the fabric of morality involved complicated formulas.
Thanks. I will give some of those articles a look when I have the chance. However, it isn't true that every activity is competitive in nature. Many projects are cooperative, in which case it's not necessarily a problem if you and other people are taking similar approaches and doing them well. We also shouldn't overestimate the competition and assume that they are going to be applying probabilistic reasoning, when in reality we can still outperform by applying basic rules of rationality.
So for us to understand what you're even trying to say, you want us to read a bunch of articles, talk to one of your friends, listen to a speech, and only then will we become EAs good enough for you? No thanks.
This is very old but I just wanted to say that I am basically considering changing my college choice due to finding out about this research. Thanks so much for putting this post up and spreading awareness.
Maybe I am unfamiliar with the specifics of simulated reality. But I don't understand how it is assumed (or even probable, given Occam's Razor) that if we are simulated then there are copies of us. What is implausible about the possibility that I'm in a simulation and I'm the only instance of me that exists?
Sorry if this topic has been beaten to death already here. I was wondering if anyone here has seen this paper and has an opinion on it.
The abstract: "This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed."
Quite simple, really, but I found it extremely interesting.
http://people.uncw.edu/guinnc/courses/Spring11/517/Simulation.pdf
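For what it's worth, the core of the argument comes down to a single fraction (this is my paraphrase of the formula in the paper; see the paper itself for the exact notation):

$$f_{\mathrm{sim}} = \frac{f_p \, \bar{N} \, \bar{H}}{f_p \, \bar{N} \, \bar{H} + \bar{H}}$$

where $f_p$ is the fraction of human-level civilizations that reach a posthuman stage, $\bar{N}$ is the average number of ancestor-simulations run by such civilizations, and $\bar{H}$ is the average number of people who live before a civilization reaches that stage. Unless $f_p \bar{N}$ is tiny (propositions 1 or 2), $f_{\mathrm{sim}}$ is close to 1 (proposition 3).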
Hi, I've been intermittently lurking here since I started reading HPMOR. So now I joined and the first thing I wanted to bring up is this paper which I read about the possibility that we are living in a simulation. The abstract:
"This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed."
Quite simple, really, but I found it extremely interesting.
http://people.uncw.edu/guinnc/courses/Spring11/517/Simulation.pdf
I find myself to have a much clearer and cooler head when it comes to philosophy and debate around the subject. Previously I had a really hard time squaring utilitarianism with the teachings of religion, and I ended up being a total heretic. Now I feel like everything makes sense in a simpler way.