You could put strict statistical definitions around it if you wanted, but the general idea is, 'infants grow up to be self-aware adults'.
This may not always be true for exotic species. Plenty of species in nature, for example, reproduce by throwing out millions of eggs, spores, or what have you, of which only a small fraction grow up to be adults. Ideally, any rule you come up with should be universal, regardless of the form of intelligence.
At some point, some computer programs would have to be considered to be people and have a right to existence. But at what stage of development would that happen?
As for the first part, I would say that it's fairly common for an individual and a society to not have perfectly identical values or ethical rules. Should I be saying 'morals' for the values of society instead?
I would hope that ethical vegetarians can at least give me the reasons for their boundaries. If they're not eating meat because they don't want animals to suffer, they should be able to say where they draw the line for when the capacity to suffer begins.
You do bring up a good point - most psychologists would agree that babies go through a period before they become truly 'self-aware', and I have a great deal of difficulty conceiving of a human society that would advocate 'fresh baby meat' as ethical. Vat-grown human meat, I can see happening eventually. Would you say the weight there is more on the side of 'this being will, given standard development, gain self-awareness', or on the side of 'other self-aware beings are strongly attached to this being and would suffer emotionally if it died'? The second one seems to be more the way things currently function - farmers remind their kids not to name the farm animals, because they might end up on their plates later. But I think the first one can be more consistently applied, particularly if you have non-human (particularly non-cute) intelligences.
Let's assume society decides that eating meat from animals lacking self-awareness is ethical, that anything with self-awareness is not ethical to eat, and that we have a reliable test to tell the difference. Is it ethical to deliberately breed tasty animals to lack self-awareness, either before or after their species has developed self-awareness?
My initial reaction to the latter is 'no, it's not ethical, because you would necessarily be using force on self-aware entities as part of the breeding process'. The first part of the question seems to lean towards 'yes', but this response definitely sets off an 'ugh' field in my mind just attempting to consider the possible implications, so I'm not confident at all in my line of reasoning.
Thoughts from others?
It's not so much a matter of disagreement as being able to come up with solid counterexamples that a theoretical 'common person' would agree with.
For instance: if you want to get someone a gift for their birthday, it is a common social convention that the exact gift should be kept secret from the recipient until their birthday.
As ChristianKl indicated, sometimes you must keep secrets because of social or professional obligations. A good example would be doctors, who are required (by law, no less) to protect patient records from unauthorized access.
Normally, people dismiss these sorts of arguments with a simple, 'Well, of course, except for that.' As we move into a future where technology makes surveillance pervasive, though, is the only privacy we have left going to be found in doctors' offices?
I've been thinking about this statement in particular: 'If you've done nothing wrong, you have nothing to hide.' People naturally seem to gravitate to the logical contrapositive: if P, then Q; therefore, if !Q, then !P. Here P is 'you've done nothing wrong' and Q is 'you have nothing to hide', so the contrapositive reads: if you have something to hide, then you MUST have done something wrong. From there, they infer that anyone who even tries to hide anything MUST be doing something wrong.
It seems obvious to me, however, that not all people who attempt to hide things have done something wrong. Where is the logical error? Is it in the inversion of 'nothing' and 'something'? It's been a long time since my symbolic logic courses on negating universally quantified statements.
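For my own sanity, here's a quick brute-force sketch (just my own check, with P and Q as above) comparing the original statement with its contrapositive and its inverse:

```python
from itertools import product

def implies(a, b):
    # Material implication: "a -> b" is false only when a is true and b is false.
    return (not a) or b

# P = "you have done nothing wrong", Q = "you have nothing to hide".
print("P     Q     P->Q  !Q->!P  !P->!Q")
for P, Q in product([True, False], repeat=2):
    original = implies(P, Q)                # the slogan itself
    contrapositive = implies(not Q, not P)  # "something to hide -> something wrong"
    inverse = implies(not P, not Q)         # "something wrong -> something to hide"
    print(P, Q, original, contrapositive, inverse)
```

If I'm reading my own table right, the contrapositive column always matches the original, while the inverse column doesn't - which is at least where I'd start looking.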
JMiller's statements regarding 'prerequisites' imply that he is looking at college-level courses in computer programming, and trying to pass the prerequisite classes to get access to the advanced programming classes in a C.S. degree. As a C.S. major, I can assure you that Calculus is considered a prerequisite to many programming courses. Computer Science is (still!) considered to be primarily a Math degree.
@JMiller: I regret to inform you that RolfAndreassen is correct in most other regards, however. If you want to learn computer programming, do programming. Academic Computer Science is purely about the theory of computation - I managed to earn a degree in C.S. with less knowledge of how to program computers than when I started, because the entire degree is made up of math theorems stacked on top of each other. I know how to design a computer from transistors and write a programming language and operating system for it - you might be surprised how seldom that actually comes up in the real world. ;)
If you do want to learn Theory, then by all means, focus on math. If you want to learn Programming, then you'll find symbolic logic more helpful - my Philosophy 101 courses on symbolic logic are far, far more helpful to me in my programming (even today!) than any of my C.S. courses ever were.
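As a small, made-up illustration (the function names here are hypothetical examples of mine, not from any course), De Morgan's laws from symbolic logic show up constantly when untangling conditionals:

```python
# Hypothetical example: simplifying a negated compound condition with De Morgan's laws.
def should_retry_verbose(connected, errored):
    return not (connected and not errored)

def should_retry_simplified(connected, errored):
    # not (A and not B)  ==  (not A) or B
    return (not connected) or errored

# The two forms agree on every combination of inputs.
assert all(
    should_retry_verbose(c, e) == should_retry_simplified(c, e)
    for c in (True, False)
    for e in (True, False)
)
```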
I've found https://www.khanacademy.org/cs to be a highly valuable resource if you want to learn programming. They've got some very potent innovations there, such as an in-browser programming environment. It's very nifty for beginning programmers. I'd recommend checking it out.