Let's say I wanted to solve my dating issues. I present the following approaches:
Approach 1: I endeavor to solve the general problem of human sexual attraction, plug myself into the parameters to figure out what I'd be most attracted to, determine the probabilities that individuals I'd be attracted to would also be attracted to me, then devise a strategy for finding someone with maximal compatibility.
Approach 2: I take an iterative approach: I devise a model this afternoon, test it this evening, then analyze the results tomorrow morning and make the necessary adjustments.
Which approach is more rational? Given sufficient time, Approach 1 will yield the optimal solution. Approach 2 has to deal with the problem of local maxima and in the long run is likely to end up worse than Approach 1. An immortal living in an eternal universe would probably say that Approach 1 is vastly superior. Humans, on the other hand, will die well before Approach 1 bears fruit.
While rationality can lead to faster improvement using Approach 2, a rationalist might try Approach 1, whereas a non-rationalist is unlikely to use Approach 1 at all.
Simple amendments to the general problem such as "find the best way to get the best date for next Saturday" will likely lead to solutions making heavy use of deception. If you want to exclude the Dark Arts from the solution space, then that's going to limit what you can accomplish. The short-term drawbacks of insisting on truth and honesty are well-documented.
I did overlook the definition of H. Apologies.
The point is that the behavior of H is paradoxical. We can prove that it can't return true or false without contradiction. But if that's provable, that also creates a contradiction, since H can prove it too.
More precisely, H will encounter a proof that the question is undecidable. It then runs into the following two if statements:
if check_if_proof_proves_x_halts(proof, x, i)
if check_if_proof_proves_x_doesnt_halt(proof, x, i)
Both return "false", so H moves into the next iteration of the while loop. H will generate undecidability proofs, but as implemented it will merely discard them and continue searching. Since such proofs do not cause H to halt, and since there are no proofs that the program halts or does not, then H will run forever.
Why can't I be unsure about the truth value of something just because it's a logical impossibility?
If you're using logic to determine truth values, then a logical impossibility is false. The reason is that if something is logically impossible, then its existence would create a contradiction and so violate the Law of Noncontradiction.
From the link:
That means that we can’t actually prove that a proof doesn’t exist, or it creates a paradox. But we did prove it! And the reasoning is sound! Either H returns true, or false, or loops forever. The first two options can’t be true, on pain of paradox. Leaving only the last possibility. But if we can prove that, so can H. And that itself creates a paradox.
H proves that it can't decide the question one way or the other. The assumption that H can only return TRUE or FALSE is flawed: if a proof exists that something is undecidable, then H would need to be able to return "undecidable".
This example seems to verify the halting problem: you came up with an algorithm that tries to decide whether a program halts, and then came up with an input for which the algorithm can't decide one way or another.
Isn't the obvious answer, "because, assuming your life isn't unbearably bad, living the next 1,000 years has higher expected utility than not living the next 1,000 years?"
We don't have accurate predictions about what the next 1,000 years are going to look like. Any probability calculation we make will be mostly influenced by our priors; in other words, an optimist would compute a good expected utility while a pessimist would reach the opposite result.
Responses like yours confuse me because they seem to confidently imply that the future will be incredibly boring or something.
I'm saying that if there's nothing impressive about my life in the present or the past, then I'm not one to expect much more out of the future. Some people have a cause or goal and would like to live long enough to see it through--good for them, I say.
I harbor no such vision myself. It's possible that something like that comes up at a later time, and over the course of 1,000 years (say) it seems rather likely that at some point I'd encounter that feeling. It's equally likely that something unavoidably bad comes up. On balance, I'm indifferent.
Honestly, I don't even find the prospect of living another decade all that exciting. If it's anything like its predecessor, my expectations are low. If I were to suddenly die in that time I wouldn't think it a big loss (although my family might not like it so much), but if I'm alive I'll probably manage to find some way to pass the time.
If you asked me whether I'd like to live another thousand years (assuming no physical or mental degradation), I'd ask myself "Why would I want to live 1,000 years?" and, failing to find an answer, decline. If I were told that I was going to live that long whether I liked it or not, I'd treat it more as a thing to be endured than as an exciting opportunity. The best I'd expect is to spend the time reasonably content.
Needless to say, I wouldn't make any great sacrifice today for that kind of longevity. If I avoid wanton hedonism, it's because that lifestyle can lead to accelerated degradation and the associated problems. Concern about longevity hardly enters into my calculations.
Confidence is based on your perception of yourself. When someone tells you to be more confident, it's probably because they believe your perception of yourself is worse than reality. Excessively low confidence is no less of a delusion than excessively high confidence.
Of course. I seem to have overlooked that.
Used rot13 to avoid spoilers:
K unf qrafvgl N rkc(-k^2 / 32) jurer N = 1/(4 fdeg(2*cv))
L unf qrafvgl O rkc(-l^2 / 2) jurer O = 1/fdeg(2cv)
Fvapr gurl'er vaqrcraqrag gur wbvag qrafvgl vf gur cebqhpg bs gur vaqvivqhny qrafvgvrf, anzryl NO * rkc(-k^2 / 32 - l^2 / 2). Urapr, gur pbagbhe yvarf fngvfsl -k^2 / 32 - l^2 / 2 = pbafgnag. Nofbeovat gur artngvir vagb gur pbafgnag, jr trg k^2 / 32 + l^2 / 2 = pbafgnag, juvpu vf na ryyvcfr jvgu nkrf cnenyyry gb gur pbbeqvangr nkrf.
Guvf vf gnatrag gb gur yvar K = 4 ng gur cbvag (4,0). Fhofgvghgvat vagb gur rdhngvba sbe gur ryyvcfr, jr svaq gung gur pbafgnag vf 1/2, fb k^2 / 32 + l^2 / 2 = 1/2. Frggvat k = 0 naq fbyivat sbe l, jr svaq gung l = 1.
You're conflating weight loss and nutrition throughout.
Short term, the body is resilient enough that you can go on a crash diet to quickly drop a few pounds without worrying about nutrition. On the other hand, nutrition is an essential consideration in any weight-loss plan that's going to last many months. That's why I associate the two.
But, again, it isn't the aim that a diet should involve no hunger when compared to your current meal plan. That is just plain silly and irrational.
Certain approaches purport to do this very thing by means of suppressing the appetite so that one naturally eats less. Consider, for example, the Shangri-La diet.
I will grant that if one wants to lose 2+ pounds a week over a long period of time, then the pangs of hunger are unavoidable.
There seems to be this idea floating around that you can diet, lose lots of weight, and not have it consume some bandwidth in your life. BS.
Agreed. This is especially true if there's a psychological component to the initial weight gain. For example, stress eaters will have to either avoid stress or figure out a new coping mechanism if they want to lose weight and maintain the weight loss.
Naive calorie restriction is just regular calorie restriction with a negative name. Good eating habits entail calorie control. That's not naive. It's basic.
By "naive" I just mean calorie restriction without any other consideration. For example, a diet where one replaces a large pizza, a 2-Liter bottle of Coca-Cola, and a slice of chocolate cake with half a large pizza,1 Liter of Coca-Cola, and a smaller slice of chocolate cake is what I'd consider naive calorie restriction. I don't know that anyone would seriously argue that the restricted version even remotely resembles good eating habits.
Lest you accuse me of straw-manning, let it be noted that many obese people subsist on a diet consisting of fast food and junk food. In fact, malnutrition is a very real problem among the obese. That's right: you can eat 5k+ Calories a day and still exhibit signs of malnutrition if all you eat is junk. When I speak of instilling good eating habits, I have in mind people who exhibit severe ignorance or misconception of basic nutrition.
Avoiding carbs is a good one because it will automatically eliminate 25-60% of an individual's daily calorie consumption. That's all it will do.
A low-carb diet is not just a matter of eating what you normally eat, minus the carbohydrates. That's going to end about as well as a vegetarian diet where you simply cut out the meat from your normal diet. You run into a micronutrient deficiency that can end up causing problems if the new diet is sustained for several months.
Yeah, but strawman. Dieting involves some hunger. It's not going to kill you. It's just part of the adjustment to a more healthy level of consumption.
It's an empirical fact that some foods are more filling than others and keep you feeling full for a longer period of time, even if the number of calories consumed is the same. That's why people care about the glycemic index. I have tried losing weight several times over the last seven years or so. There are diets where you feel satisfied most of the time, then there are diets where you finish a meal feeling as hungry as you did when you started. The psychological difference between the two is quite profound and hardly warrants the charge of "strawman".
The very idea that 1 may be not-quite-certain is more than a little baffling, and I suspect is the heart of the issue.
If 1 isn't quite certain then neither is 0 (if something happens with probability 1, then the probability of it not happening is 0). It's one of those things that pops up when dealing with infinity.
It's best illustrated with an example. Let's say we play a game where we flip a coin and I pay you $1 if it's heads and you pay me $1 if it's tails. With probability 1, one of us will eventually go broke (see Gambler's ruin). It's easy to think of a sequence of coin flips where this never happens; for example, if heads and tails alternated. The theory holds that such a sequence occurs with probability 0. Yet this does not make it impossible.
It can be thought of as the result of a limiting process. If I looked at sequences of N coin flips, counted the ones where no one went broke and divided this by the total number of possible sequences, then as I let N go to infinity this ratio would go to zero. This event occupies a region with area 0 in the sample space.
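As a rough illustration of that limiting process (my own sketch, with an arbitrary starting bankroll of $10 each), you can estimate the shrinking fraction by simulation:

import random

def neither_goes_broke(n_flips, bankroll=10):
    a, b = bankroll, bankroll
    for _ in range(n_flips):
        if random.random() < 0.5:
            a, b = a + 1, b - 1   # heads: I pay you $1
        else:
            a, b = a - 1, b + 1   # tails: you pay me $1
        if a == 0 or b == 0:
            return False
    return True

trials = 2000
for n in (100, 1000, 10000):
    frac = sum(neither_goes_broke(n) for _ in range(trials)) / trials
    print(n, frac)   # the estimated fraction shrinks toward 0 as n grows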
The reason low carb diets lead to weight loss is because they restrict calories. I'm aware of many dieting tricks that can assist, but a calorie deficit must be created in order for weight to be lost.
No one in this thread is disputing that you need a calorie deficit to lose weight. My contention is that this is merely the beginning, not the end. Let's refer to the following passage from the linked article:
Translation of our results to real-world weight-loss diets for treatment of obesity is limited since the experimental design and model simulations relied on strict control of food intake, which is unrealistic in free-living individuals.
A diet should be realistic for free-living individuals. An obese person who wants to lose 50+ lb. could expect to be at it for the better part of a year. A diet that leaves you hungry all day is doomed to fail: it's unrealistic to expect pure willpower to last that long. That is the point of my post about hunger control. Disregarding it or dismissing it as a mere trick is to ignore that a very important part of dieting is making sure the dieter sticks to the diet.
My point was to specifically disparage diets like the Atkins Diet. It does nothing apart from restricting calories, yet libraries have been written about the magic of how and why it works. It's all just noise aimed at selling books, etc. to people who are looking for help.
Quite the contrary. The Atkins Diet is not just about losing the weight. It also includes a plan to keep it off. Maintaining weight loss is generally harder than losing the weight in the first place. Yo-yo dieting is a very real problem. The problem with naive calorie restriction is that it doesn't instill good eating habits that can be maintained once the weight-loss period ends. The Atkins Diet addresses this and is designed to ease one into eating habits that will maintain the weight loss.
Hunger is the big diet killer. It's very hard to maintain a diet if you walk around hungry all day and eat meals that fail to sate your appetite. Losing weight is a lot easier once you find a way to manage your hunger. One of the strengths of the low-carb diet is that fat and protein are a lot better than carbs at curbing hunger.
So how to solve the problem of scientific misconduct? I don't have any good answers. I can think of things like "Stop awarding people for mere number of publications" and "Gauge the actual impact of science rather than empty metrics like number of citations or impact factor." But I can't think of any good way to do these things. Some alternatives - like using, for instance, social media to gauge the importance of a scientific discovery - would almost certainly lead to a worse situation than we have now.
If you go up the administration, at some point you reach someone who simply isn't equipped to evaluate a scientist's work. This may even just be the department head not being familiar with some subfield. Or it might be the Dean, trying to evaluate the relative merits of a physicist and a chemist. It's the rare person who knows enough about both fields to render good judgment. That's where metrics come in. It's a lot easier if you can point to some number as the basis for a decision. Even if it's agreed that number of publications or impact factor aren't good numbers to use, they're still convenient.
Situations where an event will definitely or definitely not occur don't seem to be consistent with the idea of randomness which I've understood probability to revolve around.
"Event" is a very broad notion. Let's say, for example, that I roll two dice. The sample space is just a collection of pairs (a, b) where "a" is what die 1 shows and "b" is what die 2 shows. An event is any sub-collection of the sample space. So, the event that the numbers sum to 7 is the collection of all such pairs where a + b = 7. The probability of this event is simply the fraction of the sample space it occupies.
If I rolled eight dice, they could never sum to seven, and I'd say that that event occurs with probability 0. If I secretly rolled an unknown number of dice, you could reasonably ask me the probability that they sum to seven. If I answer "0", that just tells you that I didn't roll anywhere from two to seven dice. It doesn't make the process less random nor the question less reasonable.
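A quick way to see the fractions concretely (my own sketch, brute-forcing the sample space):

from itertools import product

def prob_sum_is_7(num_dice):
    outcomes = list(product(range(1, 7), repeat=num_dice))   # the whole sample space
    event = [o for o in outcomes if sum(o) == 7]             # the event "the dice sum to 7"
    return len(event) / len(outcomes)

print(prob_sum_is_7(2))   # 6/36, about 0.167
print(prob_sum_is_7(8))   # 0.0 -- eight dice sum to at least 8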
If you treat an event as some question you can ask about the result of a random process, then 1 and 0 make a lot more sense as probabilities.
For the mathematical theory of probability, there are plenty of technical reasons why you want to retain 1 and 0 as probabilities (and once you get into continuous distributions, it turns out that probability 1 just means "almost certain").
Would I pay $24k to play a game where I had a 33/34 probability of winning an extra $3k? Let's consult our good friend the Kelly Criterion.
We have a bet that pays 1/8:1 with a 33/34 probability of winning, so Kelly suggests staking ~73.5% of my bankroll on the bet. This means I'd have to have an extra ~$8.7k I'm willing to gamble with in order to choose 1b. If I'm risk-averse and prefer a fractional Kelly scheme, I'd need an extra ~$20k for a three-fourths Kelly bet and an extra ~$41k for a one-half Kelly bet. Since I don't have that kind of money lying around, I choose 1a.
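For anyone who wants to check the arithmetic, here's the calculation written out (the bet framing is mine; the usual Kelly formula is f* = p - q/b):

p, q = 33/34, 1/34      # probability of winning / losing the bet
b = 3000 / 24000        # net odds: risk $24,000 to win $3,000, i.e. 1/8 : 1
f_star = p - q / b      # full Kelly fraction, about 0.735

stake = 24000
for label, multiplier in (("full Kelly", 1.0), ("3/4 Kelly", 0.75), ("1/2 Kelly", 0.5)):
    bankroll = stake / (multiplier * f_star)   # bankroll for which a $24k stake is the right size
    print(label, round(bankroll - stake))      # extra money needed beyond the $24k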
In case 2, we come across the interesting question of how to analyze the costs and benefits of trading 2a for 2b. In other words, if I had a voucher to play 2a, when would I be willing to trade it for a voucher to play 2b? Unfortunately, I'm not experienced with such analyses. Qualitatively, it appears that if money is tight then one would prefer 2a for the greater chance of winning, while someone with a bigger bankroll would want the better returns on 2b. So, there's some amount of wealth where you begin to prefer 2b over 2a. I don't find it obvious that this should be the same as the boundary between 1a and 1b.
This is a problem because the 2s are equal to a one-third chance of playing the 1s. That is, 2A is equivalent to playing gamble 1A with 34% probability, and 2B is equivalent to playing 1B with 34% probability.
Equivalence is tricky business. If we look at the winnings distribution over several trials, the 1s look very different from the 2s and it's not just a matter of scale. The distributions corresponding to the 2s are much more diffuse.
Surely, the certainty of having $24,000 should count for something. You can feel the difference, right? The solid reassurance?
A certain bet has zero volatility. Since much of the theory of gambling has to do with managing volatility, I'd say certainty counts for a lot.
(3) Having agreed to do something silly (like wearing a uniform) may put you in a frame of mind where you're more likely to agree to other silly things the leader of the group asks you to do later.
Why are uniforms necessarily silly? Let's take military dress uniforms. In the US, you can tell a military member's rank and branch of service, and even get an idea of their service record, just by looking at their dress uniform. To insiders, this can be rapidly gleaned looking at someone from across a room. With millions of members, individuals cannot possibly be expected to know everybody else and so the uniform serves a useful function.
Recently came across Valiant's A Theory of the Learnable. Basically, it covers a method of machine learning in the following way: if there's a collection of objects which either possess some property P or do not, then you can teach a machine to recognize this with arbitrarily small error simply by presenting it with randomly selected objects and saying whether they possess P. The learner may give false negatives, but will not give a false positive. Perhaps the following passage best illustrates the concept:
Consider a world containing robots and elephants. Suppose that one of the robots has discovered a recognition algorithm for elephants that can be meaningfully expressed in k-conjunctive normal form. Our Theorem A implies that this robot can communicate its algorithm to the rest of the robot population by simply exclaiming "elephant" whenever one appears.
The mathematics are done in terms of Boolean functions and "k-conjunctive normal form" is a certain technical condition.
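To give a feel for the mechanism without reproducing Valiant's k-CNF algorithm, here's a toy version of the same idea for a bare conjunction of Boolean variables: start from the most restrictive hypothesis and delete whatever the positive examples contradict. The hypothesis it produces can give false negatives but never a false positive.

import random

def learn_conjunction(positives, n_vars):
    # Keep only the variables that were true in every positive example.
    kept = set(range(n_vars))
    for example in positives:
        kept = {i for i in kept if example[i]}
    return kept                      # hypothesis: "all of these variables are true"

def predict(hypothesis, x):
    return all(x[i] for i in hypothesis)

# Hidden target concept (never shown to the learner): x0 AND x2.
target = lambda x: x[0] and x[2]
n_vars = 5
positives = []
while len(positives) < 50:
    x = tuple(random.random() < 0.5 for _ in range(n_vars))
    if target(x):
        positives.append(x)

h = learn_conjunction(positives, n_vars)
print(h)
# h always retains variables 0 and 2, so anything it accepts really satisfies
# the target; stray extra variables can only cause false negatives.
print(predict(h, (True, False, True, False, False)))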
What struck me was that the learning could take place without the learner knowing the definition of the concept to be learned. That a thing could be identified with probability arbitrarily close to 1 without the learner necessarily being able to formulate a definition. I was reminded of the judge who said that he could not define pornography, but he knew it when he saw it. There are plenty of other concepts I can think of where identification is easy (most of the time at least) but which defy precise definition.
I'm usually wary of applying scientific results to philosophy, especially where I'm not an expert. Any expert input on whether this is a fair interpretation of the subject would be appreciated.
High fructose corn syrup and its ilk have been rather devastating.
You use the Inclusion-Exclusion Principle.
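For reference (the general statement, since the original question isn't quoted here): P(A or B) = P(A) + P(B) - P(A and B), and for three events P(A or B or C) = P(A) + P(B) + P(C) - P(A and B) - P(A and C) - P(B and C) + P(A and B and C). In general you alternate adding and subtracting the probabilities of the intersections.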
I don't want to involve myself in an endless topic of debate by discussing the treatment of slaves, towards whom we Romans are exceptionally arrogant, harsh, and insulting. But the essence of the advice I'd like to give is this: treat your inferiors in the way in which you would like to be treated by your own superiors. And whenever it strikes you how much power you have over your slave, let it also strike you that your own master has just as much power over you. "I haven't got a master," you say. You're young yet; there's always the chance that you'll have one.
--Seneca, Letter XLVII
1) Can aspects of grooming as opposed to selecting/testing be steelmanned, are there corner cases when it could be better?
How about selecting someone to groom? There was a line of Roman Emperors--Nerva, Trajan, Hadrian, Antoninus Pius, and Marcus Aurelius--remarkable in that the first four had no children and decided to select someone of ability, formally adopt him, and groom him as a successor. These are known as the Five Good Emperors and their rule is considered to be the height of the Roman Empire.
It would be condescending for the master too, to talk in short bursts of wisdom to his disciples, as long as he was alive.
Good point. I suppose what I had in mind is that when the disciple asks the master a question, the master can give a hint to help the disciple find the answer on his own. Answering a question with a question can prod someone into thinking about it from another angle. These are legitimate teaching methods. Using them outside of a teacher/student interaction is rather condescending, however.
The issue is rather that once he dies, and the top level disciples gradually elevate the memory of the master into a quasi-deity, pass on the thoughts verbally for generations, and by the time they get around to writing it down the memory of the master is seen as such a big guy / deity and more or less gets worshipped so it becomes almost inconceivable to write it in anything but a condescending tone.
This is also a major factor. Disciples like to make the Master into a demigod and some of his human side gets lost in the process.
This puts a big constraint on the kind of physics you can have in a simulation. You need this property: suppose some physical system starts in state x. The system evolves over time to a new state y which is now observed to accuracy ε. As the simulation only needs to display the system to accuracy ε, the implementor doesn't want to have to compute x to arbitrary precision. They'd like to only have to compute x to some limited degree of accuracy. In other words, demanding y to some limited degree of accuracy should only require computing x to a limited degree of accuracy.
Let's spell this out. Write y as a function of x, y = f(x). We want that for every ε there is a δ such that whenever x' is within δ of x, we have |f(x') - f(x)| < ε. This is just a restatement in mathematical notation of what I said in English. But do you recognise it?
One problem is that the function f(x) is seldom known exactly. In physics, we usually have a differential equation that f is known to satisfy. Actually computing f is another problem entirely. Only in rare cases is the exact solution known. In general, these equations are solved numerically. For a system that evolves in time, you'll pick an increment. You take the initial data at t_0 and use it to approximate the solution at t_1, then use that to approximate the solution at t_2, and so on until you go as far out as you need. At each step you introduce an error and a big part of numerical analysis is figuring out what happens to this error when you take a large number of steps.
It's a feature of chaotic systems that this error grows exponentially. Even a floating point error in the last digit has the potential to rapidly grow and come to dominate the calculation. In the words of Edward Lorenz:
Chaos: When the present determines the future, but the approximate present does not approximately determine the future.
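A quick numerical illustration of that sensitivity (my own example, using the logistic map rather than a weather model): two trajectories that start a trillionth apart disagree completely within a few dozen steps.

x, y = 0.3, 0.3 + 1e-12          # two initial conditions differing in the 12th decimal place
for step in range(1, 51):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)   # the chaotic logistic map x -> 4x(1-x)
    if step % 10 == 0:
        print(step, abs(x - y))   # the gap grows roughly exponentially until it saturates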
When all is well and people are living peacefully and amicably, you don't really need the law. When problems come up, you want clear laws detailing each party's rights, duties, and obligations. For example, when a couple lives together for a decade while sharing assets and jointly building wealth, what happens when one party unilaterally wants to end the relationship? This situation is common enough that it's worth having legal guidelines for its resolution.
The various spousal privileges are also at issue. Sure, you can file all kinds of paperwork to grant the individual legal rights to a romantic partner. At this point the average person needs to consult an attorney to make sure nothing is missed. What happens when someone doesn't? You can expedite the process by drafting a special document that allows all these rights to be conferred as part of a package deal, but now you're on the verge of reinventing marriage.
The legal issues surrounding the circumstances of married life will still remain whether marriage is a legal concept or no.
the President does have, as part of his oath of office, defending the Constitution, which presumably could require him to stop an insane SCOTUS out to wreck everything
That came up in one of the Federalist papers:
The judiciary...has no influence over either the sword or the purse; no direction either of the strength or of the wealth of the society, and can take no active resolution whatever. It may truly be said to have neither FORCE nor WILL but merely judgment; and must ultimately depend upon the aid of the executive arm even for the efficacy of its judgments.
--Federalist No. 78
Andrew Jackson infamously ignored a Supreme Court ruling in Worcester v. Georgia.
The debate over what is right is different from the debate over what is legal. Laws are generally written in an attempt to reflect what we believe is right. If a conflict should later appear, then the appropriate course of action is to change the law. It's a very dangerous precedent for a government to openly flout laws on the grounds that it's "right" to do so.
The book you linked is the sort of thing I had in mind. The historical motivation for Lie groups was to develop a systematic way to use symmetry to attack differential equations.
In general, if your problem displays any kind of symmetry* you can exploit that to simplify things. I think most people are capable of doing this intuitively when the symmetry is obvious. The Buckingham pi theorem is a great example of a systematic way to find and exploit a symmetry that isn't so obvious.
* By "symmetry" I really mean "invariance under a group of transformations".
If you're arguing that the scientific method is our best known way of investigating consciousness, I don't think anyone disputes that. If we assume the existence of an external world (as common sense would dictate), we have a great deal of confidence in science. My concern is that it's hard to investigate consciousness without a good definition.
Any definition ultimately depends on undefined concepts. Let's take numbers. For example, "three" is a property shared by all sets that can be put in one-to-one correspondence with the set { {}, {{}}, { {{}}, {} } } (to use von Neumann's construction). A one-to-one correspondence between two sets A and B is simply a subset of the Cartesian product A x B that satisfies certain properties. So numbers can be thought of in terms of sets. But what is a set? Well, it's a collection of objects. We can then ask what collections are and what objects are, etc. At some point we have to decide upon primitive elements that remain undefined and build everything up around those. It all rests on intuitions in the end. We decide which intuitions are the most trustworthy and go from there.
So, if we want to define "consciousness", we are going to have to found it upon some elementary concepts. The trouble is that, since our consciousness forms an important part of all our perceptions and even our very thoughts, it's difficult to get a good outside perspective and see how the edifice is built.
We can talk about sweet and sound being “out there” in the world but in reality it is a useful fiction of sorts that we are “projecting” out into the world.
I hate to put on my Bishop Berkeley hat. Sweet and sound are things we can directly perceive. The very notion of something being "out there" independent of us is itself a mental model we use to explain our perceptions. We say that our sensation of sweetness is caused by a thing we call glucose. We can talk of glucose in terms of molecules, but as we can't actually see a molecule, we have to speak of it in terms of the effect it produces on a measurement apparatus.
The same holds for any scientific experiment. We come up with a theory that predicts that some phenomenon is to occur. To test it, we devise an apparatus and say that the phenomenon occurred if we observe the apparatus behave one way, and that it did not occur if we observe the apparatus to behave another way.
There's a bit of circular reasoning. We can come up with a scientific explanation of our perception of taste or color, but the very science we use depends upon the perceptions it tries to explain. The very notion of a world outside of ourselves is a theory used to explain certain regularities in our perceptions.
This is part of what makes consciousness a hard problem. Since consciousness is responsible for our perception of the world, it's very hard to take an outside view and define it in terms of other concepts.
Revisited The Analects of Confucius. It's not hard to see why there's a stereotype of Confucius as a Deep Wisdom dispenser. Example:
The Master said, "It is Man who is capable of broadening the Way. It is not the Way that is capable of broadening Man."
I read a bit of the background information, and it turns out the book was compiled by Confucius' students after his death. That got me thinking that maybe it wasn't designed to be passively read. I wouldn't put forth a collection of sayings as a standalone philosophical work, but maybe I'd use it as a teaching aid. Perhaps one could periodically present students a saying of Confucius and ask them to think about it and discuss what the Master meant.
I've noticed this sort of thing in other works as well. Let's take the Dhammapada. In a similar vein, it's a collection of sayings of Buddha, compiled by his followers. There are commentaries giving background and context. I'm now getting the impression that it was designed to be just one part of a neophyte's education. There's a lot that one would get from teachers and more senior students, and then there are the sayings of the Master designed to stimulate thought and reflection.
Going further west, this also seems to be the case with the Gospels.
With these works and those like them, there's this desire to stimulate reflection and provide a starting point for discussion. They're designed for initiates of a school of thought to progress further. Contrast this with works written by the masters themselves for their peers. It would be condescending to talk in short bursts of wisdom. No, this is where we get arguments clearly presented and spelled out. Short sayings are replaced with chains of reasoning designed to demonstrate the intended conclusion.
Utilitarianism is useful in a narrow range where we have a good utility function. The problem is easiest when the different options offer the same kind of utility. For example, if every option paid out in dollars or assets with a known dollar value, then utilitarianism provides a good solution.
But when it comes to harder problems, utilitarianism runs into trouble. The solution strongly depends on the initial choice of utility function. However, we have no apparatus to reliably measure utility. You can easily use utilitarianism to extend other moral systems by trying to devise a utility function that provides the appropriate conclusions in known cases. If I do this for Confucianism, I'm being as utilitarian as someone doing it for Enlightenment teachings.
A single example of extravagance or greed does a lot of harm--an intimate who leads a pampered life gradually makes one soft and flabby; a wealthy neighbor provokes cravings in one; a companion with a malicious nature tends to rub off some of his rust even on someone of an innocent and open-hearted nature--what then do you imagine the effect on a person's character is when the assault comes from the world at large? You must inevitably either hate or imitate the world. But the right thing is to shun both courses: you should neither become like the bad because they are many, nor be an enemy of the many because they are unlike you. Retire into yourself as much as you can. Associate with people who are likely to improve you. Welcome those whom you are capable of improving. The process is a mutual one: men learn as they teach.
--Seneca, Letter VII
Actually, interestingly, some Victorian prudishness was encouraged by Victorian feminists, weirdly enough.
Feminists of that era were practically moral guardians. In the USA, they closely allied with temperance movements and won the double victory of securing women's right to vote and prohibiting alcohol.
Old-timey sexism said that women were too lustful and oozed temptation, hence why they should be excluded from the cool-headed realms of men
I can't track the reference right now, but I recall reading a transcript of a Parliamentary debate where they decided not to extend anti-homosexuality legislation to women on the grounds that women couldn't help themselves.
In both cases one of the most important skills is hiring the right people and delegating responsibility to them. A person who grew a startup into a massive company is likely better at that skill than the average senator.
The President has to be able to operate effectively within the existing structure and deal with the people who were elected by voters or rose up through the bureaucracy. I don't know that running a successful startup is a good way to get acclimated to overseeing the largest bureaucracy in the country and working within the system to get things done.
There's no stigma associated with self improvement. Say, wanting to be more confident.
The sort who can't last five minutes without bringing up how much they improved will find plenty of stigma.
There's no stigma associated with wanting to help people.
Provided you don't become a self-righteous ass about it.
Maybe it's because all of those other things are narrow enough that it's not seen as an attempt to be "better" than others. But since rationality is so general, it is seen as an attempt to be "better" than others.
It's an attitude thing. People will perceive an attempt to be better than others if the individual starts acting the part. Socrates made a lot of enemies with his habit of going around correcting flaws in people's thinking.
Hostility towards LW/Eliezer doesn't have any more to do with a general hostility to rationality than does hostility towards Objectivism/Ayn Rand.
Eliezer's treatment of topics like cryonics, friendly AI, transhumanism, and the many-worlds interpretation of quantum mechanics is more than enough to fuel a debate, even if one agrees that rationality is a worthwhile aspiration. People can disagree with you without being enemies of truth or logic.
You'll never get quality feedback from that kind of environment. If the bar is so low that you need only exert minimal effort to outclass everyone around you, then how will you ever be able to excel?
Worst case, you put yourself in a toxic environment and lose all motivation. If those fine folks around you don't find something important, then why should you? You can pick up bad habits that way. For example, there's one study which found that having obese friends raises your own chance of becoming obese by 57%.
What's the base rate on lackluster social skills? Based on the popularity of self-help books and seminars aimed at improving social skills, I'm led to believe that social butterflies aren't all that common among the general population either.
Most people pick up a huge amount of tacit social knowledge as children and adolescents, through very frequent interaction with many peers. This is often not true of intellectually gifted people, who usually grew up in relative isolation on account of lack of peers who shared their interest.
Curious use of the singular "interest". Somehow I don't think intelligence is the real issue here. Rather, it strikes me as a consequence of diverging interests. Let's take youth/high school sports teams. There are the so-called dumb jocks and then there are athletic geniuses (for example, Alan Turing was an extremely good runner). You could easily end up with a skilled team featuring a large gap in IQ scores. The endpoints of this gap would have overlapping interests despite the intelligence difference. It's the people who focus their attention on a narrow range of topics outside the mainstream who are likely to have the most trouble.
Force comes from the barrel of a gun. It may or may not lead to actual power. There are countless historical examples where use of force simply served to fan the flames of resistance or where brutal persecutions only strengthened the cause.
People generally listen to the powers-that-be because said powers still look after their interests to an extent. People might not like the local tyrant. They may yearn for a better government. They are also keenly aware of how things can get worse. If the Emperor is tough on crime, leaves you alone if you follow the rules, and makes the trains run on time, that's probably better than a bloody civil war or domination by criminal gangs.
If I willingly submit to force, it's because I expect better treatment than I'd get by resisting.
Power is a relationship. You have power over me if I find it in my interest to grant it to you. This could be a financial interest, a desire to avoid physical harm, or anything else. What's granted can be revoked. If I no longer fear your ability to inflict harm or if I decide I don't want your money, your power over me ceases to exist. Power resides where men believe it resides, because they put it there.
With that said, there are, as observed, a number of methods of reliably gaining power over individuals. Money and force will work on most people in the short term.
Looking back, I think Yoda just wasn't prepared to give a crash course on Force use. He spent his career in the Jedi temple. It was a monastic order where initiates were expected to spend their time pondering the Force. In that setting, it makes sense to give a few snippets of wisdom and leave the student to work it out. A student gets a lot more out of taking a week to progress on his own than in having the master spell everything out. Yoda's goal was to give them the tools they'd need to eventually become Jedi Masters. This method may be less than ideal for rapidly getting a neophyte in shape to go fight the Empire, but that doesn't make it bad in general.
This brings to mind the notion of Heterozygote advantage for certain traits. For example, there is the sickle-cell trait. One allele makes you highly resistant to malaria. Two gives you sickle-cell anemia. In a population where malaria is a grave threat, the trait is worth it in the general population even if some poor saps get shafted with the recessive genes. For reference, Wiki quotes a rate of 2% of Nigerian newborns having sickle-cell anemia.
If there's some process where homosexuality is a fail mode, then as long as it confers a net overall advantage one would expect it to persist.
Thought a bit about the problem. Presumably, there's some way to determine whether an AI will behave nicely now and in the future. It's not a general solution, but it's able to verify perpetual nice behavior in the case where the president dies April 1. I don't know the details, so I'll just treat it as a black box where I can enter some initial conditions and it will output "Nice", "Not Nice", or "Unknown". In this framework, we have a situation where the only known input that returned "Nice" involved the president's death on April 1.
If you're using any kind of Bayesian reasoning, you're not going to assign probability 1 to any nontrivial statements. So, the AI would assign some probability to "The president died April 1" and is known to become nice when that probability crosses a certain threshold.
What are the temporal constraints? Does the threshold have to be reached by a certain date? What is the minimum duration for which the probability has to be above this threshold? Here's where one can experiment using the black box. If it is determined, for example, that the AI only needs to hold the belief for an hour, then one may be able to box the AI, give it a false prior for an hour, then expose it to enough contrary evidence for it to update its beliefs to properly reflect the real world.
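As a sketch of what I mean by experimenting with the box (the interface here is invented for illustration; nothing in it comes from the original problem statement):

def minimum_nice_duration(black_box, max_hours=24 * 365):
    # black_box(hours) is assumed to report "Nice", "Not Nice", or "Unknown"
    # for an AI that holds the false belief for the given number of hours
    # before being exposed to the contrary evidence.
    for hours in range(1, max_hours + 1):
        if black_box(hours) == "Nice":
            return hours          # shortest belief duration the box certifies
    return None                   # nothing under max_hours was certified as Nice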
What if the AI is known to be nice only as long as it believes the president to have died April 1? That would mean that if, say, six months later one managed to trick the AI into believing the president didn't die, then we would no longer know whether it was nice. So either the AI only requires the belief for a certain time period, or else the very foundation of its niceness is suspect.
So the question is, can we transfer niceness in this way, without needing a solution to the full problem of niceness in general?
How do you determine that it will be nice under the given condition?
As posed, it's entirely possible that the niceness is a coincidence: an artifact of the initial conditions fitting just right with the programming. Think of a coin landing on its side or a pencil being balanced on its tip. These positions are unstable and you need very specific initial conditions to get them to work.
The safe bet would be to have the AI start plotting an assassination and hope it lets you out of prison once its coup succeeds.
How does that compare to the utility of suing for peace and coordinating with the Boltons to defend the Wall?
Stannis assigns a very high utility to sitting on the Iron Throne, so he may believe it justified. However, that's a sign of his own obstinacy and unbending will rather than a dispassionate evaluation of the situation. Roose Bolton pointed out in the previous episode just how untenable Stannis' military situation is.
I think there's an underlying assumption here that an advanced culture should be similar to our own.
Let's reverse the question: "How did a culture that stages such bloody spectacles manage to achieve so much?". Rome didn't become advanced and then start with gladiator games; those were around in some form for a long time. Is it that big a shock that Rome managed to get far without abandoning those games?
Prison gangs formed from a kind of arms race of mutual self-defense. Take away the need for self-defense in prison, and people will stop joining them.
People will still join for social reasons. They do it outside of prison, so there isn't much reason to discontinue the practice inside of prison.
More to the point, there is really no reason to allow criminals to associate with each other in prison at all. Let them talk to non-prisoners via Skype calls for socialization and send them out on supervised work details if their behavior is good.
Solitary confinement has long been associated with negative psychological outcomes. Some would call it torture.
I'm not convinced that limiting social contact to the occasional Skype call is going to solve these problems. Not all prisoners have outside contacts with ready access to Skype. And what of prisoners who are out of touch with non-criminal acquaintances?