Low IQ is not fixable by practice
I don't believe you, and I'm especially skeptical of IQ—and of a lot of other fetishized, overconfident attempts to exactly quantify hugely abstract and fluffy concepts like intelligence.
I'm not sure where you're from, or what the composition of your social circle is, Lumifer—but I think you should find as many people as you can (or use whatever reasonable metric you have for determining a "normal person") and say: "Being stupid is a disease. The first step to destigmatizing this disease is to stop making fun of stupid people; I too am guilty of this," and then observe the reaction you get.
Personally, I'm baffled as to how you could think that this wouldn't engender a negative response from someone who's never been on LW before.
That being said, simply changing the theme from "anti-stupidity" to "pro-intelligence" would change the post dramatically.
Or a resistance and no definitive establishment
I don't think you're missing anything, no.
Ah, good points.
I did not really know what was meant by "collectivist ideologies" and assumed it to be something along the lines of "ideologies that necessitate a collection of people." Originally, I didn't see any significance in the 50% (to me, it just seemed like an off-the-cuff number), but you put it into some good context.
I concede and retract my original criticism
Does that really seem like a political post to you, though? It doesn't look like an attempt to discuss politics, types of politics, who's right and who's wrong, there's no tribalism, nothing regarding contemporary politics, etc. It looks like a pure and simple statement of fact: Humans have been coercing other humans into doing specific actions—oftentimes empowering themselves in the process—for the whole of human history.
I don't think tukabel's post was very political outside of the statement "An AI doing this is effectively politics, and politics has existed for a long time." I don't think that's considered discussing politics.
I don't think this is a pertinent or useful suggestion. The point of the reply wasn't to discuss politics, and I think it's a red herring to dismiss it as if it were.
If I may expand on tukabel's response: What is the point of this post? It seems to be some sort of "new" analysis as to how AIs could potentially hack humans—but if we get past the "this is new and interesting" presentation, it doesn't seem to give anything new, unusual, or even really discussion-worthy.
Why is "The AI convinces/tricks/forces the human to do a specific action" something that's remarkable? What does "Different levels of hacking make different systems vulnerable, and different levels of interaction make different types of hacking more or less likely" even mean? It sounds like an overly verbose and convoluted way of saying "People respond differently to different things."
Maybe there's some background discussion that provides a context I've been missing here, but this doesn't seem like anything that hasn't already been going on amongst humans for thousands of years, and that's a relevant and useful thing to draw from.
Much more succinct: Why are the ways in which an AI can "hack" a human (i.e. - affect them) any different than the ways a human can affect a human? If we replace "AI" with "Human" it'd be trivial and a bit silly.
This is a bit tangential, and a bit ranty, maybe a bit out of line, but it might help [a bit]...
From one self-hater to another: I've always been negative. I've always disliked myself, my past decisions, the world around me, and the decisions made therein. Here's the kind of philosophy I've embraced over the past few years:
My pessimism motivates me something like the way nihilism motivates Nietzsche. It is the ultimate freedom. I'm not weighed down by this oppressive sense that I'm missing some great opportunity or taking an otherwise good life and shitting on it. Why? Because I suck, the human condition sucks, and life sucks—so I might as well fucking do whatever I've got to do to get to wherever I want to go. I'm probably not going to get there, but I'll be damned if I don't die trying.
I've tried a lot of different things to try to absolve myself of this kind of inherent, long-standing negativity, but it's the wrong way to go about it. This is the way I am, and I'm pretty ok with that. I feel like when I embraced this, it was cathartic, a little like someone discovering a repressed memory. I've come out of the pessimist's closet. ;)
Things like this journaling method are good—it's good to be explicit about what you're thankful for, it's good to act in ways that maximize your ability to do things, and maybe, after some time, that negativity will go away (or, at least, the negative part of said negativity). But you don't need self-esteem, and motivation is a farce; it's what people sit around waiting for, telling themselves they need some translucent muse to inspire them, some "motivation," before they can do what they've got to do. The thing to be wary of is letting your negativity become a force that oppresses you.
I think I'm on the track to doing important things (relatively "late" in life [compared to my peers], but w/e), and here's how I see myself: Like Arnold Schwarzenegger in the end of Terminator 2, except instead of lava, it's sewage—and instead of a thumbs-up, it's my middle finger.
I've been around in LW for years, and I'd say it's tended more towards refining the art of pragmatism than rationality (though there's a good bit of intersection there).
There's a lot of nuance to this situation that makes a black-and-white answer difficult, but let's start with the word arrogance. I think the term carries with it a connotation of too much pride; something like when one oversteps the limits of one's domain. For example, the professor saying "You are probably wrong about this" is an entirely different statement (in terms of arrogance) than the enthusiast saying "You are probably wrong about this," because this is a judgement that the professor is well qualified to make. While I can see a person not liking this, I don't think this kind of straightforwardness is wrong, or arrogant.
The Professor
When I think of an arrogant professor, I don't simply think of a person whose knowledge of a field far surpasses my own. I think of something more than that. I think of a person who seems to have an inflated sense of self-worth because of their knowledge. That is, I think of a person who is overstepping the bounds of their domain; a professor who not only says "You're wrong, and don't have even a basic understanding of quantum mechanics," but does it with an air of regal superiority that implicitly says "Not only are you wrong, but you are not, as a person, worth taking seriously or teaching because you are wrong." That kind of professor is taking the expertise he has in quantum physics and applying it to something well outside quantum physics (in this case, a person's worth). If this is the kind of professor you were describing, then I'd certainly say there's arrogance here. Otherwise, I don't think the professor is being arrogant by saying something like "You don't possess even a basic understanding of the core concepts of quantum physics." Admittedly, a more constructive way to go about this would be to at least show why this is true. As someone who does have an understanding of quantum physics, there's a lot of material one could easily show an enthusiast that'd be well beyond their current knowledge.
A bit of nuance: Using this same standard for arrogance, the real answer is something like "it depends on what they were talking about." For example, some philosophical theories that emerge from quantum mechanics can be understood and thought about by a layman (key word: some; many arise from a misunderstanding of quantum mechanics, and thus can easily be debunked with knowledge about quantum mechanics). If the professor and enthusiast were talking about one of these theories, then the professor dismissing this legitimate theory would, in fact, be arrogant (because it would be a step outside of the professor's bounds).
In the case that the professor doesn't admit to being a professor, indeed it would seem arrogant—but only because it seems as if the incognito professor is overstepping his bounds. That is, since no one knows the professor is a professor, there's no reason to assume quantum physics is within his area of expertise. (This may also be somewhat pedantic, but in something like quantum physics, because of this gap in knowledge, it'd be very obvious who the professor was to an audience that doesn't know quantum physics, even if it wasn't made explicitly clear beforehand.)
The Enthusiast
We can apply the same standards of arrogance to the enthusiast. I've seen many people who are simultaneously engaging, calm, polite and arrogant. If a professor, someone who's studied the field very rigorously, with mathematics that would take the enthusiast years to learn, claims that the enthusiast doesn't have a working knowledge of quantum physics—then it's probably true. If the professor says "If you knew enough about quantum physics, you'd see why this idea is strictly wrong," and the enthusiast is still convinced he's correct, then I'd say the enthusiast is stepping outside of his bounds (i.e. - is being arrogant). Again, admittedly, if the professor doesn't explain why the enthusiast is wrong, it's difficult for the enthusiast to gauge whether or not whatever is being spoken of is a legitimate theory or one that dissolves with sufficient knowledge of quantum mechanics.
A bit of nuance: It's easy with quantum mechanics, since so much of it is based in physics and mathematics in ways where the metrics for correct and incorrect are fairly straightforward and clear. If the professor were an economist, the lines defining what the bounds are could be much blurrier.
In the case that the professor doesn't admit to being a professor, the enthusiast might not have a way of knowing whether or not bounds are being overstepped, and the bystanders would probably see the professor as being arrogant. Though, I reiterate: Especially in something like quantum physics, the party who has the expertise would be apparent.
To come back to your distinction between over-confidence and dismissive behavior, I think these things are both addressed by considering arrogance to be synonymous with an overstepping of one's bounds. Dismissive behavior is relevant only insofar as it implies this overstepping. For example, does it indicate that the professor thinks less of the enthusiast as a person? Maybe the professor is being dismissive because the enthusiast's vehemence is blinding? Similarly, over-confidence is relevant only insofar as it implies an overstepping of one's bounds (this is a bit easier, because "over-confidence" is almost synonymous with "stepping over the bounds of your knowledge-limitations.")
Kind-of Disclaimer:
1) I might be underestimating the amount of knowledge you intended for your enthusiast. In my experience (I am not a professor), I've never met a physics-enthusiast who has a working knowledge of the actual physics with which they are enthused—and the types of physics that people find most interesting are usually the ones that require the most knowledge (i.e. - string theory, quantum mechanics, etc.; few non-physicists are that enthused with Newtonian Mechanics!).
2) "Overstepping one's bounds" might be a bad term. I'm willing to use a better one, it's just the one that came to mind. I hope it's clear what I mean from the context.
Huh. Actually, I enjoyed reading it.
They're equally likely, but, unless Alice chose 1649271 specifically, I'm not quite sure what that question is supposed to show me, or how it relates to what I mentioned above.
Maybe let me put it this way: We play a dice game; if I roll 3, I win some of your money. If you roll an even number, you win some of my money. Whenever I roll, I roll a 3, always. Do you keep playing (because my chances of rolling 3-3-3-3-3-3 are exactly the same as my chances of rolling 1-3-4-2-5-6, or any other specific 6-numbered sequence) or do you quit?
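To put rough numbers on the dice game, here's a minimal Python sketch (the 9/10 "rigged toward 3" probability is just an illustrative assumption of mine): every specific six-roll sequence is equally probable under a fair die, but only the all-3s sequence makes the "loaded die" hypothesis overwhelmingly more plausible than the "fair die" hypothesis.

```python
from fractions import Fraction

def product(xs):
    result = Fraction(1)
    for x in xs:
        result *= x
    return result

def p_fair(seq):
    """Probability of a specific sequence under a fair six-sided die."""
    return Fraction(1, 6) ** len(seq)

def p_rigged(seq, p_three=Fraction(9, 10)):
    """Probability under a hypothetical die loaded to roll 3 with probability 9/10."""
    p_other = (1 - p_three) / 5
    return product(p_three if x == 3 else p_other for x in seq)

for seq in ([3, 3, 3, 3, 3, 3], [1, 3, 4, 2, 5, 6]):
    print(seq,
          "P(seq | fair) =", float(p_fair(seq)),
          "likelihood ratio loaded/fair =", float(p_rigged(seq) / p_fair(seq)))
# Both sequences have the same probability under the fair-die hypothesis (~2.1e-05),
# but the all-3s sequence favors the loaded die by a factor of roughly 25,000,
# while the mixed sequence favors the fair die instead.
```

So "every specific sequence is equally likely" and "I should quit this game" are perfectly compatible.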
I agree with you that the probability of any specific sequence will always be the same, but the reason Alice's correct prediction makes a difference between the two mentioned situations is that the probability of her randomly guessing correctly is so low—and it may indicate something about Alice and her actions (that is, given a complete set of information regarding Alice, the probability of her correctly guessing the sequence of coin flips might be much higher).
Am I misunderstanding the point you're making w/ this example?
I do not think these events are equally improbable (thus, equally probable).
The specific sequence, M, is some sequence in the space of all possible sequences; "... achieves some specific sequence M" is like saying "there exists an M in the space of all sequences such that N = M." That will always be true—that is, one can always retroactively say "Alice's end-result is some specific sequence."
On the other hand, it's a totally different thing to say "Alice's end-result is some specific sequence which she herself picked out before flipping the coin."
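A toy simulation of that distinction (my own illustration, in Python): retroactively, the outcome is always "some specific sequence," but matching a sequence Alice committed to before flipping happens only about (1/2)^n of the time.

```python
import random

n, trials = 10, 100_000
matches_advance_guess = 0

for _ in range(trials):
    advance_guess = [random.randint(0, 1) for _ in range(n)]  # Alice's pre-registered guess
    outcome = [random.randint(0, 1) for _ in range(n)]        # the actual coin flips
    # "The outcome is some specific sequence" is true in every single trial.
    if outcome == advance_guess:
        matches_advance_guess += 1

print("P(outcome matches advance guess) ~", matches_advance_guess / trials,
      "vs theoretical", 0.5 ** n)  # ~0.001 for n = 10
```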
But if I'm not mistaken the original argument around Chesterton's fence is that somebody had gone through great efforts to put a fence somewhere, and presumably would not have wasted that time if it would be useless anyway.
My response was to this statement—specifically, toward the assumption that, since someone has gone through great efforts to put a fence somewhere, it's ok to assume said fence isn't useless. I'm not seeing where my comment is inconsistent with what it's responding to (that is, I'm seeing "gone through great efforts" as synonymous with "worked hard.")
I was about to say that every time I've read of Chesterton's Fence, it seems silly, but then I decided to read Wikipedia's take on it (I do love me some Wikipedia), and came across this:
If you're considering nominating something for deletion because it doesn't appear to have any use or purpose, research its history first. You may find out why it was created, and perhaps understand that it still serves a purpose. Or if you do feel the issue it addressed is no longer valid, frame your argument for deletion.
This, to me, seems like an obvious good idea—and it also seems independent of what TheMajor was saying. My initial qualm came from the claim of why something might have unknown use (i.e. - someone "presumably would not have wasted that time if it would be useless anyway"). I don't believe this to be true, or a good thing to assume, any more than assuming something that didn't take a large amount of effort is useless.
On the other hand, "Find out why something is in place before commenting on it, regardless of how much effort was put into it" seems much more reasonable.
[IGNORE THE ABOVE COMMENT (I don't know how to strikeout)]
Lumifer, I've interpreted your comment within the context of your implying TheMajor's statement is correct. When I think about it more, I don't think that's what you intended—and, in fact, probably intended the opposite.
Am I correct?
Every time I read about Chesterton's fence, it seems like the implication is:
Because someone worked hard on something, or because a practice/custom took a long time to develop, it has a greater chance of being correct, useful, or beneficial [than someone's replacement who looks and says "This doesn't make sense"]
I think that's a terrible statement.
In my experience, that's not what usually happens.
Where are you getting "that's what usually happens"?
Mine was a somewhat ill-thought-out comment.
I guess I don't think that the meaning of my question was hidden in any significant way. This leads me to interpret your response less as a genuine concern for specificity that led to constructive criticism, and more as "I don't like this subject—therefore I will express disagreement with something you did to indicate that." It feels to me as if you're avoiding the subject in favor of nitpicking.
I know you knew what the actual question was because you pointed out its vagueness. You knew the question you answered [Literally: Do different races have different brains?] was not the question I intended. Regardless, you didn't attempt to answer the question or really address it at all. Instead, you pointed out a way it could be misinterpreted if someone took the effort to avoid all context and assume I was asking a nonsensical question (which people do not usually do, unless there's some political-esque intent behind it).
My apologies if you have a genuine concern regarding the specificity of my question—but I implore you to try to answer the actual question, anyway.
This does not address my question. The implication is "... shouldn't it follow that different races could have different brains—such that these differences are generalizable according to race?"
I think this implication was obvious. For example, if someone were to ask "Do different races typically have different skin colors?" I don't think you would answer "Different people of the same race have different skin colors. No two skin colors are exactly the same. You have to make statements that are less vague."
Edit: If, in fact, that is the way you would answer, then I'm mistaken, but I don't think that's necessary.
I don't understand why this comment is met with such opposition. Calories are the amount of energy a food contains. If you use more energy than you take in, then you have to lose weight [stored energy]. There's literally no other way it could work.
The statement can even be further simplified to:
All people who create a calorie deficit lose weight.
The fact that selection pressure for mental ability is everywhere present is an excellent point; thanks. As to why it's a troublesome subject, I always maintain "If there is a quantitative difference, I sure as hell hope we never find it."
I think that'd lead to some pretty unfortunate stuff.
I don't think this is a stupid question, but everyone else seems to—that is, the immediate reaction to it is usually "there's obviously no difference." I've struggled with this question a lot, and the commonly accepted answer just doesn't sit well with me.
If different races have different skin, muscle/bone structure, genetics, and maybe other things, shouldn't it follow that different races could have different brains, too?
I know this is taboo, and feel the following sort of disclaimer is obligatory: I'm not racist, nor do I think any difference would necessarily be something drastic or significant, but the existence of a difference is something that seems probable to me.
Edit: Though it's obviously included, I'm not talking specifically about intelligence!
I do not care at all about watching other people play sports. Everyone thinks it's super boring.
... doesn't seem to make much sense to me. In what context would he not mean that?
It took me a minute or two to figure out what you were trying to say. For anyone else who didn't get it first-read, I believe Lumifer's saying something like:
"World War II was 60 years ago. On a 1,400 year timescale, that's not getting somewhere, that's just a random blip of time where no gigantic wars happened; those blips have happened before. What do you mean 'to get to where we are now'?"
Now, to answer that, I think he means "to get to a society where fear of being killed or kidnapped (then killed) isn't a normal part of every day life, and women can wear whatever they want."
I know this post is five years old, but can someone explain this to me? I understood that both questions could have an answer of no because one may want to minimize the monetary loss / maximize the monetary gain of the poorer family—therefore, the poorer family should get a higher reduction and a lower penalty. Am I misunderstanding something about the situation?
Ah! This puts everything into a sensible context—thank you.
I'd like to have a conversation on said fairness sometime; maybe I'll make a thread about it.
Sorry, I'm a bit confused. Not being fully versed in the terminology of utilitarians, I may be somewhat in the dark...
... but, is the point of this piece "Money should be the unit of caring" or "Money is the unit of caring"? I expected it to be the latter, but it reads to me like the former, with examples as to why it currently isn't. That is, if money were actually the unit of caring—if people thought of how much money they spend on something as synonymous with how much they care about something—then a lawyer would hire someone to work five hours at a soup kitchen instead of working there for an hour.
It sounds like, as it is now, money isn't the unit of caring and you think it should be. But the end again reads more like the latter statement. Which one was your intent?
I... I don't actually understand why this comment got so many downvotes—and I'm 100% for cryonics. In fact, I agree with the above comment.
Is this a toxic case of downvoting?
I really like this idea, but I can't tell whether I failed the test, I passed the test, or the article-selection for this test was bad.
- I very much felt the "condemnation of the hated telecoms" (and a bit of victory-hope). I think this means I've failed the test.
- It took no time to realize that I was reading a debate over a definition and its purpose. I think this means I've passed the test.
- I feel like the above realization was trivial. I didn't consciously think "I am reading a debate of definition." In the same way that, when I'm playing a scary game, I don't think "I am playing a scary game." I thought the whole point of the article was to emphasize a debate of definition, why said debate is happening, and what side "won". I think this means that the article was a bad one for the purposes of this test (that is: using this article feels more like a test of reading comprehension than rationality).
Which is the most appropriate result?
(To reiterate, though, I think this idea is an awesome one.)
Edit: I also don't think the article failed to give information on what the reason behind said definition-changing was:
The FCC was having this debate because Congress requires it to determine whether broadband is being deployed to Americans in a reasonable and timely fashion. The first step is determining what speeds allow for broadband access. Congress made it clear in the Telecommunications Act of 1996 that broadband isn’t the bare minimum needed to use the Internet. Instead, it is “advanced telecommunications capability” that “enable[s] users to originate and receive high-quality voice, data, graphics, and video telecommunications using any technology.”
Edit 2: Now THIS article doesn't emphasize the point that it's purely a matter of definition: http://arstechnica.com/business/2015/01/tons-of-att-and-verizon-customers-may-no-longer-have-broadband-tomorrow/ The article in the OP feels like "We've changed the definition of broadband to increase broadband access." The above-linked article feels like "THEY'RE TAKING AWAY OUR BROADBAND!!!" Does this seem like a reasonable differentiation, or am I being biased?
Wait, IlyaShipitser—I think you overestimate my knowledge of the field of statistics. From what it sounds like, there's an actual, quantitative difference between Bayesian and Frequentist methods. That is, in a given situation, the two will come to totally different results. Is this true?
I should have made it more clear that I don't care about some abstract philosophical difference if said difference doesn't mean there are different results (because those differences usually come down to a nonsensical distinction [à la free will]). I was under the impression that there is a claim that some interpretation of the philosophy will fruit different results—but I was missing it, because everything I've been introduced to seems to give the same answer.
Is it true that they're different methods that actually give different answers?
Sorry, I didn't mean to imply that probabilities only apply to the future. Probabilities apply only to uncertainty.
That is, given the same set of data, there should be no difference between guessing whether event A happened (when it already has) and guessing whether event A will happen (when it hasn't yet).
When you say "apply a probability to something," I think:
"If one were to have to make a decision based on whether or not event A will happen, how would one consider the available data in making this decision?"
The only time event A happening matters is if it happening generated new data. In the Bob-Alice situation, Alice rolling a die in a separate room gives zero information to Bob—so whether or not she already rolled it doesn't matter. Here are a couple of different situations to illustrate:
A) Bob and Alice are in different rooms. Alice rolls the die and Bob has to guess the number she rolled.
B) Bob has to guess the number that Alice's die will roll. Alice then rolls the die.
C) Bob watches Alice roll the die, but did not see the outcome. Bob must guess the number rolled.
D) Bob is a supercomputer which can factor in every infinitesimal fact about how Alice rolls the die, and about the die itself, upon seeing the roll. Bob-the-supercomputer watches Alice roll the die, but did not see the outcome.
In situations A, B, and C—whether or not Alice rolls the die before or after Bob's guess is irrelevant. It doesn't change anything about Bob's decision. For all intents and purposes, the questions "What did Alice roll?" and "What will Alice roll?" are exactly the same question. That is: We assume the system is simple enough that rolling a fair die is always the same. In situation D, the questions are different because there's different information available depending on whether or not Alice rolled already. That is, the assumption of a simple system isn't there, because Bob is able to see the complexity of the situation and make the exact same kind of decision. Alice having actually rolled the die does matter.
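A small simulation of situations A and B (my own toy illustration in Python): Bob's hit rate is about 1/6 whether Alice rolls before or after his guess, since the roll itself gives him no information either way.

```python
import random

def trial(alice_rolls_first: bool) -> bool:
    if alice_rolls_first:                # situation A: Alice rolls, then Bob guesses
        outcome = random.randint(1, 6)
        guess = random.randint(1, 6)
    else:                                # situation B: Bob guesses, then Alice rolls
        guess = random.randint(1, 6)
        outcome = random.randint(1, 6)
    return guess == outcome

trials = 200_000
for alice_rolls_first in (True, False):
    hits = sum(trial(alice_rolls_first) for _ in range(trials))
    label = "Alice rolls first:" if alice_rolls_first else "Bob guesses first:"
    print(label, round(hits / trials, 3))  # both come out near 1/6 ~ 0.167
```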
I don't quite understand your "likely or not likely" question. To try to answer: If an event is likely to happen, then your uncertainty that it will happen is low. If it is not likely, then your uncertainty that it will happen is high.
(Sorry, I totally did not expect this reply to be so long.)
I'm having a hard time answering this question with "yes" or "no":
The event in question is "Alice rolling a particular number on a 6-sided die." Bob, not knowing what Alice rolled, can talk about the probabilities associated with rolling a fair die many times, and base whatever decision he has to make on this probability (assuming that she is, in fact, using a fair die). Depending on the assumed complexity of the system (does he know that this is a loaded die?), he could combine a bunch of other probabilities to increase the chances that his decision is accurate.
Yes... I guess?
(Or, are you referring to something like: If Alice rolled a 5, then there is a 100% chance she rolled a 5?)
A quantitative thing that indicates how likely it is for an event to happen.
I still don't understand the apparently substantial difference between Frequentist and Bayesian reasoning. The subject was brought up again in a class I just attended—and I was still left with a distinct "... those... those aren't different things" feeling.
I am beginning to come to the conclusion that the whole "debate" is a case of Red vs. Blue nonsense. So far, whenever one tries to elaborate on a difference, it is done via some hypothetical anecdote, and said anecdote rarely amounts to anything outside of "Different people sometimes treat uncertainty differently in different situations, depending on the situation." (Usually by having one's preferred side make a very reasonable conclusion, and the other side make some absurd leap of pseudo-logic).
Furthermore, these two things hardly ever seem to have anything to do with the fundamental definition of probability, and have everything to do with the assumed simplicity of a given system.
I AM ANGRY
This ends up being somewhat circular then, doesn't it?
Olbers' paradox is only a paradox in an infinite, static universe. A finite, expanding universe explains the night sky very well. One can't use Olbers' paradox to discredit the idea of an expanding universe when Olbers' paradox depends on the universe being static.
Furthermore, upon re-reading MazeHatter's "The way I see it is..." comment, Theory B does not put us at some objective center of reality. An intuitive way to think about it is: Imagine "space" being the surface of a balloon. Place dots on the surface of the balloon, and blow the balloon up. The distance between dots in all directions expands. One can arbitrarily consider one dot as the "center," but that doesn't change anything.
I'm beginning to think that MazeHatter's comments do not warrant as much discussion as has taken place in this thread. =\
Trial-and-error.
There are, of course, inconsistencies that I'm unaware of: These are known unknowns. The idea, though, is that when I'm presented with a situation, any such relevant inconsistencies come up and are eliminated (either by a change of the foundation or a change of the judgement).
That is, inconsistencies that exist but don't come up aren't relevant.
An example—extreme but illustrative: Say an element of this foundational set is "I want to 'treat everyone equally'". I interview a Blue man for a job and, upon reflecting, think very negatively of him, even though he's more qualified than others. When I review the interview as if I were a 3rd party [ignorant of any differences between Blue people and regular people], I come to the conclusion that the interview was actually pretty solid.
I now have a choice to make. Do I actually want to treat people equally? If so, then I must think differently of this Blue man and of Blue people generally, give him the job, and make a very conscious effort to incorporate Blue people into my "everybody" perception. This is a change in judgement. Or, maybe I don't want to treat everyone equally—maybe I want to treat everyone who's not Blue equally. This is a change in foundation (but this change in foundation would have to coincide with the other elements in the foundation-set; or those, too, would change).
But, until now, my perception of Blue people was irrelevant.
Perhaps it would have been best to say: The process by which I make moral decisions is built to maximize consistency. A lot goes into this, everything from honing the ability to look at a situation as a 3rd party, to comparing a decision with decisions I've made in the past. As a result, there's a very practiced part of me that immediately responds to nigh all situations with "Is this inconsistent?"
(An unrelated note: Are there things in this post I could have eliminated to get the same point across, but be more succinct? I often feel as if my responses [in general] are too long.)
Haha, that's what I do.
If my cost is $14.32, I know $1.43 is 10%, and half of that is about $0.71, so the tip's $2.14 (though I tip 20%, which is even easier).
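In code, the same mental trick looks like this (a tiny illustration of my own, nothing more):

```python
def tip_15_percent(bill: float) -> float:
    ten_percent = bill * 0.10                        # shift the decimal point: $14.32 -> ~$1.43
    return round(ten_percent + ten_percent / 2, 2)   # add half of that again for 15%

print(tip_15_percent(14.32))  # 2.15, close to the ~$2.14 mental estimate above
```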
Yes and no. It's a different experience—like taking a bath and going swimming.
Why is the Newcomb problem... such a problem? I've read analysis of it and everything, and still don't understand why someone would two-box. To me, it comes down to:
1) Thinking you could fool an omniscient super-being 2) Preserving some strictly numerical ideal of "rationality"
Time-inconsistency and all these other things seem totally irrelevant.
I have a fundamental set of morals from which I build my views. They aren't explicit, but my moral decisions all form a consistent web. Sometimes one of these moral-elements must be altered because of some inconsistency it presents, and sometimes my judgement of a situation must be altered because it was inconsistent with the foundation. But ultimately, consistency is what I aim for. I know this is super vague, and for that I apologize.
So far, this has worked for me 100% of the time. There have been some sticky situations, but even those have been worked out (i.e. - Answering the question, for example, "Is it ok for a father to leave if his [very] mentally unstable significant other tricked him into impregnating her?" This did not happen to me, but it was a question I had to answer none the less.)
Perhaps to my discredit according to some LWers: I often think trying to quantify morals with numbers has enough uncertainty associated with it that it is useless.
In this country, we charge students tens of thousands of dollars for that diploma. In fact, at my "public" university, first year students are required to:
- Live on campus (this comes out to about $700 per-person, per-month, for a very tiny room you share with other people)
- Purchase a meal plan ($1,000 - $2,500 a semester)
Of course, these and all other services (except teaching and research) are privately owned.
Otherwise, everything's pretty much the same.
I check it once a day. My work e-mail I check a few more times if I'm working (which involves constant correspondence w/ people).
We observe a finite amount of light from finite distances.
That's an empirical fact.
That is to say, the empirical and theoretical range of electromagnetic radiation are not in agreement.
Why does observing a finite amount of light from a finite distance contradict anything about the range of electromagnetic radiation?
(Also... has anyone read http://en.wikipedia.org/wiki/Redshift? It's... well... good.)
Hi. I apologize: this is a pretty long reply—but thanks very much for your comment. :) I really appreciate the opportunity to follow up like this on something I said a few years ago.
My thoughts on being poly. haven't changed. I still think it's the most functional romantic outlook. Although, after re-reading my comment: "without introducing new problems in their place" is somewhat of a loaded statement. If someone has a difficult time being polyamorous, then it introduces a lot of problems. Not to dwell on this too much, but that part of the comment was a bit circular: If a person is already poly., then, of course having a poly. relationship solves problems. Perhaps it would have been best to say "Since the society in which we're raised is largely monogamous, being poly. solves a lot of problems at the cost of having to exert more emotional effort against that norm."
... or something to that extent.
Three and a half years ago, I thought the likelihood of my entering a relationship in the first place to be very low. I didn't want to be in a relationship at all. This thought persisted up until the beginning of my relationship a little over a year ago (which was just a friendship at first, but ended up functionally being the same thing as a relationship—so we just went with it).
I wouldn't call our relationship polyamorous, but it's open. Pretty early on, we had a discussion that essentially came down to "You do what you want and I do what I want." That is, if I sleep with someone else, then it's ok—and if she sleeps with someone else, then it's ok. This is pretty limited to physical activity. I don't think either of us would be comfortable with "I have another girlfriend/boyfriend."
This actually did not take any hacking; it came very naturally to the both of us, and I am happy with it. In fact, I really like this relationship. When I talk about it, I feel like a grandfather showing off pictures of his grandchildren—in a "Look at this, look at this!" sense. I have to consciously stop myself from talking too much, haha. (Thinking about it a bit further, too, actively having multiple girlfriends might be more effort than I'd want to exert.)
We both expect this to be a long-term relationship, and think it most definitely has the potential to last a long time.
This may be an unrelated question, but I've seen a lot of similar exercises here—is the general implication that:
1 Person tortured for 1 year = 365 people tortured for 1 day = 8760 people tortured for 1 hour = 525600 people tortured for 1 minute?
... Oh.
Hm. In that case, I think I'm still missing something fundamental.
I mean moreso: Consider a FAI so advanced that it decides to reward all beings who did not contribute to creating Roko's Basilisk with eternal bliss, regardless of whether or not they knew of the potential existence of Roko's Basilisk.
Why is Roko's Basilisk any more or any less of a threat than the infinite other hypothetically possible scenarios that have infinite other (good and bad) outcomes? What's so special about this one in particular that makes it non-negligible? Or to make anyone concerned about it in the slightest? (That is the part I'm missing. =\ )
I don't understand why Roko's Basilisk is any different from Pascal's Wager. Similarly, I don't understand why its resolution is any different than the argument from inconsistent revelations.
Pascal's Wager: http://en.wikipedia.org/wiki/Pascal%27s_Wager
Argument: http://en.wikipedia.org/wiki/Argument_from_inconsistent_revelations#Mathematical_description
I would actually be surprised (really, really surprised) if many people here have not heard of these things before—so I am assuming that I'm totally missing something. Could someone fill me in?
(Edit: Instead of voting up or down, please skip the mouse-click and just fill me in. :l )
To those who seem to not like the manner in which XiXiDu is apologizing: If someone who genuinely thinks the sky is falling apologizes to you while still wearing their metal hat—then that's the best you can possibly expect. To reject the apology until the hat is removed is...