Stupid Questions December 2014
post by Gondolinian · 2014-12-08T15:39:25.235Z · LW · GW · Legacy · 342 comments
This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.
Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.
To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.
342 comments
Comments sorted by top scores.
comment by Richard Korzekwa (Grothor) · 2014-12-10T05:31:19.405Z · LW(p) · GW(p)
It seems like we suck at using scales "from one to ten". Video game reviews nearly always give a 7-10 rating. Competitions with scores from judges seem to always give numbers between eight and ten, unless you crash or fall, and get a five or six. If I tell someone my mood is a 5/10, they seem to think I'm having a bad day. That is, we seem to compress things into the last few numbers of the scale. Does anybody know why this happens? Possible explanations that come to mind include:
People are scoring with reference to the high end, where "nothing is wrong", and they do not want to label things as more than two or three points worse than perfect
People are thinking in terms of grades, where 75% is a C. People think most things are not worse than a C grade (or maybe this is just another example of the pattern I'm seeing)
I'm succumbing to confirmation bias and this isn't a real pattern
↑ comment by jaime2000 · 2014-12-10T11:22:27.621Z · LW(p) · GW(p)
I'm succumbing to confirmation bias and this isn't a real pattern
No, this is definitely a real pattern. YouTube switched from a 5-star rating system to a like/dislike system when they noticed, and videogames are notorious for rank inflation.
↑ comment by gjm · 2014-12-10T15:46:05.453Z · LW(p) · GW(p)
Partial explanation: we interpret these scales as going from worst possible to best possible, and
- games that get as far as being on sale and getting reviews are usually at least pretty good because otherwise there'd be no point selling them and no point reviewing them
- people entering competitions are usually at least pretty good because otherwise they wouldn't be there
- a typical day is actually quite a bit closer to best possible than worst possible, because there are so many at-least-kinda-plausible ways for it to go badly
One reason why this is only a partial explanation is that "possible" obviously really means something like "at least semi-plausible" and what's at least semi-plausible depends on context and whim. But, e.g., suppose we take it to mean something like: take past history, discard outliers at both ends, and expand the range slightly. Then I bet what you find is that
- most games that go on sale and attract enough attention to get reviewed are broadly of comparable quality
- but a non-negligible fraction are quite a lot worse because of some serious failing in design or management or something
- most performances in competitions at a given level are broadly of comparable quality
- but a non-negligible fraction are quite a lot worse because the competitor made a mistake of some kind
- most of a given person's days are roughly equally satisfactory
- but a non-negligible fraction are quite a lot worse because of illness, work stress, argument with a family member, etc.
so that in order for a scale to be able to cover (say) 99% of cases it needs to extend quite a bit further downward than upward relative to the median case.
Replies from: Capla↑ comment by Capla · 2014-12-12T02:02:01.907Z · LW(p) · GW(p)
a typical day is actually quite a bit closer to best possible than worst possible, because there are so many at-least-kinda-plausible ways for it to go badly
Think about it in terms of probability space. If something is basically functional, then there are a near-infinite number of ways for it to be worse, but a finite number of ways for it to get better.
↑ comment by Gavin · 2014-12-10T21:41:14.062Z · LW(p) · GW(p)
RottenTomatoes has much broader ratings. The current box office hits range from 7% to 94%. This is because they aggregate binary "positive" and "negative" reviews. As jaime2000 notes, Youtube has switched to a similar rating system and it seems to keep things very sensitive.
↑ comment by MathiasZaman · 2014-12-10T13:10:40.136Z · LW(p) · GW(p)
People are thinking in terms of grades, where 75% is a C. People think most things are not worse than a C grade (or maybe this is just another example of the pattern I'm seeing)
I don't think it's this. Belgium doesn't use letter-grading and still succumbs to the problem you mentioned in areas outside the classroom.
Replies from: Capla↑ comment by Capla · 2014-12-12T02:02:36.745Z · LW(p) · GW(p)
What do they use instead?
Replies from: MathiasZaman↑ comment by MathiasZaman · 2014-12-12T08:11:45.648Z · LW(p) · GW(p)
Points out of a maximum. The teacher is supposed to decide in advance how many points a test will be worth (5, 10, 20 and 25 being common options, but I've also had tests where I scored 17,26/27) and then decides how many points each question will be worth. You need to get half of the maximum or more for a passing grade.
That's in high school. In university everything is scored out of a maximum of 20 points.
↑ comment by gwern · 2014-12-10T19:13:59.025Z · LW(p) · GW(p)
You may find the work of the authors of http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2369332 interesting.
↑ comment by someonewrongonthenet · 2014-12-16T23:45:21.066Z · LW(p) · GW(p)
People are thinking in terms of grades
That's not an explanation, just a symptom of the problem. People of mediocre talent and high talent both get an A - that's part of the reason why we have to use standardized tests with a higher ceiling.
My intuition is that the top few notches are satisficing, whereas all lower ratings are varying degrees of non-satisficing. The degree to which everything tends to cluster at the top represents the degree to which everything is satisfactory for practical purposes. In situations where the majority of the rated things are not satisfactory (like the Putnam - nothing less than a correct proof is truly satisfactory), the ratings will cluster near the bottom.
For example, compare motels to hotels. Motels always have fewer stars, because motels in general are worse. Whereas, say, video games will tend to cluster at the top because video games in general are satisfactorily fun.
Or, think Humanities vs. Engineering grades. Humanities students in general satisfy the requirements to be historians and writers or liberal-arts-educated-white-collar workers more than Engineering students satisfy the requirements to be engineers.
Replies from: Grothor↑ comment by Richard Korzekwa (Grothor) · 2014-12-17T05:17:18.231Z · LW(p) · GW(p)
That's not an explanation, just a symptom of the problem.
This is what I was trying to convey when I said it might be another example of the problem.
I think it's reasonable, in many contexts, to say that achieving 75% of the highest possible score on an exam should earn you what most people think of as a C grade (that is, good enough to proceed with the next part of your education, but not good enough to be competitive).
I would say that games are different. There is not, as far as I know, a quantitative rubric for scoring a game. A 6/10 rating on a game does not indicate that the game meets 60% of the requirements for a perfect game. It really just means that it's similar in quality to other games that have received the same score, and usually a 6/10 game is pretty lousy. I found a histogram of scores on metacritic:
http://www.giantbomb.com/profile/dry_carton/blog/metacritic-score-distribution-graphs/82409/
The peak of the distributions seems to be around 80%, while I'd eyeball the median to be around 70-75%. There is a long tail of bad games. You may be right that this distribution does, in some sense, reflect the actual distribution of game quality. My complaint is that this scoring system is good at resolving bad games from truly awful games from comically terrible games, but it is bad at resolving a good game from a mediocre game.
What I think it should be is a percentile-based score, like Lumifer describes:
Consider this example: I come up to you and ask "So, how was the movie?". You answer "I give it a 6 out of 10". Fine. I have some vague idea of what you mean. Now we wave a magic wand and bifurcate reality.
In branch 1 you then add "The distribution of my ratings follows the distribution of movie quality, savvy?" and let's say I'm sufficiently statistically savvy to understand that. But... does it help me? I don't know the distribution of movie quality. It's probably bell-shaped, maybe, but not quite normal if only because it has to be bounded; I have no idea if it's skewed, etc.
In branch 2 you then add "The rating of 6 means I rate the movie to be in the sixth decile". Ah, that's much better. I now know that out of 10 movies that you've seen five were probably worse and three were probably better. That, to me, is a more useful piece of information.
Then again, maybe it's difficult to discern a difference in quality between a 60th percentile game and an 80th percentile game.
Replies from: someonewrongonthenet↑ comment by someonewrongonthenet · 2014-12-17T22:40:24.429Z · LW(p) · GW(p)
This is what I was trying to convey when I said it might be another example of the problem.
Oh right, I didn't read carefully. Sorry.
↑ comment by knb · 2014-12-11T02:19:31.526Z · LW(p) · GW(p)
I've noticed the same thing. Part of it might be that reviewers are reluctant to alienate fans of [thing being reviewed]. Another explanation is that they are intuitively norming against a wider range of things than they actually review. For example, I was buying a smartphone recently, and a lot of lower-end devices I was considering had few reviews, but famous high-end brands (like the iPhone, Galaxy S, etc.) are reviewed by pretty much everyone.
Playing devil's advocate, it might be that there are more perceivable degrees of badness/more ways to fail than there are of goodness, so we need a wider range of numbers to describe and fairly rank the failures.
↑ comment by Kindly · 2014-12-10T18:13:53.282Z · LW(p) · GW(p)
Math competitions often have the opposite problem. The Putnam competition, for example, often has a median score of 0 or 1 out of 120.
I'm not sure this is a good thing. Participating in a math competition and getting 0 points is pretty discouraging, in a field where self-esteem is already an issue.
Replies from: alienist↑ comment by hyporational · 2014-12-15T02:16:14.785Z · LW(p) · GW(p)
In medicine we try to make people rate their symptoms, like pain, from one to ten. It's pretty much never under 5. Of course there's a selection effect and people don't like to look like whiners but I'm not convinced these fully explain the situation.
In Finland the lowest grade you can get from primary education to high school is 4 so that probably affects the situation too.
Replies from: DanArmak, Grothor↑ comment by DanArmak · 2014-12-21T20:50:37.301Z · LW(p) · GW(p)
In medicine we try to make people rate their symptoms, like pain, from one to ten. It's pretty much never under 5.
How do you then interpret their responses? Do you compare only the responses of the same person at different times, or between persons (or to guide initial treatment)? Do you have a reference scale that translates self-reported pain to something with an objective referent?
Replies from: hyporational↑ comment by hyporational · 2014-12-22T12:20:48.892Z · LW(p) · GW(p)
Do you compare only the responses of the same person at different times
Yes. There's too much variation between persons. I also think there's variation between types of pain and variation depending on whether there are other symptoms. There are no objective specific referents, but people who are in actual serious pain usually look like it, are tachycardic, hypertensive, aggressive, sweating, writhing or very still depending on what type of pain we're talking about. Real pain is also aggravated by relevant manual examinations.
↑ comment by Richard Korzekwa (Grothor) · 2014-12-15T18:33:06.120Z · LW(p) · GW(p)
In medicine we try to make people rate their symptoms, like pain, from one to ten. It's pretty much never under 5.
This is actually what initially got me thinking about this. I read a half-satire thing about people misusing pain scales. Since my only source for the claim that people do this was a somewhat satirical article, I didn't bring it up initially.
I was surprised when I heard that people do this, because I figured most people getting asked that question aren't in anywhere near as much pain as they could be, and they don't have much to gain by inflating their answer. When I've been asked to give an answer on the pain scale, I've almost always felt like I'm much closer to no pain than to "the worst pain I can imagine" (which is what I was told a ten is), and I can imagine being in such awful pain that I can't answer the question. I think I answered seven one time when I had a bone sticking through my skin (which actually hurt less than I might have thought).
Replies from: DanArmak↑ comment by DanArmak · 2014-12-21T20:48:14.041Z · LW(p) · GW(p)
most people getting asked that question aren't in near as much pain as they could be, and they don't have much to gain by inflating their answer.
Maybe they think that by inflating their answer they gain, on the margin, better / more intensive / more prompt medical service. Especially in an ER setting where they may intuit themselves to be competing against other patients being triaged and asked the same question, they might perceive themselves (consciously or not) to be in an arms race where the person who claims to be experiencing the most pain gets treated first.
↑ comment by wadavis · 2014-12-10T22:18:02.185Z · LW(p) · GW(p)
I tried to change out the 10 rating for a z-score rating in my own conversations. It failed due to my social circles not being familiar with the normal bell curve.
Replies from: gwern↑ comment by gwern · 2014-12-11T00:00:11.363Z · LW(p) · GW(p)
If you wanted to maximize the informational content of your ratings, wouldn't you try to mimic a uniform distribution?
Replies from: wadavis, ChristianKl↑ comment by wadavis · 2014-12-12T15:54:32.436Z · LW(p) · GW(p)
The intent was to communicate one piece of information without confusion: where on the measurement spectrum the item fits relative to others in its group. As opposed to delivering as much information as possible, for which there are more nuanced systems.
Most things I am rating do not have a uniform distribution; I tried to follow a normal distribution because it would fit the great majority of cases. We lose information and make assumptions when we measure data on the wrong distribution: did you fit to uniform by volume or by value? It was another source of confusion.
As mentioned, this method did fail. I changed my methods to saying 'better than 90% of the items in its grouping' and had moderate success. While that solves the uniform/normal/chi-squared distribution problem, it is still too long-winded for my tastes.
Replies from: Lumifer↑ comment by Lumifer · 2014-12-12T16:00:23.311Z · LW(p) · GW(p)
Most things I am rating do not have a uniform distribution
The distribution of your ratings does not need to follow the distribution of what you are rating. For maximum information your (integer) rating should point to a quantile -- e.g. if you're rating on a 1-10 scale your rating should match the decile into which the thing being rated falls. And if your ratings correspond to quantiles, the ratings themselves are uniformly distributed.
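A rough sketch of this decile idea in Python, assuming you keep a history of raw scores to rank new items against (the function and the score history below are made up for illustration):

```python
import bisect

def decile_rating(new_score, past_scores):
    """Map a raw score to a 1-10 rating by its decile among past scores.

    A rating of 6 then means "better than roughly 50-60% of everything
    rated so far", which makes the ratings uniform over 1..10 by construction.
    """
    ranked = sorted(past_scores)
    # Fraction of past scores strictly below the new one.
    frac = bisect.bisect_left(ranked, new_score) / len(ranked)
    return min(10, int(frac * 10) + 1)

# Example: a history of raw "enjoyment" scores on some arbitrary scale.
history = [3.1, 4.5, 5.0, 5.2, 6.0, 6.3, 7.1, 7.4, 8.0, 9.2]
print(decile_rating(6.5, history))  # falls in the 7th decile -> 7
```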
Replies from: wadavis↑ comment by wadavis · 2014-12-12T16:30:35.983Z · LW(p) · GW(p)
We have different goals. I want my rating to reflect the item's relative position in its group; you want a rating to reflect the item's value independent of the group.
Is this accurate?
Replies from: Lumifer↑ comment by Lumifer · 2014-12-12T16:56:48.052Z · LW(p) · GW(p)
Doesn't seem so. If you rate by quantiles, your rating effectively indicates the rank of the bucket to which the thing-being-rated belongs. This reflects "the item's relative position in its group".
If you want your rating to reflect not a rank but something external, you can set up a variety of systems, but I would expect that for max information your rating would have to point to a quantile of that external measure of the "value independent of the group".
Replies from: wadavis↑ comment by wadavis · 2014-12-12T18:47:30.314Z · LW(p) · GW(p)
Trying to stab at the heart of the issue: I want the distribution of the ratings to follow the distribution of the things being rated, because when looking at the group this provides an additional piece of information.
Replies from: Lumifer↑ comment by Lumifer · 2014-12-12T20:31:51.296Z · LW(p) · GW(p)
Well, at this point the issue becomes who's looking at your rating. This "additional piece of information" exists only for people who have a sufficiently large sample of your previous ratings so they understand where the latest rating fits in the overall shape of all your ratings.
Consider this example: I come up to you and ask "So, how was the movie?". You answer "I give it a 6 out of 10". Fine. I have some vague idea of what you mean. Now we wave a magic wand and bifurcate reality.
In branch 1 you then add "The distribution of my ratings follows the distribution of movie quality, savvy?" and let's say I'm sufficiently statistically savvy to understand that. But... does it help me? I don't know the distribution of movie quality. It's probably bell-shaped, maybe, but not quite normal if only because it has to be bounded; I have no idea if it's skewed, etc.
In branch 2 you then add "The rating of 6 means I rate the movie to be in the sixth decile". Ah, that's much better. I now know that out of 10 movies that you've seen five were probably worse and three were probably better. That, to me, is a more useful piece of information.
Replies from: wadavis↑ comment by ChristianKl · 2014-12-12T16:42:22.883Z · LW(p) · GW(p)
Quite often the difference within the top 10 percent is larger than the difference between the people at the 45th and 55th percentiles.
IQ scales have more people in the middle than on the edges.
Replies from: Lumifer
comment by NancyLebovitz · 2014-12-08T22:05:27.125Z · LW(p) · GW(p)
Is there any plausible way the earth could be moved away from the sun and into an orbit which would keep the earth habitable when the sun becomes a red giant?
Replies from: calef, CBHacking, mwengler, DaFranker, shminux, JoshuaZ, DanArmak, Daniel_Burfoot↑ comment by calef · 2014-12-08T22:59:42.852Z · LW(p) · GW(p)
According to http://arxiv.org/abs/astro-ph/0503520 we would need to be able to boost our current orbital radius to about 7 AU.
This would correspond to a change in specific orbital energy from 132712440018/(2 * (1 AU)) to 132712440018 / (2 * (7 AU)) (where the 12-digit constant is the standard gravitational parameter of the sun). This is like 5.6 * 10^10 in Joules/kilogram, or about 3.4 * 10^34 Joules when we restore the reduced mass of the earth/sun (which I'm approximating as just the mass of the earth).
Wolframalpha helpfully supplies that this is 28 times the total energy released by the sun in 1 year.
Or, if you like, it's equivalent to the total mass energy of ~3.7 * 10^18 kilograms of matter (about 1.5% the mass of the asteroid Vesta).
So we won't be able to do this any time soon: it would require harnessing and controlling energy on the order of the sun's total output sustained over multiple years.
There might be an exceedingly clever way to do this by playing with orbits of nearby asteroids to perturb the orbit of the earth over long timescales, but the change in energy we're talking about here is pretty huge.
Replies from: Eniac↑ comment by Eniac · 2014-12-09T01:10:30.134Z · LW(p) · GW(p)
I think you have something there. You could design a complex, but at least metastable orbit for an asteroid sized object that, in each period, would fly by both Earth and, say, Jupiter. Because it is metastable, only very small course corrections would be necessary to keep it going, and it could be arranged such that at every pass Earth gets pushed out just a little bit, and Jupiter pulled in. With the right sized asteroid, it seems feasible that this process could yield the desired results after billions of years.
Replies from: Kyre↑ comment by Kyre · 2014-12-09T05:13:05.269Z · LW(p) · GW(p)
I thought this sounded familiar
Replies from: Eniac↑ comment by CBHacking · 2014-12-08T23:07:32.976Z · LW(p) · GW(p)
Ignoring the concept of "can we apply that much delta-V to a planet?", I'd be interested to know whether it's believed that there exists a "Goldilocks zone" suitable for life at all stages of a star's life. Intuitively it seems like there should be, but I'm not sure.
Of course, it should be pointed out that the common understanding of "when the sun becomes a red giant" may be a bit flawed; the sun will cool and expand, then collapse. On a human time scale, it will spend a lot of that time as a red giant, but if you simply took the Earth when its orbit started to be crowded by the inner edge of the Goldilocks zone and put it in a new orbit, that new orbit wouldn't be anywhere close to an eternally safe one. Indeed, I suspect that the outermost of the orbits required for the giant-stage sun would be too far from the sun at the time we'd first need to move the Earth.
↑ comment by mwengler · 2014-12-11T21:33:32.719Z · LW(p) · GW(p)
The sun's luminosity will rise by around 300X as it turns into a giant. If we wish to keep the same energy flux onto the earth at that point, we must increase the earth's orbital radius by a factor of sqrt(300) = 17X. The total energy of the earth's current orbit is 2.65E33 J. We must reduce this to 1/17 of its current value, or reduce it by (16/17)*2.65E33 J = 2.5E33 J. The current total annual energy production in the world is about 5E17 J. The sun will be a red giant in about 7.6E9 years. So we would need about a million times current global energy production running full time into rocket motors to push the earth out to a safe orbit by the time the sun has expanded.
But it is worse than that. The Sun actually expands over a scant 5 million years near the end of that 7.6E9 years. So to avoid freezing for billions of years because we have started moving away from the sun too soon, we essentially will need a billion times current energy production running into rocket engines for those 5 million years of solar expansion. But the good news is we have 7.6E9 years to figure out how to do that.
If we use plasma rockets which push reaction mass out at 1% the speed of light, then we will need a total of about 6E16 kg reaction mass, or about 0.000001% of the earth's total mass. The total mass of water on the earth is about 1E21 kg so we could do all of this using water as reaction mass and still have 99.99% of the water left when we are done.
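For anyone who wants to check the orbital-energy figures above, here is a rough sketch using standard constants (rounded; the luminosity factor of 300 is taken from the comment):

```python
import math

# Rough numbers; constants rounded.
GM_sun  = 1.327e20      # standard gravitational parameter of the Sun, m^3/s^2
M_earth = 5.97e24       # kg
AU      = 1.496e11      # m
L_ratio = 300           # assumed increase in solar luminosity as a red giant

r_now  = 1.0 * AU
r_then = math.sqrt(L_ratio) * r_now          # ~17 AU keeps the flux constant

def orbital_energy(r):
    """Total orbital energy (kinetic + potential) of Earth at radius r, in J.
    Negative for a bound orbit; a higher orbit is less negative."""
    return -GM_sun * M_earth / (2 * r)

delta_E = orbital_energy(r_then) - orbital_energy(r_now)
print(f"new radius: {r_then / AU:.1f} AU")
print(f"energy to raise the orbit: {delta_E:.2e} J")   # ~2.5e33 J
```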
Replies from: Nornagest↑ comment by DaFranker · 2014-12-09T15:21:35.563Z · LW(p) · GW(p)
I'm curious about the thought process that led to this being asked in the "stupid questions" thread rather than the "very advanced theoretical speculation of future technology" thread. =P
As a more serious answer: Anything that would effectively give us a means to alter mass and/or the effects of gravity in some way (if there turns out to be a difference) would help a lot.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-12-09T16:02:35.794Z · LW(p) · GW(p)
I wasn't sure there was a way to do it within current physics.
Now we get to the hard question: supposing we (broadly interpreted, it will probably be a successor species) want to move the earth outwards using those little gravitational nudges, how do we get civilizations with a sufficiently long attention span?
Replies from: DaFranker, DanielLC↑ comment by DaFranker · 2014-12-09T16:49:08.393Z · LW(p) · GW(p)
[...] how do we get civilizations with a sufficiently long attention span?
I heard Ritalin has a solution. Couldn't pay attention long enough to verify. ba-dum tish
On a serious note, isn't the whole killing-the-Earth-for-our-children thing a rather interesting scenario? I've never seen it mentioned in my game theory-related reading, and I find that to be somewhat sad. I'm pretty sure a proper modeling of the game scenario would cover both climate change and eaten-by-red-giant.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-12-09T17:10:01.293Z · LW(p) · GW(p)
I don't see the connection to killing the earth for our children. Moving the earth outwards is an effort to save the earth for our far future selves and our children.
Replies from: gjm↑ comment by gjm · 2014-12-09T19:11:39.724Z · LW(p) · GW(p)
I think "for our children" means "as far as our children are concerned" and failing to move the earth's orbit so it doesn't get eaten by the sun (despite being able to do it) would qualify as "killing the earth for our children". (The more usual referents being things like resource depletion and pollution with potentially disastrous long-term effects.)
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-12-09T19:17:26.948Z · LW(p) · GW(p)
Thanks. That makes sense.
↑ comment by Shmi (shminux) · 2014-12-09T19:45:48.268Z · LW(p) · GW(p)
Not "when the sun becomes a red giant", because red giants are variable on a much too short time scale, but, as others mentioned, we can probably keep the earth in a habitable zone for another 5 billion years or so. We have more than enough hydrogen on earth to provide the necessary potential energy increase with fusion-based propulsion, though building something like a 100 petaWatt engine is problematic at this point, (for comparison, it is a significant fraction of the total solar radiation hitting the earth).
EDIT: I suspect that terraforming Mars (and/or cooling down the Earth more efficiently when the Sun gets brighter) would require less energy than moving the Earth to the Mars orbit. My calculations could be off, though; hopefully someone can check them independently.
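A quick check of the "significant fraction" comparison, assuming a 100 petawatt engine and rounded values for the solar constant and Earth's radius:

```python
import math

solar_constant = 1361.0     # W/m^2 at 1 AU (rounded)
earth_radius   = 6.371e6    # m

# Total sunlight intercepted by the Earth's cross-section.
intercepted = solar_constant * math.pi * earth_radius**2   # ~1.7e17 W
engine      = 100e15                                       # 100 PW, in W

print(f"sunlight hitting Earth: {intercepted:.2e} W")
print(f"engine / sunlight: {engine / intercepted:.0%}")    # roughly 60%
```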
Replies from: Anomylous↑ comment by Anomylous · 2014-12-09T20:31:17.990Z · LW(p) · GW(p)
Only major problem I know of with terraforming Mars is how to give it a magnetic field. We'd have to somehow re-melt the interior of the planet. Otherwise, we could just put up with constant intense solar radiation, and atmosphere off-gassing into space. Maybe if we built a big fusion reactor in the middle of the planet...?
Replies from: shminux↑ comment by Shmi (shminux) · 2014-12-09T21:40:09.322Z · LW(p) · GW(p)
I recall estimating the power required to run an equatorial superconducting ring a few meters thick 1 km or so under the Mars surface with enough current to simulate Earth-like magnetic field. If I recall correctly, it would require about the current level of power generation on Earth to ramp it up over a century or so to the desired level. Then whatever is required to maintain it (mostly cooling the ring), which is very little. Of course, an accident interrupting the current flow would be an epic disaster.
Replies from: alienist↑ comment by alienist · 2014-12-11T06:17:38.545Z · LW(p) · GW(p)
Wouldn't it be more efficient to use that energy to destroy Mars and start building a Dyson swarm from the debris?
Replies from: shminux↑ comment by Shmi (shminux) · 2014-12-11T16:13:49.948Z · LW(p) · GW(p)
Let's do a quick estimate. Destroying a Mars-like planet requires expending the equivalent of its gravitational self-energy, ~GM^2/R, which is about 10^32J (which we could easily obtain from a comet 10 km in radius... consisting of antimatter!) For comparison, the Earth's magnetic field has about 10^26J of energy, a million times less. I leave it to you to draw the conclusions.
↑ comment by JoshuaZ · 2014-12-08T23:52:54.921Z · LW(p) · GW(p)
Yes, I saw an article a few years ago with a back-of-the-envelope estimate that suggested this would be doable if one could turn mass on the moon more or less directly into energy and use the moon as a gravitational tug to slowly move Earth out of the way. You can change mass almost directly into energy by feeding the mass into a few smallish black holes.
Replies from: blogospheroid↑ comment by blogospheroid · 2014-12-09T09:44:00.189Z · LW(p) · GW(p)
How do they propose to move the black holes? Nothing can touch a black hole, right?
Replies from: gjm, DanielLC↑ comment by DanielLC · 2014-12-09T18:03:21.430Z · LW(p) · GW(p)
It can, as long as you don't mind that you won't get it back when you're done. You have to constantly fuel the black hole anyway. Just throw the fuel in from the opposite direction that you want the black hole to go.
Replies from: Eniac↑ comment by Eniac · 2014-12-10T04:34:38.190Z · LW(p) · GW(p)
Throwing mass into a black hole is harder than it sounds. Conveniently sized black holes that you actually would have a chance at moving around are extremely small, much smaller than atoms, I believe. I think they would just sit there without eating much, despite strenuous efforts at feeding them. The cross-section is way too small.
To make matters worse, such holes would emit a lot of Hawking radiation, which would a) interfere with trying to feed them, and b) quickly evaporate them ending in an intense flash of gamma rays.
Replies from: DanielLC↑ comment by DanielLC · 2014-12-10T06:56:13.849Z · LW(p) · GW(p)
The problem is throwing mass into other mass hard enough to make a black hole in the first place.
Hawking radiation isn't a big deal. In fact, the problem is making a black hole small enough to get a significant amount of it. An atom-sized black hole has around a tenth of a watt of Hawking radiation. I think it might be possible to get extra energy from it. From what I understand, Hawking radiation is just what doesn't fall back in. If you enclose the black hole, you might be able to absorb some of this energy.
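A back-of-the-envelope check of that "tenth of a watt" figure, assuming "atom-sized" means a Schwarzschild radius of about 10^-10 m (an angstrom; that particular choice is an assumption):

```python
import math

# Physical constants (SI)
hbar = 1.055e-34   # J*s
c    = 2.998e8     # m/s
G    = 6.674e-11   # m^3 kg^-1 s^-2

def hawking_power(radius_m):
    """Hawking radiation power of a black hole with the given Schwarzschild
    radius, using P = hbar * c^6 / (15360 * pi * G^2 * M^2)."""
    mass = radius_m * c**2 / (2 * G)          # invert r_s = 2GM/c^2
    return hbar * c**6 / (15360 * math.pi * G**2 * mass**2), mass

power, mass = hawking_power(1e-10)            # "atom-sized": ~1 angstrom radius
print(f"mass  ~ {mass:.1e} kg")               # ~7e16 kg
print(f"power ~ {power:.2f} W")               # ~0.08 W
```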
Replies from: Eniac↑ comment by Eniac · 2014-12-11T03:39:52.906Z · LW(p) · GW(p)
Yes, making them would be incredibly hard, and because of their relatively short lifetimes, it would be extremely surprising to find any lying around somewhere. Atom-sized black holes would be very heavy and not produce much Hawking radiation, as you say. Smaller ones would produce more Hawking radiation, be even harder to feed, and evaporate much faster.
↑ comment by DanArmak · 2014-12-21T21:08:26.762Z · LW(p) · GW(p)
I don't really know if it's plausible, but Larry Niven's far-future fiction A World Out of Time (the novel, not the original short story of the same name) deals with exactly this problem.
His solution is a "fusion candle": build a huge double-ended fusion tube, put it in the atmosphere of a gas giant, and light it up. The thrust downwards keeps the tube floating in the atmosphere. The thrust upwards provides an engine to push the gas giant around. In the book, they pushed Uranus to Earth, and then moved it outwards again, gravitationally pulling the Earth along.
↑ comment by Daniel_Burfoot · 2014-12-08T23:00:24.764Z · LW(p) · GW(p)
This is a fascinating question. Very speculatively, I could imagine somehow using energy gained by pushing other objects closer to the Sun, to move the Earth away from the Sun. Like some sort of immense elastic band stretching between Mars and Earth, pulling Earth "up" and Mars "down".
Replies from: DanielLC
comment by knb · 2014-12-08T23:29:52.657Z · LW(p) · GW(p)
Would it be possible to slow down or stop the rise of sea level (due to global warming) by pumping water out of the oceans and onto the continents?
Replies from: Falacer, mwengler, CBHacking, DanielLC, Eniac, Capla↑ comment by Falacer · 2014-12-09T02:05:38.603Z · LW(p) · GW(p)
We could really use a new Aral sea, but intuitively I'd expected that this would be a tiny dent in the depth of the oceans. So, to the maths:
Wikipedia claims that from 1960 to 1998 the volume of the Aral sea dropped from its 1960 amount of 1,100 km^3 by 80%.
I'm going to give that another 5% for more loss since then, as the South Aral Sea has now lost its eastern half entirely.
This gives ~1100 * .85 = 935km^3 of water that we're looking to replace.
The Earth is ~500m km^2 in surface area, approx. 70% of which is water = 350m km^2 in water.
935 km^3 over an area of 350m km^2 comes to a depth of 2.6 mm.
This is massively larger than I would have predicted, and it gets better. The current salinity of the Aral Sea is 100g/l, which is way higher than that of seawater at 35g/l, so we could pretty much pump the water straight in and still have a net environmental gain. In fact, this is a solution to the crisis that has been previously proposed, although it looks like most people would rather dilute the seawater first.
To achieve the desired result of a 1 inch drop in sea level, we only need to find 9 equivalent projects around the world. Sadly, the only other one I know of is Lake Chad, which is significantly smaller than the Aral Sea. However, since the loss of the Aral Sea is due to over-intensive use of the water for farming, this gives us an idea of how much water can be contained on land in plants: I would expect that we might be able to get this amount again if we undertook a desalination/irrigation program in the Sahara.
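The arithmetic above, spelled out as a short script (figures rounded as in the comment; the last line counts the Aral itself among the roughly ten projects needed for a full inch):

```python
aral_volume_1960_km3 = 1100          # starting volume, per Wikipedia
fraction_lost        = 0.85          # 80% lost by 1998, plus a guessed 5% since
water_to_move_km3    = aral_volume_1960_km3 * fraction_lost   # ~935 km^3

earth_surface_km2 = 500e6            # ~500 million km^2
ocean_fraction    = 0.70
ocean_area_km2    = earth_surface_km2 * ocean_fraction        # ~350 million km^2

# Depth change = volume / area, converted from km to mm (1 km = 1e6 mm).
drop_mm = water_to_move_km3 / ocean_area_km2 * 1e6
print(f"sea level drop per Aral-sized project: {drop_mm:.1f} mm")         # ~2.7 mm

inch_mm = 25.4
print(f"Aral-sized projects for a 1 inch drop: {inch_mm / drop_mm:.0f}")  # ~10
```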
Replies from: mwengler, DanArmak↑ comment by mwengler · 2014-12-11T16:00:19.128Z · LW(p) · GW(p)
Dead Sea and Salton Sea leap to mind as good projects.
Also could we store more water in the atmosphere? If we just poured water into a desert like the Sahara, most of it would evaporate before it flowed back to the sea. This would seem to raise the average moisture content of the atmosphere. Sure eventually it gets rained back down, but this would seem to be a feature more than a bug for a world that keeps looking for more fresh water. Indeed my mind is currently inventing interesting methods for moving the water around using purely the heat from the sun as an energy source.
↑ comment by DanArmak · 2014-12-21T21:13:29.384Z · LW(p) · GW(p)
However, since the loss of the Aral Sea is due to over-intensive use of the water for farming, this gives us an idea of how much water can be contained on land in plants
Isn't it more of an indication of how much water can be contained in the Aral Sea basin? The plants don't need to contain all of the missing Aral Sea water at once, they just need to be watered faster than the Sea is being refilled by rainfall. How much water does rainfall supply every year, as a percentage of the Sea's total volume?
↑ comment by mwengler · 2014-12-11T15:54:59.844Z · LW(p) · GW(p)
I recommend googling "geoengineering global warming" and reading some of the top hits. There are numerous proposals for reducing or reversing global warming which are astoundingly less expensive than reducing carbon dioxide emissions, and also much more likely to be effective.
To your direct question about storing more water on land, this would be a geoengineering project. Some straightforward approaches to doing it:
Use rainfall as your "pump" in order to save having to build massive energy-consuming water pumps. Without any effort on our part, nature naturally lifts water a km or more above sea level and then drops it, much of it onto land. That water is generally funneled back to the ocean in rivers. With just the construction of walls, some rivers might be prevented from draining into the ocean. Large areas would be flooded by the river, storing water somewhere other than in the ocean.
Use gravity as your pump. There are many large areas on earth that are below sea level. Aqueducts requiring no net pumping energy could be built that would essentially gravity-feed ocean water into these areas. These areas can be hundreds of meters below sea level, so if even 1% of the earth's surface is 100 m below sea level, then the oceans could be lowered by a bit more than 1 m by filling these depressions with ocean water (rough numbers are sketched at the end of this comment).
Of course either one of these approaches will cause massive other changes, although probably in a positive direction as far as climate is concerned. More water surface on the planet should mean more evaporation of water, which creates more clouds, which reflect more energy from the sun, lowering the heating of the earth. But of course a non-trivial analysis might yield a rich set of effects worth pondering.
In the past, features like the Salton Sea and the Dead Sea have been filled by fresh-water rivers, essentially meaning that rain was used as the pump to fill them. The demand for fresh water has stopped these features from being filled. It seems to me that an aqueduct to refill these features with salt water from the ocean would be relatively benign in impact, since in nature these features have been fuller in the past, and so the impact of that water might be blessed by humanity as "natural" instead of cursed by humanity as "man made."
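A rough sketch of the numbers behind the gravity-fed approach above (the 1% of surface area and 100 m depth are the hypothetical figures from the comment):

```python
earth_surface_m2 = 5.1e14        # total surface area of the Earth
ocean_fraction   = 0.70
ocean_area_m2    = earth_surface_m2 * ocean_fraction

# Hypothetical: 1% of the Earth's surface sits 100 m below sea level
# and gets gravity-fed with seawater.
basin_area_m2 = 0.01 * earth_surface_m2
basin_depth_m = 100.0
stored_volume = basin_area_m2 * basin_depth_m       # m^3

sea_level_drop_m = stored_volume / ocean_area_m2
print(f"sea level drop: {sea_level_drop_m:.1f} m")  # a bit over 1 m
```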
↑ comment by CBHacking · 2014-12-09T00:10:17.018Z · LW(p) · GW(p)
Where does the water go? Assuming you want to reduce sea level by a 1/2 inch using this mechanism, you have to do the equivalent of covering the entire land area of the earth in a full inch of water (what's worse, seawater; you'd want to desalinate it). Even assuming you can find room on land for all this water and the pump capacity to displace it all, what's to stop it from washing right back out to sea? Some of it can be used to refill aquifers, but the capacity of those is trivial next to that of the oceans. Some of it can be stored as ice and snow, but global warming will reduce (actually, has already quite visibly reduced) land glaciation; even if you can somehow induce the water to freeze, the heat you extract from it will have to go somewhere, and unless you can dump it out of the atmosphere entirely it will just contribute to the warming. The rest of the water will just flood the existing rivers in its mad rush to do what nearly all continental water is always doing anyhow: flowing to sea.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2014-12-09T18:56:31.966Z · LW(p) · GW(p)
Clearly, the solution is to build a space elevator and ship water into orbit. We lower the sea levels, the water is there if we need it later, and in the meantime we get to enjoy the pretty rings.
(No, I'm not serious.)
Replies from: Vaniver↑ comment by Vaniver · 2014-12-09T19:02:03.211Z · LW(p) · GW(p)
in the meantime we get to enjoy the pretty rings.
Now I'm curious how much energy it would take to set up a stable ring orbit made of ice crystals for Earth, or if that would be impossible without stationkeeping corrections.
Replies from: Lumifer↑ comment by Lumifer · 2014-12-09T19:29:53.327Z · LW(p) · GW(p)
How long will ice survive in Earth's orbit, anyway?
Replies from: CBHacking↑ comment by CBHacking · 2014-12-09T23:26:13.169Z · LW(p) · GW(p)
I think it would depend on the orbit? Obviously it would need to be in an orbit that does not collide with our artificial satellites, and it would need to be high enough to make atmospheric drag negligible, but that leaves a lot of potential orbits. I can't think of any reason ice would go away with any particular haste from any of them, but I'm not an expert in this area.
Orbital decay aside, why might ice (once placed into an at-the-time stable orbit) not survive?
Replies from: Lumifer↑ comment by Lumifer · 2014-12-10T01:49:15.188Z · LW(p) · GW(p)
why might ice (once placed into an at-the-time stable orbit) not survive?
Sun.
Solar radiation at 1 AU is about 1.3 kW/sq.m. Ice that is not permanently in the shade will disappear rather rapidly, I would think.
Replies from: CBHacking↑ comment by CBHacking · 2014-12-10T07:52:05.158Z · LW(p) · GW(p)
I would think it would lose heat to space fast enough, but maybe not. I know heat dissipation is a major concern for spacecraft, but those are usually generating their own heat rather than just trying to dump what they pick up from the sun. What would happen to the ice / water? It's not like it can just evaporate into the atmosphere...
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-12-10T13:59:05.736Z · LW(p) · GW(p)
It's not like it can just evaporate into the atmosphere...
Vapour doesn't need an atmosphere to take it up. Empty space does just as well.
So, how long would a snowball in high orbit last? Sounds like a question for xkcd. A brief attempt at a lower bound that is probably a substantial underestimate:
How much energy has to be pumped in per kilogram to turn ice at whatever the "temperature" is in orbit into water vapour? Call that E. Let S be the solar insolation of 1.3 kW/m^2. Imagine the ice is a spherical cow, er, a rectangular block directly facing the sun. According to Wikipedia the albedo of sea ice is in the range 0.5 to 0.7. Take that as 0.6, so the fraction of energy retained is A = 0.4. The density of ice is D = 916.7 kg/m^3. Ignore radiative cooling, conduction to the cold side of the iceberg, and time spent in the Earth's shadow, and assume that the water vapour instantly vanishes. Then the surface will ablate at a rate of SA/(ED) m/s, or equivalently take ED/(86400 SA) days per metre.
For simplicity I'll take the ice to be at freezing point. Then:
E = 334 kJ/kg to melt + 420 kJ/kg to reach boiling point + 2260 kJ/kg to boil = 3014 kJ/kg.
For a lower starting temperature, increase E accordingly.
3014 * 916.7 / (86400 * 1.3 * 0.4) = 61 days per metre. Not all that long, but meanwhile, you've created a hazard for space flight and for the skyhook.
I suspect that ignoring radiative cooling will be the largest source of error here, but this isn't a black body, so I don't know how closely the Stefan-Boltzmann law will apply, and I haven't calculated the results if it did. (ETA: The black body temperature of the Moon is just under freezing.)
(ETA: fixed an error in the calculation of E, whereby I had 4200 instead of 420 kJ/kg to reach boiling point. Also, pasting in all the significant figures from the sources doesn't mean this is claimed to be anything more than a rough estimate.)
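The same estimate as a short script, under the same simplifying assumptions (sunlit block of ice, albedo 0.6, no radiative cooling, melt-then-boil rather than sublimation):

```python
S = 1300.0       # solar insolation at 1 AU, W/m^2
A = 0.4          # fraction of sunlight absorbed (albedo ~0.6)
D = 916.7        # density of ice, kg/m^3

# Energy to take ice at 0 C all the way to vapour, J/kg
E = (334e3       # latent heat of melting
     + 420e3     # heating liquid water from 0 C to 100 C
     + 2260e3)   # latent heat of vaporization

ablation_rate = S * A / (E * D)                 # metres of ice lost per second
days_per_metre = 1 / (ablation_rate * 86400)
print(f"{days_per_metre:.1f} days per metre")   # ~61 days per metre
```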
Replies from: Lumifer↑ comment by Lumifer · 2014-12-10T15:38:13.910Z · LW(p) · GW(p)
to reach boiling point
This is vacuum -- all liquid water will boil immediately, at zero Celsius. Besides I'm sure there will be some sublimation of ice directly to water vapor.
In fact, looking at water's phase diagram, in high vacuum liquid water just doesn't exist so I think ice will simply sublimate without the intermediate liquid stage.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-12-10T16:01:28.157Z · LW(p) · GW(p)
Right, I forgot the effect of pressure. So E will be different, perhaps very different. What will it be?
Replies from: Lumifer↑ comment by Lumifer · 2014-12-10T16:24:39.156Z · LW(p) · GW(p)
Here is the proper math. This is expressed in terms of ice temperature, though, so we'll need to figure out how much the solar flux would heat the outer layer of ice first.
↑ comment by DanielLC · 2014-12-09T18:07:14.220Z · LW(p) · GW(p)
One possibility would be to replace the ice caps by hand. Run a heated pipeline from the ocean to the icecaps, pump water there, and let it freeze on its own. I don't know how well that would work, and I suspect you're better off just letting sea levels rise. If you need the land that bad, just make floating platforms.
Edit: Replace "ice caps" with "Antarctica". Adding ice to the northern icecap, or even the southern one out where it's floating, won't alter the sea level since floating objects displace their mass in water.
↑ comment by Eniac · 2014-12-09T01:31:18.491Z · LW(p) · GW(p)
Well, this is not pumping, but it might be much more efficient: As I understand, the polar ice caps are in an equilibrium between snowfall and runoff. If you could somehow wall in a large portion of polar ice, such that it cannot flow away, it might rise to a much higher level and sequester enough water to make a difference in sea levels. A super-large version of a hydroelectric dam, in effect, for ice.
It might also help to have a very high wall around the patch to keep air from circulating, keeping the cold polar air where it is and reduce evaporation/sublimation.
↑ comment by Capla · 2014-12-12T02:06:07.360Z · LW(p) · GW(p)
This should be a What If question. I'd like to see what Randall would do with it.
Replies from: knb↑ comment by knb · 2014-12-12T04:29:28.002Z · LW(p) · GW(p)
I don't know what you mean. Who is Randall?
Replies from: Capla, Lumifer↑ comment by Capla · 2014-12-12T05:07:04.045Z · LW(p) · GW(p)
Randall Munroe is the person who draws xkcd. He also has a blog where he gives in-depth answers to unusual questions.
comment by Punoxysm · 2014-12-08T21:00:49.463Z · LW(p) · GW(p)
Can anyone link a deep discussion of what would be involved in interstellar travel, including energy and time requirements, issues with spaceship shielding from radiation and collisions, etc.? I ask because I am wondering whether this is substantially more difficult than we often imagine, and perhaps a bottleneck in the Drake Equation.
Replies from: Alsadius, gjm, shminux, Eniac, lukeprog↑ comment by Alsadius · 2014-12-09T00:06:15.002Z · LW(p) · GW(p)
tl;dr: It is definitely more difficult than most people think, because most people's thoughts(even scientifically educated ones) are heavily influenced by sci-fi, which is almost invariably premised on having easy interstellar transport. Even the authors like Clarke with difficult interstellar transport assume that the obvious problems(e.g., lightspeed) remain, but the non-obvious problems(e.g., what happens when something breaks when you're two light-years from the nearest macroscopic object) disappear.
↑ comment by gjm · 2014-12-09T02:02:17.179Z · LW(p) · GW(p)
Some comments on this from Charles Stross. Not optimistic about the prospects. Somewhat quantitative, at the back-of-envelope level of detail.
↑ comment by Shmi (shminux) · 2014-12-08T21:08:45.086Z · LW(p) · GW(p)
Project Icarus seems like a decent place to start.
↑ comment by Eniac · 2014-12-10T04:41:14.928Z · LW(p) · GW(p)
You might want to check out Centauri Dreams, best blog ever and dedicated to this issue.
↑ comment by lukeprog · 2014-12-10T03:46:42.914Z · LW(p) · GW(p)
A fair bit of this is either cited or calculated within "Eternity in six hours." See also my interview with one of its authors, and this review by Nick Beckstead.
comment by Anatoly_Vorobey · 2014-12-09T22:35:29.699Z · LW(p) · GW(p)
Is there a causal link between being relatively lonely and isolated during school years and (higher chance of) ending up a more intelligent, less shallow, more successful adult?
Imagine that you have a pre-school child who has socialization problems, finds it difficult to do anything in a group of other kids, to acquire friends, etc., but cognitively the kid's fine. If nothing changes, the kid is looking at being shunned or mocked as weird throughout school. You work hard on overcoming the social issues, maybe you go with the kid to a therapist, you arrange play-dates, you play-act social scenarios with them..
Then your friend comes up to have a heart-to-heart talk with you. Look, your friend says. You were a nerd at school. I was a nerd at school. We each had one or two friends at best and never hung out with popular kids. We were never part of any crowd. Instead we read books under our desks during lessons and read SF novels during the breaks and read science encyclopedias during dinner at home, and started programming at 10, and and and. Now you're working so hard to give your kid a full social life. You barely had any, are you sure now you'd rather you had it otherwise? Let me be frank. You have a smart kid. It's normal for a smart kid to be kind of lonely throughout school, and never hang out with lots of other kids, and read books instead. It builds substance. Having a lousy social life is not the failure scenario. The failure scenario is to have a very full and happy school experience and end up a ditzy adolescent. You should worry about that much much more, and distribute your efforts accordingly.
Is your friend completely asinine, or do they have a point?
Replies from: Viliam_Bur, philh, alienist, John_Maxwell_IV↑ comment by Viliam_Bur · 2014-12-09T23:15:38.838Z · LW(p) · GW(p)
Seems to me that very high intelligence can cause problems with socialization: you are different from your peers, so it is more difficult for you to model them, and for them to model you. You see each other as "weird". (Similar problem for very low intelligence.) Intelligence causes loneliness, not the other way round.
But this depends on the environment. If you are highly intelligent person surrounded by enough highly intelligent people, then you do have a company of intellectual peers, and you will not feel alone.
I am not sure about the relation between reading many books and being "less shallow". Do intelligent kids surrounded by intelligent kids also read a lot?
Replies from: dxu↑ comment by dxu · 2014-12-11T04:53:11.081Z · LW(p) · GW(p)
All of this is very true (for me, anyway--typical mind fallacy and all that). High intelligence does seem to cause social isolation in most situations. However, I also agree with this:
But this depends on the environment. If you are highly intelligent person surrounded by enough highly intelligent people, then you do have a company of intellectual peers, and you will not feel alone.
High intelligence does not intrinsically have a negative effect on your social skills. Rather, I feel that it's the lack of peers that does that. Lack of peers leads to lack of relatability leads to lack of socialization leads to lack of practice leads to (eventually) poor social skills. Worse yet, eventually that starts feeling like the norm to you; it no longer feels strange to be the only one without any real friends. When you do find a suitable social group, on the other hand, I can testify from experience that the feeling is absolutely exhilarating. That's pretty much the main reason I'm glad I found Less Wrong.
Replies from: Tem42↑ comment by Tem42 · 2015-07-04T03:15:47.618Z · LW(p) · GW(p)
It is not true that people cannot - or do not - interact successfully with people that are less intelligent than they are. Many children get along well with their younger siblings. Many adults love being kindergarten teachers... Or feel highly engaged working in the dementia wing of the rest home. Many people of all intelligence levels love having very dumb pets. These are not people (or beings) that you relate to because of their 'relatability' in the sense that they are like you, but because they are meaningful to you. And interacting with people builds social skills appropriate to those people -- which may not be very generalizable when you are practicing interacting with kindergarten students, but is certainly a useful skill when you are interacting with average people.
I personally would think that the problem under discussion is not related to intelligence, but in trying to help an introvert identify the most fulfilling interpersonal bonds without making them more social in a general sense. However, I don't know the kid in question, so I can't say.
↑ comment by philh · 2014-12-09T22:57:17.650Z · LW(p) · GW(p)
My friend isn't obviously-to-me wrong, but their argument is unconvincing to me.
It's normal for a smart kid to be kind of lonely - if true, that's sad, and by default we should try to fix it.
It builds substance - citation needed. It seems like it could just as easily build insecurity, resentment, etc.
Lousy social life - this is a failure mode. It might not be the worst one, but it seems like the most likely one, so deserving of attention.
Ditzy adolescent - how likely is this?
FWIW, I'm an adult who was kind of lonely as a kid, and on the margin I think that having a more active social life then would have had positive effects on me now.
Replies from: dxu, NancyLebovitz↑ comment by dxu · 2014-12-11T05:08:15.919Z · LW(p) · GW(p)
It's normal for a smart kid to be kind of lonely - if true, that's sad, and by default we should try to fix it.
True, but it may be one of those problems that's just not fixable without seriously restructuring the school system, especially if something like Viliam_Bur's theory is true.
It builds substance - citation needed. It seems like it could just as easily build insecurity, resentment, etc.
Speaking from experience, I can tell you that I know a lot more than any of my peers (I'm 16), and practically all of that is due to the reading I did and am still doing. That reading was a direct result of my isolation and would likely not have occurred had I been more socially accepted. I should add that I have never once felt resentment or insecurity due to this, though I have developed a slight sense of superiority. (That last part is something I am working to fix.)
Lousy social life - this is a failure mode. It might not be the worst one, but it seems like the most likely one, so deserving of attention.
I suppose this one depends on how you define a "failure mode". I have never viewed my lack of social life as a bad thing or even a hindrance, and it doesn't seem like it will have many long-term effects either--it's not like I'll be regularly interacting with my current peers for the rest of my life.
Ditzy adolescent - how likely is this?
Again, this depends on how you define "ditzy". Based on my observations of a typical high school student at my age, I would not hesitate to classify over 90% of them as "ditzy", if by "ditzy" you mean "playing social status games that will have little impact later on in life". I shudder at the thought of ever becoming like that, which to me sounds like a much worse prospect than not having much of a social life.
FWIW, I'm an adult who was kind of lonely as a kid, and on the margin I think that having a more active social life then would have had positive effects on me now.
I see. Well, to each his own. I myself cannot imagine growing up with anything other than the childhood I did, but that may just be lack of imagination on my part. Who knows; maybe I would have turned out better than I did if I had had more social interaction during childhood. Then again, I might not have. Without concrete data, it's really hard to say.
Replies from: mindspillage↑ comment by mindspillage · 2014-12-13T08:11:33.157Z · LW(p) · GW(p)
It builds substance - citation needed. It seems like it could just as easily build insecurity, resentment, etc.
Speaking from experience, I can tell you that I know a lot more than any of my peers (I'm 16), and practically all of that is due to the reading I did and am still doing. That reading was a direct result of my isolation and would likely not have occurred had I been more socially accepted. I should add that I have never once felt resentment or insecurity due to this, though I have developed a slight sense of superiority. (That last part is something I am working to fix.)
Reading a ton as a teen was very helpful to me also, but I think I would have still done it if I had a rich social life of people who were also smart and enjoyed reading. Ultimately being around peers who challenge me is more motivating than being isolated; I don't want to be the one dragging behind.
I do feel that I had to learn a fair amount of basic social skills through deliberately watching and taking apart, rather than just learning through doing--making me somewhat the social equivalent of someone who has learned a foreign language through study rather than by growing up a native speaker; I have the pattern of strengths and weaknesses associated with the different approach.
↑ comment by NancyLebovitz · 2014-12-10T04:16:44.433Z · LW(p) · GW(p)
There may be a choice between a lot of time thinking/learning vs. a lot of time socializing.
It seems to me that a lot of famous creative people were childhood invalids, though I haven't heard of any such from recent decades. It may be that the right level of invalidism isn't common any more.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-12-13T09:17:35.834Z · LW(p) · GW(p)
I think I remember reading that famous inventors were likely to be isolated due to illness as children. I think it's unlikely that intelligence is decreased by being well-socialized, but it seems possible to me that people who are very well-socialized might find themselves thinking of fewer original ideas.
comment by gattsuru · 2014-12-08T22:06:04.531Z · LW(p) · GW(p)
Are there any good trust, value, or reputation metrics in the open source space? I've recently established a small internal-use Discourse forum and been rather appalled by the limitations of what is intended to be a next-generation system (status flag, number of posts, tagging), and from a quick overview most competitors don't seem to be much stronger. Even fairly specialist fora only seem marginally more capable.
This is obviously a really hard problem and conflux of many other hard problems, but it seems odd that there are so many obvious improvements available.
((Inspired somewhat by my frustration with Karma, but I'm honestly more interested in its relevance for outside situations.))
Replies from: Viliam_Bur, Lumifer, fubarobfusco, Lumifer↑ comment by Viliam_Bur · 2014-12-09T10:42:17.163Z · LW(p) · GW(p)
Tangentially, is it possible for a good reputation metric to survive attacks in real life?
Imagine that you become e.g. a famous computer programmer. But although you are a celebrity among free software people, you fail to convert this fame into money. So you must keep a day job at a computer company which produces shitty software.
One day your boss will realize that you have high prestige in the given metric, and the company has low prestige. So the boss will ask you to "recommend" the company on your social network page (which would increase the company prestige and hopefully increase the profit; might decrease your prestige as a side effect). Maybe this would be illegal, but let's suppose it isn't, or that you are not in a position to refuse. Or you could imagine a more dramatic situation: you are a widely respected political or economical expert, it is 12 hours before election, and a political party has kidnapped your family and threatens to kill them unless you "recommend" this party, which according to their model would help them win the election.
In other words, even a digital system that works well could be vulnerable to attacks from outside of the system, where otherwise trustworthy people are forced to act against their will. A possible defense would be if people could somehow hide their votes; e.g. your boss might know that you have high prestige and the company has low prestige, but has no methods to verify whether you have "recommended" the company or not (so you could just lie that you did). But if we make everything secret, is there a way to verify whether the system is really working as described? (The owner of the system could just add 9000 trust points to his favorite political party and no one would ever find out.)
I suspect this is all confused and I am asking the wrong question. So feel free to answer the question I should have asked.
Replies from: gattsuru, kpreid↑ comment by gattsuru · 2014-12-09T20:19:03.681Z · LW(p) · GW(p)
There are simultaneously a large number of laws prohibiting employers from retaliating against persons for voting, and a number of accusations of retaliation for voting. So this isn't a theoretical issue. I'm not sure it's distinct from other methods of compromising trusted users -- the effects are similar whether the compromised node was beaten with a wrench, got brain-eaten, or just trusted Microsoft with their Certificates -- but it's a good demonstration that you simply can't trust any node inside a network.
(There's some interesting overlap with MIRI's value stability questions, but they're probably outside the scope of this thread and possibly only metaphor-level.)
Interestingly, there are some security metrics designed with the assumption that some number of their nodes will be compromised, and with some resistance to such attacks. I've not seen this expanded to reputation metrics, though, and there are technical limitations. TOR, for example, can only resist about a third of its nodes being compromised, and possibly fewer than that. Other setups have higher theoretical resistance, but depend on central high-value nodes that trade resistance to compromise for vulnerability to spoofing.
It seems like there's some value in closing the gap between carrier wave and signal in reputation systems, rather than running a discrete reputation system, but my sketched-out implementations become computationally intractable quickly.
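For context on where "about a third" figures like that tend to come from: the classic Byzantine agreement bound says a network of n nodes can tolerate at most f arbitrarily misbehaving nodes where n >= 3f + 1. Whether that is exactly the bound behind the TOR number is my assumption, not something stated above. A trivial sketch of the arithmetic:

```python
# Hedged sketch: the Byzantine agreement bound n >= 3f + 1, i.e. at most
# floor((n - 1) / 3) arbitrarily misbehaving nodes can be tolerated.
# Linking this bound to the TOR figure above is an assumption, not a claim
# made in the original comment.
def max_tolerable_compromised(n_nodes: int) -> int:
    return (n_nodes - 1) // 3

for n in (4, 10, 100):
    print(n, "nodes ->", max_tolerable_compromised(n), "compromised tolerated")
```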
↑ comment by kpreid · 2014-12-09T18:07:27.286Z · LW(p) · GW(p)
I don't have a solution for you, but a related probably-unsolvable problem is what some friends of mine call “cashing in your reputation capital”: having done the work to build up a reputation (for trustworthiness, in particular), you betray it in a profitable way and run.
… otherwise trustworthy people are forced to act against their will. … But if we make everything secret, is there a way to verify whether the system is really working as described?
This is a problem in elections. In the US (I believe depending on state) there are rules which are intended to prevent someone from being able to provide proof that they have voted a particular way (to make coercion futile), and the question then is whether the vote counting is accurate. I would suggest that the topic of designing fair elections contains the answer to your question insofar as an answer exists.
Replies from: alienist↑ comment by alienist · 2014-12-11T06:57:51.017Z · LW(p) · GW(p)
In the US (I believe depending on state) there are rules which are intended to prevent someone from being able to provide proof that they have voted a particular way (to make coercion futile),
And then there are absentee ballots which potentially make said laws a joke.
↑ comment by Lumifer · 2014-12-09T18:39:18.810Z · LW(p) · GW(p)
Are there any good trust, value, or reputation metrics
The first problem is defining what you want to measure. "Trust" and "reputation" are two-argument functions, and "value" is notoriously vague.
Replies from: gattsuru↑ comment by gattsuru · 2014-12-09T20:32:08.623Z · LW(p) · GW(p)
For clarity, I meant "trust" and "reputation" in the technical senses, where "trust" is authentication, and where "reputation" is an assessment or group of assessments for (ideally trusted) user ratings of another user.
But good point, especially for value systems.
Replies from: Lumifer↑ comment by Lumifer · 2014-12-09T21:10:41.302Z · LW(p) · GW(p)
I am still confused. When you say that trust is authentication, what is it that you authenticate? Do you mean trust in the same sense as "web of trust" in PGP-type crypto systems?
For reputation as an assessment of user ratings, you can obviously build a bunch of various metrics, but the real question is which one is the best. And that question implies another one: Best for what?
Note that weeding out idiots, sockpuppets, and trolls is much easier than constructing a useful-for-everyone ranking of legitimate users. Different people will expect and want your rankings to do different things.
Replies from: gattsuru↑ comment by gattsuru · 2014-12-09T23:34:00.943Z · LW(p) · GW(p)
what is it that you authenticate? Do you mean trust in the same sense as "web of trust" in PGP-type crypto systems?
For starters, a system to be sure that a user or service is the same user or service it was previously. A web of trust /or/ a central authority would work, but honestly we run into limits even before the gap between electronic worlds and meatspace. PGP would be nice, but PGP itself is closed-source, and neither PGP nor OpenPGP/GPG are user-accessible enough to even survive in the e-mail sphere they were originally intended to operate in. SSL allows for server authentication (ignoring the technical issues), but isn't great for user authentication.
I'm not aware of any generalized implementation for other use, and the closest precursors (keychain management in Murmur/Mumble server control?) are both limited and intended to be application-specific. But at the same time, I recognize that I don't follow the security or open-source worlds as much as I should.
For reputation as an assessment of user ratings, you can obviously build a bunch of various metrics, but the real question is which one is the best. And that question implies another one: Best for what?
Oh, yeah. It's not an easy problem to solve right.
I'm more interested in whether anyone's trying to solve it. I can see a lot of issues with a user-based reputation even beyond the obvious limitations and tradeoffs that fubarobfusco points out -- a visible metric is more prone to being gamed, but obscuring the metric reduces its utility as feedback for 'good' posting; value drift without a defined root versus possible closure without one; and so on.
What surprises me is that there are so few attempts to improve the system beyond the basics. IP.Board, vBulletin, and phpBB plugins are usually pretty similar -- the best I've seen merely lets you disable them on a per-subforum basis rather than globally, and they otherwise use a single point score. Reddit uses the same karma system whether you're answering a complex scientific question or making a bad joke. LessWrong improves on that only by allowing users to see how contentious a comment's scoring is. Discourse uses a count of posts and tags, which is almost embarrassingly minimalistic. I've seen a few systems that make moderator and admin 'likes' count for more. I think that's about the fanciest.
I don't expect them to have an implementation that matches my desires, but I'm really surprised that there are no attempts to run multi-dimensional reputation systems, to weigh votes by length of post or age of poster, or to apply spell-check or capitalization thresholds. These might even be /bad/ decisions, but usually you see someone making them.
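To make the "weigh votes by length of post or age of poster" idea concrete, here is a toy sketch; the thresholds and formula are invented for illustration and are not taken from any existing forum package:

```python
# Toy sketch: each upvote's weight depends on the voter's account age and the
# rated post's length. The specific numbers are arbitrary illustrations.
def vote_weight(voter_account_age_days: float, post_length_chars: int) -> float:
    age_factor = min(voter_account_age_days / 365.0, 1.0)    # ramps up over a year
    length_factor = 0.5 if post_length_chars < 140 else 1.0  # discount one-liners
    return age_factor * length_factor

def score(post_length_chars, voter_ages_days):
    return sum(vote_weight(age, post_length_chars) for age in voter_ages_days)

print(score(800, [10, 400, 1200]))   # long post, mixed-age voters
print(score(60, [10, 400, 1200]))    # short post, same voters
```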
I expect Twitter or Facebook have something complex underneath the hood, but if they do, they're not talking about the specifics and not doing a very good job. Maybe it's their dominance in the social development community, but I dunno.
Replies from: Lumifer, iamthelowercase↑ comment by Lumifer · 2014-12-10T02:00:48.037Z · LW(p) · GW(p)
For starters, a system to be sure that a user or service is the same user or service it was previously.
That seems to be pretty trivial. What's wrong with a username/password combo (besides all the usual things) or, if you want to get a bit more sophisticated, with having the user generate a private key for himself?
You don't need a web of trust or any central authority to verify that the user named X is in possession of a private key which the user named X had before.
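A minimal sketch of that check, assuming the Python cryptography package; the storage dict, user name, and challenge flow are hypothetical illustration rather than any particular forum's implementation:

```python
# Challenge-response sketch: user X proves possession of the same private key
# the site saw at registration. All names here are hypothetical.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Registration: the user generates a keypair and the site stores the public half.
user_key = Ed25519PrivateKey.generate()
stored_public_keys = {"X": user_key.public_key()}

# Later visit: the site issues a random challenge, the user signs it,
# and the site verifies the signature against the stored public key.
challenge = os.urandom(32)
signature = user_key.sign(challenge)

try:
    stored_public_keys["X"].verify(signature, challenge)
    print("Same key as before: treat this as user X")
except InvalidSignature:
    print("Signature check failed: not the same key")
```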
I'm more interested in if anyone's trying to solve it.
Well, again, the critical question is: What are you really trying to achieve?
If you want the online equivalent of the meatspace reputation, well, first meatspace reputation does not exist as one convenient number, and second it's still a two-argument function.
there's no attempts to run multi-dimensional reputation systems, to weigh votes by length of post or age of poster, spellcheck or capitalizations thresholds.
Once again, with feeling :-D -- to which purpose? Generally speaking, if you run a forum all you need is a way to filter out idiots and trolls. Your regular users will figure out reputation on their own and their conclusions will be all different. You can build an automated system to suit your fancy, but there's no guarantee (and, actually, a pretty solid bet) that it won't suit other people well.
I expect Twitter or FaceBook have something complex underneath the hood
Why would Twitter or FB bother assigning reputation to users? They want to filter out bad actors and maximize their eyeballs and their revenue which generally means keeping users sufficiently happy and well-measured.
Replies from: fubarobfusco, gattsuru↑ comment by fubarobfusco · 2014-12-10T02:30:11.017Z · LW(p) · GW(p)
That seems to be pretty trivial. What's wrong with a username/password combo (besides all the usual things)
"All the usual things" are many, and some of them are quite wrong indeed.
If you need solid long-term authentication, outsource it to someone whose business depends on doing it right. Google for instance is really quite good at detecting unauthorized use of an account (i.e. your Gmail getting hacked). It's better (for a number of reasons) not to be beholden to a single authentication provider, though, which is why there are things like OpenID Connect that let users authenticate using Google, Facebook, or various other sources.
On the other hand, if you need authorization without (much) authentication — for instance, to let anonymous users delete their own posts, but not other people's — maybe you want tripcodes.
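For illustration, here is a simplified tripcode-style sketch. Real imageboard tripcodes use a different, crypt(3)-based algorithm; this only shows the underlying idea of a short public tag derived from a private secret:

```python
# Simplified tripcode-style sketch (not the classic crypt(3) algorithm): a post
# carries a short tag derived from a secret only the original poster knows, so
# later actions can be tied to that secret without any account system.
import hashlib

def tripcode(secret: str, length: int = 10) -> str:
    """Derive a short public tag from a private secret."""
    return hashlib.sha256(secret.encode("utf-8")).hexdigest()[:length]

print("Anonymous !" + tripcode("my private passphrase"))
```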
And if you need to detect sock puppets (one person pretending to be several people), you may have an easy time or you may be in hard machine-learning territory. (See the obvious recent thread for more.) Some services — like Wikipedia — seem to attract some really dedicated puppeteers.
↑ comment by gattsuru · 2014-12-15T21:00:47.672Z · LW(p) · GW(p)
What's wrong with a username/password combo (besides all the usual things) or, if you want to get a bit more sophisticated, with having the user generate a private key for himself?
In addition to the usual problems, which are pretty serious to start with, you're relying on the client. To borrow from information security, the client is in the hands of the enemy. Sockpuppet (Sybil, in trust networks) attacks, where one entity pretends to be many different users (aka sockpuppets), and impersonation attacks, where a user pretends to be someone they are not, are both well-documented and exceptionally common. Every forum package I can find relies on social taboos or simply ignoring the problem, followed by direct human administrator intervention, and most don't even make administrator intervention easy.
There are also very few sites that have integrated support for private-key-like technologies, and most forum packages are not readily compatible with even all password managers.
This isn't a problem that can be perfectly solved, true. But right now it's not even got bandaids.
Once again, with feeling :-D -- to which purpose? Generally speaking, if you run a forum all you need is a way to filter out idiots and trolls. Your regular users will figure out reputation on their own and their conclusions will be all different.
"Normal" social reputation runs into pretty significant issues as soon as your group size exceeds even fairly small groups -- I can imagine folk who could handle a couple thousand names, but it's common for a site to have orders of magnitude more users. These systems can provide useful tools for noticing and handling matters that are much more evident in pure data than in "expert judgments". But these are relatively minor benefits.
At a deeper level, a well-formed reputation system should encourage 'good' posting (posting that matches the expressed desires of the forum community) and discourage 'bad' posts (posting that goes against the expressed desires of the forum community), as well as reduce incentives toward me-too or this-is-wrong-stop responses.
This isn't without trade-offs: you'll implicitly make the forum's culture drift more slowly, and encourage surviving dissenters to be contrarians for whom the reputation system doesn't matter. But the existing reputation systems don't let you make that trade-off, and instead you have to decide whether to use a far more naive system that is very vulnerable to attack.
You can build an automated system to suit your fancy, but there's no guarantee (and, actually, a pretty solid bet) that it won't suit other people well.
To some extent -- spell-check and capitalization expectations for a writing community will be different from those of a video game or chemistry forum, and help forums will expect shorter-lifespan users than the median community -- but a sizable number of these aspects are common to nearly all communities.
Why would Twitter or FB bother assigning reputation to users? They want to filter out bad actors and maximize their eyeballs and their revenue which generally means keeping users sufficiently happy and well-measured.
They have incentives toward keeping users. "Bad" posters are tautologically a disincentive for most users (exceptions: some folk do show revealed preferences for hearing from terrible people).
Replies from: Lumifer↑ comment by Lumifer · 2014-12-15T21:21:36.992Z · LW(p) · GW(p)
the client is in the hands of the enemy
Yes, of course, but if we start to talk in these terms, the first in line is the standard question: What is your threat model?
I also don't think there's a good solution to sockpuppetry short of mandatory biometrics.
But the existing reputation systems don't let you make that trade-off
Why not? The trade-off is in the details of how much reputation matters. There is a large space between reputation being just a number that's not used anywhere and reputation determining what, how, and when can you post.
very vulnerable to attack
Attack? Again, threat model, please.
"Bad" posters are tautologically a disincentive for most users
Not if you can trivially easily block/ignore them, which is the case for Twitter and FB.
Replies from: gattsuru↑ comment by gattsuru · 2014-12-15T23:57:16.218Z · LW(p) · GW(p)
What is your threat model?
An attacker creates a large number of nodes and overwhelms any signal in the initial system.
For the specific example of a reddit-based forum, it's trivial for an attacker to make up a sizable proportion of assigned reputation points through the use of sockpuppets. It is only moderately difficult for an attacker to automate the time-consuming portions of this process.
I also don't think there's a good solution to sockpuppetry short of mandatory biometrics.
10% of the problem is hard. That does not explain the small amount of work done on the other 90%. The vast majority of sockpuppets aren't that complicated: most don't use VPNs or anonymizers, most don't use large stylistic variation, and many even use the same browser from one persona to the next. It's also common for sockpuppets to have certain network attributes in common with their original persona. Full authorship analysis has both structural (primarily training bias) and pragmatic (CPU time) limitations that would make it infeasible for large forums...
But there are a number of fairly simple steps to fight sockpuppets that computers handle better than humans, yet these still require often-unpleasant manual work to check.
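A minimal sketch of one such simple step, assuming a hypothetical account table with coarse network and browser attributes (a real system would use many more signals and fuzzier matching):

```python
# Flag accounts that share coarse network/browser fingerprints. The data layout
# and the /24 grouping are illustrative assumptions, not a recommended design.
from collections import defaultdict

accounts = [
    {"name": "alice", "ip": "203.0.113.7",  "user_agent": "Firefox/34.0"},
    {"name": "bob",   "ip": "198.51.100.2", "user_agent": "Chrome/39.0"},
    {"name": "bob2",  "ip": "198.51.100.9", "user_agent": "Chrome/39.0"},
]

def subnet(ip: str) -> str:
    return ".".join(ip.split(".")[:3])   # crude /24 grouping

groups = defaultdict(list)
for acct in accounts:
    groups[(subnet(acct["ip"]), acct["user_agent"])].append(acct["name"])

for key, names in groups.items():
    if len(names) > 1:
        print("possible sockpuppet cluster:", key, names)
```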
Why not? The trade-off is in the details of how much reputation matters. There is a large space between reputation being just a number that's not used anywhere and reputation determining what, how, and when can you post.
Yes, but there aren't open-source systems that exist and have documentation which do these things beyond the most basic level. At most, there are simple reputation systems where a small amount has an impact on site functionality, such as this site. But Reddit's codebase does not allow upvotes to be limited or weighted based on the age of an account, and would require pretty significant work to change any of these attributes. (The main site at least acts against some of the more overt mass-downvoting by acting against downvotes applied to the profile page, but this doesn't seem present here?)
Not if you can trivially easy block/ignore them which is the case for Twitter and FB.
If a large enough percentage of outside user content is "bad", users begin to treat that space as advertising and ignore it. Many forums also don't make it easy to block users (see: here), and almost none handle blocking even the most overt of sockpuppets well.
Replies from: alienist, Lumifer↑ comment by alienist · 2014-12-19T06:11:28.516Z · LW(p) · GW(p)
An attacker creates a large number of nodes and overwhelms any signal in the initial system.
For the specific example of a reddit-based forum, it's trivial for an attacker to make up a sizable proportion of assigned reputation points through the use of sockpuppets. It is only moderately difficult for an attacker to automate the time-consuming portions of this process.
Limit the ability of low karma users to upvote.
↑ comment by Lumifer · 2014-12-16T01:34:54.478Z · LW(p) · GW(p)
You seem to want to build a massive sledgehammer-wielding mech to solve the problem of fruit flies on a banana.
So the attacker expends a not inconsiderable amount of effort to build his sockpuppet army and achieves sky-high karma on a forum. And..? It's not like you can sell karma or even gain respect for your posts from other than newbies. What would be the point?
Not to mention that there is a lot of empirical evidence out there -- formal reputation systems on forums go back at least as far as early Slashdot and, y'know, they kinda work. They don't achieve anything spectacular, but they also tend not to have massive failure modes. Once the sockpuppet general gains the attention of an admin or at least a moderator, his army is useless.
You want to write a library which will attempt to identify sockpuppets through some kind of multifactor analysis? Sure, that would be a nice thing to have -- as long as it's reasonable about things. One of the problems with automated defense mechanisms is that they can often be used as DoS tools if the admin is not careful.
If a large enough percentage of outside user content is "bad"
That still actually is the case for Twitter and FB.
↑ comment by iamthelowercase · 2015-06-12T16:25:04.840Z · LW(p) · GW(p)
Inre: Facebook/Twitter:
TL;DR: I think Twitter, Facebook, et al. do have something complex, but it is outside the hood rather than under it. (I guess they could have both.)
The "friending" system takes advantage of human's built-in reputation system. When I look at X's user page, it tells me that W, Y, and Z also follow/"friended" X. Then when I make my judgement of X, X leaches some amount of "free" "reputation points" from Z's "reputation". Of course, if W, Y, and Z all have bad reputations, that is reflected. Maybe W and Z have good reputations, but Y does not -- now I'm not sure what X's reputation should be like and need to look at X more closely.
Of course, this doesn't scale beyond a couple hundred people.
↑ comment by fubarobfusco · 2014-12-08T23:32:41.754Z · LW(p) · GW(p)
I don't know of one. I doubt that everyone wants the same sort of thing out of such a metric. Just off the top of my head, some possible conflicts:
- Is a post good because it attracts a lot of responses? Then a flamebait post that riles people into an unproductive squabble is a good post.
- Is a post good because it leads to increased readership? Then spamming other forums to promote a post makes it a better post, and posting porn (or something else irrelevant that attracts attention) is really very good.
- Is a post good because a lot of users upvote it? Then people who create sock-puppet accounts to upvote themselves are better posters; as are people who recruit their friends to mass-upvote their posts.
- Is a post good because the moderator approves of it? Then as the forum becomes more popular, if the moderator has no additional time to review posts, a diminishing fraction of posts are good.
The old wiki-oid site Everything2 explicitly assigns "levels" to users, based on how popular their posts are. Users who have proven themselves have the ability to signal-boost posts they like with a super-upvote.
It seems to me that something analogous to PageRank would be an interesting approach: the estimated quality of a post is specifically an estimate of how likely a high-quality forum member is to appreciate that post. Long-term high-quality posters' upvotes should probably count for a lot more than newcomers' votes. And moderators or other central, core-team users should probably be able to manually adjust a poster's quality score to compensate for things like a formerly-good poster going off the deep end, the revelation that someone is a troll or saboteur, or (in the positive direction) someone of known-good offline reputation joining the forum.
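A rough sketch of that idea with an invented endorsement graph: each user's reputation is recomputed from the upvotes they receive, weighted by the current reputation of the voters, so established high-reputation voters count for more. This illustrates the PageRank flavour only; it is not a description of any existing forum's algorithm:

```python
# Power-iteration sketch of reputation weighted by the voters' own reputation.
# The graph and damping constant are invented; a real system would operate on
# posts and votes and handle edge cases (no outgoing votes, normalisation).
upvotes = {                    # voter -> users whose posts they upvoted
    "alice": ["bob", "carol"],
    "bob":   ["carol"],
    "carol": ["alice"],
    "dave":  ["carol"],        # newcomer: casts votes, receives none
}
users = {u for voter, targets in upvotes.items() for u in [voter, *targets]}
rep = {u: 1.0 for u in users}
damping = 0.85

for _ in range(50):            # iterate until roughly stable
    new = {u: 1.0 - damping for u in users}
    for voter, targets in upvotes.items():
        share = damping * rep[voter] / len(targets)
        for target in targets:
            new[target] += share
    rep = new

for user, score in sorted(rep.items(), key=lambda kv: -kv[1]):
    print(f"{user}: {score:.2f}")
```

Note that dave's vote for carol counts for much less than alice's or bob's, since dave has accumulated no reputation of his own.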
comment by CBHacking · 2014-12-08T18:24:50.609Z · LW(p) · GW(p)
Can anybody give me a good description of the term "metaphysical" or "metaphysics" in a way that is likely to stick in my head and be applicable to future contemplations and conversations? I have tried to read a few definitions and descriptions, but I've never been able to really grok any of them, and even when I thought I had a working definition it slipped out of my head when I tried to use it later. Right now its default function in my brain is, when uttered, to raise a flag that signifies "I can't tell if this person is speaking at a level significantly above my comprehension or is just spouting bullshit, but either way I'm not likely to make sense of what they're saying", and therefore it tends to just kind of kill the mental process that was trying to follow what somebody was saying to me / what I was reading.
Given how often it comes up, and often from people I respect, I'm pretty sure that's not the correct behavior. Figured it's worth asking here. In case it wasn't obvious, I have virtually no background in philosophy (though I've been looking to change that).
Replies from: Anatoly_Vorobey, gjm, TheOtherDave↑ comment by Anatoly_Vorobey · 2014-12-08T18:39:43.423Z · LW(p) · GW(p)
Metaphysics: what's out there? Epistemology: how do I learn about it? Ethics: what should I do with it?
Basically, think of any questions that are of the form "what's there in the world", "what is the world made of", and now take away actual science. What's left is metaphysics. "Is the world real or a figment of my imagination?", "is there such a thing as a soul?", "is there such a thing as the color blue, as opposed to objects that are blue or not blue?", "is there life after death?", "are there higher beings?", "can infinity exist?", etc. etc.
Note that "metaphysical" also tends to be used as a feel-good word, meaning something like "nobly philosophical, concerned with questions of a higher nature than the everyday and the mundane".
Replies from: polymathwannabe, None, CBHacking↑ comment by polymathwannabe · 2014-12-08T18:40:38.929Z · LW(p) · GW(p)
Metaphysics: what's out there?
Isn't that ontology? What's the difference?
Replies from: Anatoly_Vorobey, ChristianKl↑ comment by Anatoly_Vorobey · 2014-12-08T18:54:11.867Z · LW(p) · GW(p)
"Ontology" is firmly dedicated to "exist or doesn't exist". Metaphysics is more broadly "what's the world like?" and includes ontology as a central subfield.
Whether there is free will is a metaphysical question, but not, I think, an ontological one (at least not necessarily). "Free will" is not a thing or a category or a property, it's a claim that in some broad aspects the world is like this and not like that.
Whether such things as desires or intentions exist or are made-up fictions is an ontological question.
Replies from: Gvaerg↑ comment by ChristianKl · 2014-12-08T18:45:31.609Z · LW(p) · GW(p)
Ontology is a subdiscipline of metaphysics.
Is the many-worlds hypothesis true? That might be a metaphysical question that is not directly an ontological one.
↑ comment by [deleted] · 2014-12-08T23:23:28.844Z · LW(p) · GW(p)
A confusion of mine: How is epistemology a separate thing? Or is that just a flag for "we're going to go meta-level", applied to some particular topic?
E.g. I read a bit of Kant about experience, which I suppose is metaphysics (right?) but it seems like if he's making any positive claim, the debate about the claim is going to be about the arguments for the claim, which is settled via epistemology?
Replies from: Anatoly_Vorobey↑ comment by Anatoly_Vorobey · 2014-12-09T07:47:35.272Z · LW(p) · GW(p)
Hmm, I would disagree. If you have a metaphysical claim, then arguments for or against this claim are not normally epistemological; they're just arguments.
Think of epistemology as "being meta about knowledge, all the time, and nothing else".
What does it mean to know something? How can we know something? What's the difference between "knowing" a definition and "knowing" a theorem? Are there statements such that to know them true, you need no input from the outside world at all? (Kant's analytic vs synthetic distinction). Is 2+2=4 one such? If you know something is true, but it turns out later it was false, did you actually "know" it? (many millions of words have been written on this question alone).
Now, take some metaphysical claim, and let's take an especially grand one, say "God is infinite and omnipresent" or something. You could argue for or against that claim without ever going into epistemology. You could maybe argue that the idea of God as absolute perfection more or less requires Him to be present everywhere, in the smallest atom and the remotest star, at all times because otherwise it would be short of perfection, or something like this. Or you could say that if God is present everywhere, that's the same as if He was present nowhere, because presence manifests by the difference between presence and absence.
But of course if you are a modern person and especially one inclined to scientific thinking, you would likely respond to all this "Hey, what does it even mean to say all this or for me to argue this? How would I know if God is omnipresent or not omnipresent, what would change in the world for me to perceive it? Without some sort of epistemological underpinning to this claim, what's the difference between it and a string of empty words?"
And then you would be proceeding in the tradition started by Descartes, who arguably moved the center of philosophical thinking from metaphysics to epistemology in what's called the "epistemological turn", later boosted in the 20th century by the "linguistic turn" (attributed among others to Wittgenstein).
Metaphysics: X, amirite? Epistemological turn: What does it even mean to know X? Linguistic turn: What does it even mean to say X?
↑ comment by CBHacking · 2014-12-08T21:54:15.306Z · LW(p) · GW(p)
Thanks. That's still not even a little intuitive to me, but it's a Monday and I had to be up absurdly early, so if it makes any sense to me right now (and it does), I have hope that I'll be able to internalize it even if I always need to think about it a bit. We'll see, probably no sooner than tomorrow though (sleeeeeeeeeep...).
I suspect that part of my problem is that I keep trying to decompose "metaphysics" into "physics about/describing/in the area of physics" and my brain helpfully points out that not only is it questionable whether that makes any sense to begin with, it almost never makes any sense whatsoever in context. If I just need to install a linguistic override for that word, I can do it, but I want to know what the override is supposed to be before I go to the effort.
The feel-good-word meaning seems likely to be a close relative of the flag-statement-as-bullshit meaning. That feels like a mental trap, though. The problem is, at least half the "concrete" examples that I've seen in this thread also seem likely to have little to no utility (certainly not enough to justify thinking about it for any length of time). Epistemology and ethics have obvious value, but it seems metaphysics comes up all the time in philosophical discussion too.
↑ comment by gjm · 2014-12-08T19:56:56.884Z · LW(p) · GW(p)
This is in no way an answer to your actual question (Anatoly's is good) but it might amuse you.
"Meta" in Greek means something like "after" (but also "beside", "among", and various other things). So there is a
Common misapprehension: metaphysics is so called because it goes beyond physics -- it's more abstract, more subtle, more elevated, more fundamental, etc.
This turns out not to be quite where the word comes from, so there is a
Common response": actually, it's all because Aristotle wrote a book called "Physics" and another, for which he left no title, that was commonly shelved after the "Physics" -- meta ta Phusika* -- and was commonly called the "Metaphysics". And the topics treated in that book came to be called by that name. So the "meta" in the name really has nothing at all to do with the relationship between the subjects.
But actually it's a bit more complicated than that; here's the
Truth (so far as I understand it): indeed Aristotle wrote those books, and indeed the "Metaphysics" is concerned with, well, metaphysics, and indeed the "Metaphysics" is called that because it comes "after the Physics". But the earliest sources we have suggest that the reason why the Metaphysics came after the Physics is that Aristotle thought it was important for physics to be taught first. So actually it's not far off to say that metaphysics is so called because it goes beyond physics, at least in the sense of being a more advanced topic (in Aristotle's time).
↑ comment by TheOtherDave · 2014-12-08T21:01:07.895Z · LW(p) · GW(p)
In my experience people use "metaphysics" to refer to philosophical exploration of what kinds of things exist and what the nature, behavior, etc. of those things is.
This is usually treated as distinct from scientific/experimental exploration of what kinds of things exist and what the nature, behavior, etc. of those things is, although those lines are blurry. So, for example, when Yudkowsky cites Barbour discussing the configuration spaces underlying experienced reality, there will be some disagreement/confusion about whether this is a conversation about physics or metaphysics, and it's not clear that there's a fact of the matter.
This is also usually treated as distinct from exploration of objects and experiences that present themselves to our senses and our intuitive reasoning... e.g. shoes and ducks and chocolate cake. As a consequence, describing a thought or worldview or other cognitive act as "metaphysical" can become a status maneuver... a way of distinguishing it from object-level cognition in an implied context where more object-level (aka "superficial") cognition is seen as less sophisticated or deep or otherwise less valuable.
Some people also use "metaphysical" to refer to a class of events also sometimes referred to as "mystical," "occult," "supernatural," etc. Sometimes this usage is consistent with the above -- that is, sometimes people are articulating a model of the world in which those events can best be understood by understanding the reality which underlies our experience of the world.
Other times it's at best metaphorical, or just outright bullshit.
As far as correct behavior goes... asking people to taboo "metaphysical" is often helpful.
Replies from: CBHacking↑ comment by CBHacking · 2014-12-08T22:10:13.031Z · LW(p) · GW(p)
The rationalist taboo is one of the tools I have most enjoyed learning and found most useful in face-to-face conversations since discovering the Sequences. Unfortunately, it's not practical when dealing with mass-broadcast or time-shifted material, which makes it of limited use in dealing with the scenarios where I most frequently encounter the concept of metaphysics.
I tend to (over)react poorly to status maneuvers, which is probably part of why I've had a hard time with the word; it gets used in an information-free way sufficiently often that I'm tempted to just always shelve it there, and that in turn leads me to discount or even ignore the entire thought which contained it. This is a bias I'm actively trying to brainhack away, and I'm now tempted to go find some of my philosophically-inclined social circle and see if I can avoid that automatic reaction at least where this specific word is concerned (and then taboo it anyhow, for the sake of communication being informative).
I still haven't fully internalized the concept, but I'm getting closer. "The kinds of things that exist, and their natures" is something I can see a use for, and hopefully I can make it stick in my head this time.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2014-12-09T19:24:51.969Z · LW(p) · GW(p)
it gets used in an information-free way sufficiently often that I'm tempted to just always shelve it there, and that in turn leads me to discount or even ignore the entire thought which contained it.
This seems like a broader concern, and one worth addressing. People drop content-free words into their speech/writing all the time, either as filler or as "leftovers" from precursor sentences.
What happens if you treat it as an empty modifier, like "really" or "totally"?
Replies from: CBHacking↑ comment by CBHacking · 2014-12-09T23:20:57.278Z · LW(p) · GW(p)
Leaving aside the fact that, by default, I don't consider "totally" to be content-free (I'm aware a lot of people use it that way, but I still often need to consciously discard the word when I encounter it), that still seems like at best it only works when used as a modifier. It doesn't help if somebody is actually talking about metaphysics. I'll keep it in mind as a backup option, though; "if I can't process that sentence when I include all the words they said, and one of them is 'metaphysical', what happens if I drop that word?"
comment by artemium · 2014-12-17T06:59:56.400Z · LW(p) · GW(p)
Ok, I have one meta-level super-stupid question. Would it be possible to improve some aspects of the LessWrong webpage? Like making it more readable for mobile devices? Every time I read LW in the tram on the way to work, I go insane trying to hit the super-small links on the website. As I work in web development/UI design, I would volunteer to work on this. I think in general the LW website is a bit outdated in terms of both design and functionality, but I presume that this is not considered a priority. However, better readability on mobile screens would be a positive contribution to its purpose.
comment by torekp · 2014-12-10T00:11:03.252Z · LW(p) · GW(p)
True, false, or neither?: It is currently an open/controversial/speculative question in physics whether time is discretized.
Replies from: polymathwannabe, Grothor↑ comment by polymathwannabe · 2014-12-10T01:37:28.346Z · LW(p) · GW(p)
The Wikipedia article on Planck time says:
Theoretically, this is the smallest time measurement that will ever be possible, roughly 10^−43 seconds. Within the framework of the laws of physics as we understand them today, for times less than one Planck time apart, we can neither measure nor detect any change.
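For reference, the Planck time quoted there is the standard combination of fundamental constants (this is the textbook definition, not something stated in the thread):

```latex
t_P = \sqrt{\frac{\hbar G}{c^{5}}} \approx 5.39 \times 10^{-44}\ \mathrm{s}
```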
However, the article on Chronon says:
The Planck time is a theoretical lower-bound on the length of time that could exist between two connected events, but it is not a quantization of time itself since there is no requirement that the time between two events be separated by a discrete number of Planck times.
Replies from: iamthelowercase
↑ comment by iamthelowercase · 2015-06-12T08:20:30.393Z · LW(p) · GW(p)
So, if I understand this rightly-
Any two events must take place at least one Planck time apart. But so long as they do, the separation can be any number of Planck times -- even, say, pi. Right?
↑ comment by Richard Korzekwa (Grothor) · 2014-12-10T05:08:48.069Z · LW(p) · GW(p)
Many things in our best models of physics are discrete, but as far as I know, our coordinates (time, space, or four-dimensional space-time coordinates) are never discrete. Even something like quantum field theory, which treats things in a non-intuitively discrete way, does not do this. For example, we might view the process of an electron scattering off another electron as an exchange of many discrete photons between the two electrons, but it is all written in terms of integrals or derivatives, rather than differences or sums.
comment by Toggle · 2014-12-09T22:37:17.541Z · LW(p) · GW(p)
Maneki Neko is a short story about an AI that manages a kind of gift economy. It's an enjoyable read.
I've been curious about this 'class' of systems for a while now, but I don't think I know enough about economics to ask the questions well. For example: the story supplies a superintelligence to function as a competent central manager, but could such a gift network theoretically exist without being centrally managed (and without trivially reducing to modern forms of currency exchange)? Could a variant of Watson be used to automate the distribution of capital in the same way that it makes a medical diagnosis? And so on.
In particular, I'm looking for the intellectual tools that would be used to ask these questions in a more rigorous way; it would be great if I had better ways of figuring out which of these questions are obviously stupid and which are not. Specific disciplines in economics or game theory, perhaps. Things along the lines of LW's Mechanism Design sequence would be fantastic. Can anyone give me a few pointers?
Replies from: badger, Lumifer, ChristianKl↑ comment by badger · 2014-12-10T19:35:24.840Z · LW(p) · GW(p)
My intuition is every good allocation system will use prices somewhere, whether the users see them or not. The main perk of the story's economy is getting things you need without having to explicitly decide to buy them (i.e. the down-on-his-luck guy unexpectedly being gifted his favorite coffee), and that could be implemented through individual AI agents rather than a central AI.
Fleshing out how this might play out, if I'm feeling sick, my AI agent notices and broadcasts a bid for hot soup. The agents of people nearby respond with offers. The lowest offer might come from someone already in a soup shop who lives next door to me since they'll hardly have to go out of their way. Their agent would notify them to buy something extra and deliver it to me. Once the task is fulfilled, my agent would send the agreed-upon payment. As long as the agents are well-calibrated to our needs and costs, it'd feel like a great gift even if there are auctions and payments behind the scenes.
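A toy sketch of that behind-the-scenes flow; all names, costs, and the data layout are invented for illustration:

```python
# One agent broadcasts a need, other agents answer with priced offers, the
# cheapest offer wins, and payment settles silently between the agents.
from dataclasses import dataclass

@dataclass
class Offer:
    helper: str
    cost: float   # the helper's inconvenience, as priced by their own agent

def run_request(need, offers):
    best = min(offers, key=lambda o: o.cost)
    print(f"Request '{need}' filled by {best.helper}; payment {best.cost:.2f} settles behind the scenes")
    return best

offers = [
    Offer("neighbour already in the soup shop", 1.50),
    Offer("friend across town", 6.00),
    Offer("courier service", 4.25),
]
run_request("hot soup", offers)
```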
For pointers, general equilibrium theory studies how to allocate all the goods in an economy. Depending on how you squint at the model, it could be studying centralized or decentralized markets based on money or pure exchange. A Toolbox for Economic Design is a fairly accessible textbook on mechanism design that covers lots of allocation topics.
Replies from: Toggle↑ comment by Toggle · 2014-12-10T20:23:06.286Z · LW(p) · GW(p)
This looks very useful. Thanks!
Another one of those interesting questions is whether the pricing system must be equivalent to currency exchange. To what extent are the traditional modes of transaction a legacy of the limitations behind physical coinage, and what degrees of freedom are offered by ubiquitous computation and connectivity? Etc. (I have a lot of questions.)
Replies from: badger↑ comment by badger · 2014-12-10T21:09:08.326Z · LW(p) · GW(p)
Results like the Second Welfare Theorem (every efficient allocation can be implemented via competitive equilibrium after some lump-sum transfers) suggest it must be equivalent in theory.
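For reference, a standard textbook statement of the theorem being cited (the exact regularity conditions vary by source):

```latex
\textbf{Second Welfare Theorem (sketch).} If $(x^{*}, y^{*})$ is a
Pareto-efficient allocation and preferences are convex, continuous, and
locally non-satiated (with convex production sets), then there exist a price
vector $p \neq 0$ and lump-sum transfers $(T_{1}, \dots, T_{n})$ such that
$(x^{*}, y^{*}, p)$ is a competitive equilibrium with transfers.
```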
Eric Budish has done some interesting work changing the course allocation system at Wharton to use general equilibrium theory behind the scenes. In the previous system, courses were allocated via a fake money auction where students had to actually make bids. In the new system, students submit preferences and the allocation is computed as the equilibrium starting from "equal incomes".
What benefits do you think a different system might provide, or what problems does monetary exchange have that you're trying to avoid? Extra computation and connectivity should just open opportunities for new markets and dynamic pricing, rather than suggest we need something new.
↑ comment by ChristianKl · 2014-12-10T15:22:30.637Z · LW(p) · GW(p)
Could a variant of Watson be used to automate the distribution of capital in the same way that it makes a medical diagnosis?
The stock market has a lot of capable AIs that manage capital allocation.
Replies from: Toggle↑ comment by Toggle · 2014-12-10T19:15:30.976Z · LW(p) · GW(p)
Fair point. It's my understanding that this is limited to rapid day trades, with implications for the price of a stock but not cash-on-hand for the actual company. I was imagining something more like a helper algorithm for venture capital or angel investors, comparable to the PGMs underpinning the insurance industry.
comment by Dahlen · 2014-12-08T21:20:39.168Z · LW(p) · GW(p)
Is it possible even in principle to perform a "consciousness transfer" from one human body to another? On the same principle as mind uploading, only the mind ends up in another biological body rather than a computer. Can you transfer "software" from one brain to another in a purely informational way, while preserving the anatomical integrity of the second organism? If so, would the recipient organism come from a fully alive and functional human who would be basically killed for this purpose? Or bred for this purpose? Or would it require a complete brain transplant? (If so, how would neural structures found in the second body heal & connect with the transplanted brain so that a functional central nervous system results?) Wouldn't the person whose consciousness is being transferred experience some sort of personality change due to "inhabiting" a structurally different brain or body?
Is this whole hypothesis just an artifact of residual introjected mind-body dualism, incompatible with modern science? Does the science world even know enough about consciousness and the brain to be able to answer this question?
I'm asking this because ever since I found out about ems and mind uploading, having minds moved to bodies rather than computers seemed to me a more appealing hypothetical solution to the problem of death/mortality. Unfortunately, I lack the necessary background knowledge to think coherently about this idea, so I figured there are many people on LW who don't, and could explain to me whether this whole idea makes sense.
Replies from: CBHacking, Alsadius, hyporational, ChristianKl, mwengler, Eniac↑ comment by CBHacking · 2014-12-08T22:58:14.299Z · LW(p) · GW(p)
I don't think anybody has hard evidence of answers to any of those questions yet (though I'd be fascinated to learn otherwise) but I can offer some conjectures:
Possible in principle? Yes. I see no evidence that sentience and identity are anything other than information stored in the nervous system, and in theory the cognitive portion of a nervous system is an organ and could be transplanted like any other.
Preserving anatomical integrity? Not with anything like current science. We can take non-intrusive brain scans, but they're pretty low-resolution and (so far as I know) strictly read-only. Even simply stimulating parts of the brain isn't enough to basically re-write it in such a way that it becomes another person's brain.
Need to kill donors? To the best of my knowledge, it's theoretically possible to basically mature a human body including a potentially-functional brain, while keeping that brain in a vegetative state the entire time. Of course, that's still a potential human - the vegetativeness needs to be reversible for this to be useful - so the ethics are still highly questionable. It's probably possible to do it without a full brain at all, which seems less evil if you can somehow do it by some mechanism other than what amounts to a pre-natal full lobotomy, but that would require the physical brain transplant option for transference.
Nerves connecting and healing? Nerves can repair themselves, though it's usually extremely slow. Stem cell therapies have potential here, though. Connecting the brain to the rest of the body involves a lot of nerves, but they're pretty much all sensory and motor nerves so far as I know; the brain itself is fairly self-contained.
Personality change? That depends on how different the new body is from the old, I would guess. The obviously-preferable body is a clone, for many reasons including avoiding the need to avoid immune system rejection of the new brain. Personality is always going to be somewhat externally-driven, so I wouldn't expect somebody transferred from a 90-year-old body to a 20-year-old one to have the same personality regardless of any other information because the body will just be younger. On the other hand, if you use a clone body that's the same age as the transferee, it wouldn't shock me if the personality didn't actually change significantly; it should basically feel like going under for surgery and then coming out again with nothing changed.
Now, mind you, I'm no brain surgeon (or medical professional of any sort), nor have I studied any significant amount of psychology. Nor am I a philosopher (see my question above). However, I don't really see how the mind could be anything except a characteristic of the body. Altering (intentionally or otherwise) the part of the body responsible for thought alters the mind. Our current attempted maps of the mind don't come close to fully representing the territory, but I firmly believe it is mappable. Whether an existing one is re-mappable I can't say, but the idea of transplanting a brain has been explored in science fiction for decades, and in theory I see no logical reason why it couldn't work.
Replies from: Gunnar_Zarncke, Dahlen↑ comment by Gunnar_Zarncke · 2014-12-09T21:46:58.388Z · LW(p) · GW(p)
To the best of my knowledge, it's theoretically possible to basically mature a human body including a potentially-functional brain, while keeping that brain in a vegetative state the entire time.
I don't think this is currently possible. The body just wouldn't work. A large part of the 'wiring' during infancy and childhood is connecting body parts and functions with higher and higher level concepts. Think about toilet training. You aren't even aware of how it works, but it nonetheless somehow connects large-scale planning (how urgent is it, when and where are toilets) to the actual control of the organs. Considering how different minds (including the connection to the body) are, I think the minimum requirement (short of singularity-level interventions) is an identical twin.
That said, I think the existing techniques for transferring motion from one brain to another, combined with advanced hypnosis and drugs, could conceivably be developed to a point where it is possible to transfer noticeable parts of your identity over to another body - at least over an extended period of time during which the new brain 'learns' to be you. Transferring memory as well is comparably easy. Whether the result can be called 'you', or is sufficiently like you, is another question.
↑ comment by Dahlen · 2014-12-12T01:14:05.391Z · LW(p) · GW(p)
Need to kill donors? To the best of my knowledge, it's theoretically possible to basically mature a human body including a potentially-functional brain, while keeping that brain in a vegetative state the entire time. Of course, that's still a potential human - the vegetativeness needs to be reversible for this to be useful - so the ethics are still highly questionable.
That's how I pictured it, yes. At this point I wouldn't concern myself with the ethics of it, because, if our technology advances this much, then simply the fact that humanity can perform such a feat is an extremely positive thing, and probably the end of death as we know it. What worries me more is that this wouldn't result in a functional mature individual. For instance: in order to develop the muscular system, the body's skeletal muscles would have to experience some sort of stress, i.e. be used. If you grow the organism in a jar from birth to consciousness transfer (as is probably most ethical), it wouldn't have moved at all its entire life up to that point, and would therefore have extremely weak musculature. What to do in the meantime, electrically stimulate the muscles? Maybe, but it probably wouldn't have results comparable to natural usage. Besides, there are probably many other body subsystems that would suffer similarly without much you could do about it. See Gunnar Zarncke's comment below.
On the other hand, if you use a clone body that's the same age as the transferee, it wouldn't shock me if the personality didn't actually change significantly; it should basically feel like going under for surgery and then coming out again with nothing changed.
Yes, but I imagine most uses to be related to rejuvenation. It would mean that the genetic info required for cloning would have to be gathered basically at birth (and the cloning process begun shortly thereafter), and there would still be a 9-month age difference. There's little point in growing a backup clone for an organism so soon after birth. An age difference of 20 years between person and clone seems more reasonable.
↑ comment by Alsadius · 2014-12-09T00:09:43.465Z · LW(p) · GW(p)
In order to provide a definite answer to this question, we'd need to know how the brain produces consciousness and personality, as well as the exact mechanism of the upload (e.g., can it rewire synapses?).
Replies from: Tem42↑ comment by Tem42 · 2015-07-04T04:02:17.963Z · LW(p) · GW(p)
Not exactly true; we probably don't need to know how consciousness arises. We would certainly have to rewire synapses to match the original brain, and it is likely that if we exactly replicate brain structure neuron by neuron, synapse by synapse, we would still not know where consciousness lies, but would have a conscious duplicate of the original.
Alternatively you could hypothesize a soul, but that seems like worry for worry's sake.
The flip side to this is that there is no measurable difference between 'someone who is you and feels conscious' and 'someone who is exactly like you in every way but does not feel conscious (but will continue to claim that e does)'. Even if you identified a mental state on a brain scan that you felt certain that was causing the experience of consciousness, in order to approximate a proof of this you would have to be able to measure a group of subjects that are nearly identical except not experiencing consciousness, a group that has not yet been found in nature.
↑ comment by hyporational · 2014-12-15T02:33:26.977Z · LW(p) · GW(p)
Can you transfer "software" from one brain to another in a purely informational way, while preserving the anatomical integrity of the second organism?
This can already be done via the senses. This also transfers consciousness of the content that is being transferred. What would consciousness without content look like?
↑ comment by ChristianKl · 2014-12-08T22:52:14.290Z · LW(p) · GW(p)
There is no such thing as "purely informational" when it comes to brains.
I'm asking this because ever since I found out about ems and mind uploading, having minds moved to bodies rather than computers seemed to me a more appealing hypothetical solution to the problem of death/mortality.
If you want to focus on that problem it's likely easier to simply fix up whatever is wrong in the body you are starting with than doing complex uploading.
Replies from: Dahlen↑ comment by Dahlen · 2014-12-12T01:24:22.681Z · LW(p) · GW(p)
There is no such thing as "purely informational" when it comes to brains.
It's good to know, but can you elaborate more on this in the context of the grandparent comment? Perhaps with an analogy to computers.
If you want to focus on that problem it's likely easier to simply fix up whatever is wrong in the body you are starting with than doing complex uploading.
It occurred to me too, but I'm not sure this is the definite conclusion. Fully healing an aging organism suffering from at least one severe disease, while more reasonably closer to current medical technology, wouldn't leave the patient in as good a state as simply moving to a 20-year-old body.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-12-12T12:08:55.233Z · LW(p) · GW(p)
It's good to know, but can you elaborate more on this in the context of the grandparent comment? Perhaps with an analogy to computers.
Brains are not computers.
Fully healing an aging organism suffering from at least one severe disease, while more reasonably closer to current medical technology, wouldn't leave the patient in as good a state as simply moving to a 20-year-old body.
Of course you wouldn't only heal one severe disease. You would also lengthen telomeres and do all sorts of other things to reduce the effects of aging.
↑ comment by mwengler · 2014-12-11T16:35:24.218Z · LW(p) · GW(p)
Suppose all the memories in one person were wiped and replaced with your memories. I believe the new body would claim to be you. It would introspect as you might now, and find your memories as its own, and say "I am Dahlen in a new body."
But would it be you? If the copying had been non-destructive, then Dahlen in the old body still exists and would "know" on meeting Dahlen in the new body that Dahlen in the new body was really someone else who just got all Dahlen's memories up to that point.
Meanwhile, Dahlen in the new body would have capabilities, moods, reactions, which would depend on the substrate more than the memories. The functional parts of the brain, the wiring-other-than-memories as it were, would be different in the new body. Dahlen in the new body would probably behave in ways that were similar to how the old body with its old memories behaved. It would still think it was Dahlen, but as Dahlen in the old body might think, that would just be its opinion and obviously it is mistaken.
As to uploading, it is more than the brain that needs to be emulated. We have hormonal systems that mediate fear and joy and probably a broad range of other feelings. I have a sense of my body that I am in some sense constantly aware of which would have to be simulated and would probably be different in an em of me than it is in me, just as it would be different if my memories were put in another body.
Would anybody other than Dahlen in the old body have a reason to doubt that Dahlen in the new body was really Dahlen? I don't think so, and Dahlen in the new body especially would probably be pretty sure it was Dahlen, even if it claimed to rationally understand how it might not be. It would know it was somebody, and wouldn't be able to come up with any other compelling idea for who it was other than Dahlen.
Replies from: Dahlen↑ comment by Dahlen · 2014-12-12T00:53:39.074Z · LW(p) · GW(p)
I understand all this. And it's precisely the sort of personality preservation that I find largely useless and would like to avoid. I'm not talking about copying memories from one brain to another; I'm talking about preserving the sense of self in such a way that the person undergoing this procedure would have the following subjective experience: be anesthetized (probably), undergo surgery (because I picture it as some form of surgery), "wake up in new body". (The old body would likely get buried, because the whole purpose of performing such a transfer would be to save dying -- very old or terminally ill -- people's lives.) There would be only one extant copy of that person's memories, and yet they wouldn't "die"; there would be the same sort of continuity of self experienced by people before and after going to sleep. The one who would "die" is technically the person in the body which constitutes the recipient of the transfer (who may have been grown just for this purpose and kept unconscious its whole life). That's what I mean. Think of it as more or less what happens to the main character in the movie Avatar.
I realize the whole thing doesn't sound very scientific, but have I managed to get my point across?
As to uploading, it is more than the brain that needs to be emulated. We have hormonal systems that mediate fear and joy and probably a broad range of other feelings. I have a sense of my body that I am in some sense constantly aware of which would have to be simulated and would probably be different in an em of me than it is in me, just as it would be different if my memories were put in another body.
Yes, but... Everybody's physiological basis for feelings is more or less the same; granted, there are structural differences that cause variation in innate personality traits and other mental functions, and a different brain might employ the body's neurotransmitter reserve in different ways (I think), but the whole system is sufficiently similar from human to human that we can relate to each other's experiences. There would be differences, and the differences would cause the person to behave differently in the "new body" than it did in the "old body", but I don't think one would have to move the glands or limbic system or what-have-you in addition to just the brain.
Replies from: mwengler↑ comment by mwengler · 2014-12-12T14:28:11.112Z · LW(p) · GW(p)
I understand what you are going for. And I present the following problem with it.
Dahlen A is rendered unconscious. While A is unconscious, the memories are completely copied to unconscious body B. Dahlen B is woken up. Your scenario is fulfilled: Dahlen B has entirely the memories of being put to sleep in body A and waking up in body B. Dahlen B examines his memories and sees no gap in his existence other than the "normal" one of the anesthesia used to render Dahlen A unconscious. Your desires for a transfer scenario are fulfilled!
Scenario 1: Dahlen A is killed while unconscious and body disposed of. Nothing ever interferes with the perception of Dahlen A and everyone around that there has been a transfer of consciousness from Dahlen A to Dahlen B.
Scenario 2: A few days later Dahlen A is woken up. Dahlen A of course has the sense of continuous consciousness just as he would if he had undergone a gall bladder surgery. Dahlen A and Dahlen B are brought together with other friends of Dahlen. Dahlen A is introspectively sure that he is the "real" Dahlen and no transfer ever took place. Dahlen B is introspectively sure that he is the "real" Dahlen and that a transfer did take place.
Your scenario assumes that there can be only one Dahlen. That the essence of Dahlen is a unique thing in the universe, and that it cannot be copied so that there are two. I think this assumption is false. I think if you make a "good enough" copy of Dahlen that you will have two essences of Dahlen, and that at no point does a single essence of Dahlen exist, and move from one body to another.
Further, if I am right and the essence of Dahlen can be copied and multiplied, with each possessor of a copy having the complete introspective property of seeing that it is in fact Dahlen, then it is unscientific to think that, in the absence of copying, your day-to-day existence is anything more than this. Each day you wake up, each moment you experience, your "continuity" is something you experience subjectively as a current state, due to your examination of your memories. More importantly, your continuity is NOT something "real": neither other observers, nor you and your copies introspecting from within the brains of Dahlen A, B, C, etc., can ever distinguish "real" continuity from the mere sense of continuity which follows from a good-quality memory copy.
The idea that there is a single essence of Dahlen which normally stays in one body, but which can be moved from one body to another, or into a machine, is, I believe, a false assumption, and one falsified by these thought experiments. As much as you and I might like to believe there is an essential continuity which we preserve as long as we stay alive, a rational examination of how we experience that continuity shows that it is not a real continuity: copies could be created which would experience that continuity in as real a sense as the original, whether or not the original is kept around.
Replies from: Jiro↑ comment by Jiro · 2014-12-12T16:57:27.624Z · LW(p) · GW(p)
By this reasoning, isn't it okay to kill someone (or at least to kill them in their sleep)? After all, if everyone's life is a constant sequence of different entities, what you're killing would have ceased existing anyway. You're just preventing a new entity from coming into existence. But preventing a new entity from coming into existence isn't murder, even if the new entity resembles a previous one.
Replies from: mwengler↑ comment by mwengler · 2014-12-12T17:47:37.779Z · LW(p) · GW(p)
By this reasoning, isn't it okay to kill someone (or at least to kill them in their sleep)?
You tell me.
If you don't like the moral implications of a certain hypothesis, this should have precisely zero effect on your estimation of the probability that this hypothesis is correct. The entire history of the growing acceptance of evolution as a "true" theory follows precisely this course. Many people HATED the implication that man is just another animal. That a sentiment for morality evolved because groups in which that sentiment existed were able to out-compete groups in which that sentiment was weaker. That the statue of David or the theory of General Relativity, or the love you feel for your mother or your dog arise as a consequence, ultimately, of mindless random variations producing populations from which some do better than others and pass down the variations they have to the next generation.
So if the implications of the continuity of consciousness are morally distasteful to you, do not make the mistake of thinking that makes them any less likely to be true. A study of science and scientific progress should cure you of this very human tendency.
Replies from: Jiro↑ comment by Jiro · 2014-12-13T00:20:55.322Z · LW(p) · GW(p)
If your reasoning implies ~X, then X implies that your reasoning is wrong. And if X implies that your reasoning is wrong, then evidence for X is evidence against your reasoning.
In other words, you have no idea what you are talking about. The fact that something has "distasteful implications" (that is, that it implies ~X, and there is evidence for X) does mean it is less likely to be true.
Replies from: mwengler, mwengler, Tem42↑ comment by mwengler · 2014-12-13T08:44:55.305Z · LW(p) · GW(p)
Historically, the hypothesis that the earth orbited the sun had the distasteful implication that we were not the center of the universe. Galileo was prosecuted for this belief and recanted it under threat. I am surprised that you think the distasteful implications of this belief were evidence that the earth did not in fact orbit the sun.
Historically, the hypothesis that humans evolved from non-human animals had the distasteful implication that humans had not been created by god in his image and provided with immortal souls. I am surprised that you consider this distaste to be evidence that evolution is an incorrect theory of the origin of species, including our own.
This is a rationality message board, devoted to, among other things, listing the common mistakes that humans make in trying to determine the truth. I would have bet dollars against donuts that rejecting the truth of a hypothesis because its implications were distasteful would have been an obvious candidate for that list, and I would have apparently lost.
Replies from: Jiro↑ comment by Jiro · 2014-12-13T16:51:41.525Z · LW(p) · GW(p)
If you had reason to believe that the Earth is the center of the universe, the fact that orbiting the sun contradicts that is evidence against the Earth orbiting the sun. It is related to proof by contradiction; if your premises lead you to a contradictory conclusion, then one of your premises is bad. And if one of your premises is something in which you are justified in having extremely high confidence, such as "there is such a thing as murder", it's probably the other premise that needs to be discarded.
I am surprised that you consider this distaste to be evidence that evolution is an incorrect theory of the origin of species
If you have reason to believe that humans have souls, and evolution implies that they don't, that is evidence against evolution. Of course, how good that is as evidence against evolution depends on how good your reason is to believe that humans have souls. In the case of souls, that isn't really very good.
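A minimal sketch in symbols of the point Jiro is making (an editorial formalization, not from the thread), writing R for "the reasoning is sound" and E for a piece of evidence:

```latex
% If the reasoning R entails not-X, then R and X cannot both hold:
\[
  P(R \wedge X) = 0
  \quad\Longrightarrow\quad
  P(R \mid E) \;=\; P(R \wedge \neg X \mid E) \;\le\; P(\neg X \mid E) \;=\; 1 - P(X \mid E)
\]
% for any evidence E. So whatever raises P(X | E) lowers the ceiling on P(R | E):
% evidence for X is evidence against the reasoning, and its weight depends on how
% well-supported X is (the "souls" caveat above).
```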
↑ comment by Tem42 · 2015-07-04T04:16:41.921Z · LW(p) · GW(p)
Evidence that killing is wrong is certainly possible, but your statement "I think that killing is wrong" is such weak evidence that it is fair for us to dismiss it. You may provide reasons why we should think killing is wrong, and maybe we will accept your reasons, but so far you have not given us anything worth considering.
I think that you are also equivocating on the word 'imply', suggesting that 'distasteful implications' means something like 'logical implications'.
↑ comment by Eniac · 2014-12-10T05:06:05.985Z · LW(p) · GW(p)
The task you describe, at least the part where no whole brain transplant is involved, can be divided into two parts: 1) extracting the essential information about your mind from your brain, and 2) implanting that same information back into another brain.
Either of these could be achieved in two radically different ways: a) psychologically, i.e. by interview or memoir writing on the extraction side and "brain-washing" on the implanting side, or b) technologically, i.e. by functional MRI, electro-encephalography, etc on the extraction side. It is hard for me to envision a technological implantation method.
Either way, it seems to me that once we understand the mind enough to do any of this, it will turn out the easiest to just do the extraction part and then simulate the mind on a computer, instead of implanting it into a new body. Eliminate the wetware, and gain the benefit of regular backups, copious copies, and Moore's law for increasing effectiveness. Also, this would be ethically much more tractable.
It seems to me this could also be the solution to the unfriendly AI problem. What if the AI are us? Then yielding the world to them would not be so much of a problem, suddenly.
Replies from: mwengler↑ comment by mwengler · 2014-12-11T16:27:16.676Z · LW(p) · GW(p)
psychologically, i.e. by interview or memoir writing on the extraction side and "brain-washing" on the implanting side,
I would expect recreating a mind from interviews and memoirs to be about as accurate as building a car based on interviews and memoirs written by someone who had driven cars. Which is to say, the part of our mind that talks and writes is not noted for its brilliant and detailed insight into how the vast majority of the mind works.
Replies from: Eniac↑ comment by Eniac · 2014-12-13T19:05:41.680Z · LW(p) · GW(p)
Good point.
I suppose it boils down to what you include when you say "mind". I think the part of our mind that talks and writes is not very different from the part that thinks. So, if you narrowly, but reasonably, define the "mind" as only the conscious, thinking part of our personality, it might not be so farfetched to think a reasonable reconstruction of it from writings is possible.
Thought and language are closely related. Ask yourself: How many of my thoughts could I put into language, given a good effort? My gut feeling is "most of them", but I could be wrong. The same goes for memories. If a memory can not be expressed, can it even be called a memory?
comment by timujin · 2014-12-10T07:46:04.511Z · LW(p) · GW(p)
In dietary and health articles they often speak about "processed food". What exactly is processed food and what is unprocessed food?
Replies from: Lumifer, polymathwannabe↑ comment by Lumifer · 2014-12-10T16:05:21.737Z · LW(p) · GW(p)
Definitions will vary depending on the purity obsession of the speaker :-) but as a rough guide, most things in cans, jars, boxes, bottles, and cartons will be processed. Things that are, more or less, just raw plants and animals (or parts of them) will be unprocessed.
There are boundary cases about which people argue -- e.g. is pasteurized milk a processed food? -- but for most things in a food store it's pretty clear what's what.
Replies from: timujin↑ comment by polymathwannabe · 2014-12-10T14:14:57.569Z · LW(p) · GW(p)
Anything that you could have picked from the plant yourself (a pear, a carrot, a berry) AND has not been sprinkled with preservatives/pesticides/shiny gloss is unprocessed. If it comes in a package and looks nothing like what nature gives (noodles, cookies, jell-o), it's been processed.
Raw milk also counts as unprocessed, but in the 21st century there's no excuse to be drinking raw milk.
Replies from: Lumifer, timujin↑ comment by Lumifer · 2014-12-10T16:05:55.672Z · LW(p) · GW(p)
in the 21st century there's no excuse to be drinking raw milk
That's debatable -- some people believe raw milk to be very beneficial.
Replies from: polymathwannabe↑ comment by polymathwannabe · 2014-12-10T16:21:00.115Z · LW(p) · GW(p)
Absolutely not worth the risk.
Replies from: AlexSchell, Lumifer↑ comment by AlexSchell · 2014-12-11T02:42:20.207Z · LW(p) · GW(p)
Do you have any sources that quantify the risk?
↑ comment by Lumifer · 2014-12-10T16:30:28.865Z · LW(p) · GW(p)
Oh, I'm sure the government wants you to believe raw milk is the devil :-)
In reality I think it depends, in particular on how good your immune system is. If you're immunocompromised, it's probably wise to avoid raw milk (as well as, say, raw lettuce in salads). On the other hand, if your immune system is capable, I've seen no data that raw milk presents an unacceptable risk -- of course how much risk is unacceptable varies by person.
Replies from: Tem42↑ comment by Tem42 · 2015-07-04T03:48:40.223Z · LW(p) · GW(p)
More relevant may be your supply chain. If you have given your cow all required shots and drink the milk within a day -- and without mixing it with the milk of dozens of other cows -- you are going to be a lot better off than if you stop off at a random roadside stand and buy a gallon of raw milk.
↑ comment by timujin · 2014-12-10T14:21:11.596Z · LW(p) · GW(p)
So, it doesn't make sense to talk about processed meats, if you can't pick them from plants?
If I roast my carrot, does it become processed?
Replies from: polymathwannabe↑ comment by polymathwannabe · 2014-12-10T14:37:17.700Z · LW(p) · GW(p)
I'm assuming you value your health and thus don't eat any raw meat, so all of it is going to be processed---if only in your own kitchen.
By the same standard, a roasted carrot is, technically speaking, "processed." However, what food geeks usually think of when they say "processed" involves a massive industrial plant where your food is filled with additives to compensate for all the vitamins it loses after being crushed and dehydrated. Too often it ends up with an inhuman amount of salt and/or sugar added to it, too.
comment by timujin · 2014-12-09T09:15:30.182Z · LW(p) · GW(p)
I have a constant impression that everyone around me is more competent than me at everything. Does it actually mean that I am, or is there some sort of strong psychological effect that can create that impression, even if it is not actually true? If there is, is it a problem you should see your therapist about?
Replies from: Toggle, Viliam_Bur, None, NancyLebovitz, IlyaShpitser, Elo, elharo, EphemeralNight, mwengler, LizzardWizzard, Lumifer, MathiasZaman↑ comment by Toggle · 2014-12-09T18:37:29.950Z · LW(p) · GW(p)
Reminds me of something Scott said once:
And when I tried to analyze my certainty that – even despite the whole multiple intelligences thing – I couldn't possibly be as good as them, it boiled down to something like this: they were talented at hard things, but I was only talented at easy things.
It took me about ten years to figure out the flaw in this argument, by the way.
Replies from: Gondolinian, Gunnar_Zarncke
↑ comment by Gondolinian · 2014-12-12T01:28:23.746Z · LW(p) · GW(p)
See also: The Illusion of Winning by Scott Adams (h/t Kaj_Sotala)
Let's say that you and I decide to play pool. We agree to play eight-ball, best of five games. Our perception is that what follows is a contest to see who will do something called winning.
But I don't see it that way. I always imagine the outcome of eight-ball to be predetermined, to about 95% certainty, based on who has practiced that specific skill the most over his lifetime. The remaining 5% is mostly luck, and playing a best of five series eliminates most of the luck too.
I've spent a ridiculous number of hours playing pool, mostly as a kid. I'm not proud of that fact. Almost any other activity would have been more useful. As a result of my wasted youth, years later I can beat 99% of the public at eight-ball. But I can't enjoy that sort of so-called victory. It doesn't feel like "winning" anything.
It feels as meaningful as if my opponent and I had kept logs of the hours we each had spent playing pool over our lifetimes and simply compared. It feels redundant to play the actual games.
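A quick worked check of the "best of five eliminates most of the luck" claim (an editorial addition; the probabilities are made up): if the weaker player wins any single, independent game with probability p, their chance of taking the series is the chance of winning in 3, 4, or 5 games.

```latex
\[
  P_{\text{series}} = p^3 + 3p^3(1-p) + 6p^3(1-p)^2
\]
% e.g. p = 0.25 gives roughly 0.10, and p = 0.40 gives roughly 0.32,
% so the series compresses whatever luck is left in a single game.
```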
↑ comment by Gunnar_Zarncke · 2014-12-09T21:34:42.521Z · LW(p) · GW(p)
This reminds me of my criteria for learning: "You have understood something when it appears to be easy." The mathematicians call this state 'trivial'. It has become easy because you trained the topic until the key aspects became part of your unconscious competence. Then it appears to yourself as easy - because you no longer need to think about it.
↑ comment by Viliam_Bur · 2014-12-09T10:50:04.163Z · LW(p) · GW(p)
Despite external evidence of their competence, those with the syndrome remain convinced that they are frauds and do not deserve the success they have achieved. Proof of success is dismissed as luck, timing, or as a result of deceiving others into thinking they are more intelligent and competent than they believe themselves to be.
Psychological research done in the early 1980s estimated that two out of five successful people consider themselves frauds and other studies have found that 70 percent of all people feel like impostors at one time or another. It is not considered a psychological disorder, and is not among the conditions described in the Diagnostic and Statistical Manual of Mental Disorders.
Replies from: timujin
↑ comment by timujin · 2014-12-09T11:30:42.705Z · LW(p) · GW(p)
Err, that's not it. I am no more successful than them. Or, at least, I kinda feel that everyone else is more successful than me as well.
Replies from: Elo↑ comment by Elo · 2014-12-25T01:11:48.135Z · LW(p) · GW(p)
Consider that maybe you might be wrong about the impostor syndrome. As a person without it, it's hard to know how you think/feel and how you concluded that you couldn't have it. But maybe it's worth asking: how would someone convince you to change your mind on this topic?
Replies from: timujin↑ comment by timujin · 2014-12-25T17:40:06.369Z · LW(p) · GW(p)
By entering some important situation where my and his comparative advantage in some sort of competence comes into play, and losing.
Replies from: Elo↑ comment by Elo · 2014-12-28T08:57:21.999Z · LW(p) · GW(p)
What if you developed a few bad heuristics about how other successful people were not inherently more successful but just got lucky (or got some other external grant of success) as they went along, whereas your hard-earned successes were due to successful personal skills... hard-earned, personally achieved success?
It's probably possible to see a therapist about it, but I would suggest you can work your own way around it (consider it a challenge that can be overcome with the correct growth mindset).
↑ comment by [deleted] · 2014-12-09T18:40:34.057Z · LW(p) · GW(p)
I think people are quick to challenge this type of impression because it pattern matches to known cognitive distortions involved in things like depression, or known insecurities in certain competitive situations.
For example, consider that most everyone will structure their lives such that their weaknesses are downplayed and their positive features are more prominent. This can happen either by choice of activity (e.g. the stereotypical geek avoids social games) or by more overt communication filtering (e.g. most people don't talk about their anger problems). Accordingly, it's never hard to find information that confirms your own relative incompetence, if there's some emotional tendency to look for it.
Aside from that, a great question is "to what ends am I making this comparison?" I find it unlikely that you have a purely academic interest in the question of your relative competence.
First, it can often be useful to know your relative competence in a specific competitive domain. But even here, this information is only one part of your decision process: you may be okay with e.g. choosing a lower expected rank in one career over a higher rank in another because you enjoy the work more, or find it more compatible with your values, or because it pays better, or leaves more time for your family, or you're risk averse, or it's more altruistic, etc. But knowing your likely rank along some dimension will tell you a bit about the likely pay-offs of competing along that dimension.
But what is the use of making an across-the-board self-comparison?
Suppose you constructed some general measure of competence across all domains. Suppose you found out you were below average (or even above average). Then what? It seems you're still in the same situation as before: you still must choose how to spend your time. The general self-comparison measure is nothing more than the aggregate of your expected relative ranks on specific sub-domains, which are more relevant to any specific choice. And as I said above, your expected rank in some area is far from the only bit of information you care about.
As an aside, a positive use for a self-comparison is to provide a role-model. If you find yourself unfavorably compared to almost everyone, consider yourself lucky that you have so many role-models to choose from! Since you are probably like other people in most respects, you can expect to find low-hanging fruit in many areas where you have poor relative performance.
But if you find (as many people will) that you've hit the point of diminishing returns regarding the time you spend comparing yourself to others, perhaps you can simply recognize this and realize that it's neither cowardly nor avoidant to spend your mental energy elsewhere.
↑ comment by NancyLebovitz · 2014-12-09T10:34:29.136Z · LW(p) · GW(p)
Possibly parallel-- I've had a feeling for a long time that something bad was about to happen. Relatively recently, I've come to believe that this isn't necessarily an accurate intuition about the world, it's muscle tightness in my abdomen. It's probably part of a larger pattern, since just letting go in the area where I feel it doesn't make much difference.
I believe that patterns of muscle tension and emotions are related and tend to maintain each other.
It's extremely unlikely that everyone is more competent than you at everything. If nothing else, your writing is better than that of a high proportion of people on the internet. Also, a lot of people have painful mental habits and have no idea that they have a problem.
More generally, you could explore the idea of everyone being more competent than you at everything. Is there evidence for this? Evidence against it? Is it likely that you're at the bottom of ability at everything?
This sounds to me like something worth taking to a therapist, bearing in mind that you may have to try more than one therapist to find one that's a good fit.
I believe there's strong psychological effect which can create that impression-- growing up around people who expect you to be incompetent. Now that I think about it, there may be genetic vulnerability involved, too.
Possibly worth exploring: free monthly Feldenkrais exercises--these are patterns of gentle movement which produce deep relaxation and easier movement. The reason I think you can get some evidence about your situation by trying Feldenkrais is that, if you find your belief about other people being more competent at everything goes away, even briefly, then you have some evidence that the belief is habitual.
Replies from: mwengler, timujin↑ comment by mwengler · 2014-12-11T15:03:39.745Z · LW(p) · GW(p)
I've had a feeling for a long time that something bad was about to happen.
Nancy, I believe you are describing anxiety: that you are anxious, and that if you went to a psychologist for therapy and were covered by insurance, they would list your diagnosis on the reimbursement form as "generalized anxiety disorder."
I say this not as a psychologist but as someone who was anxious much of his life. For me it was worth doing regular talking therapy and (it seems to me) hacking my anxiety levels slowly downward through directed introspection. I am still more timid than I would like in situations where, for example, I might be very direct telling a woman (of the appropriate sex) I love her, or putting my own ideas forward forcefully at work. But all of these things I do better now than I did in the past, and I don't consider my self-adjustment to be finished yet.
Anyway, if you haven't named what is happening to you as "anxiety," it might be helpful to consider that some of what has been learned about anxiety over time might be interesting to you, and that people who are discussing anxiety may often be discussing something relevant to you.
↑ comment by timujin · 2014-12-09T11:34:27.485Z · LW(p) · GW(p)
If nothing else, your writing is better than that of a high proportion of people on the internet.
Do you know me?
More generally, you could explore the idea of everyone being more competent than you at everything. Is there evidence for this? Evidence against it? Is it likely that you're at the bottom of ability at everything?
I find a lot of evidence for it, but I am not sure I am not being selective. For example, I am the only one in my peer group who never did any extra-curricular activities at school. While everyone had something like sports or hobbies, I seemed to only study at school and waste all my other time surfing the internet and playing the same video games over and over.
Replies from: ChristianKl, NancyLebovitz, MathiasZaman, NancyLebovitz↑ comment by ChristianKl · 2014-12-09T12:41:31.342Z · LW(p) · GW(p)
The idea that playing an instrument is a hobby while playing a video game isn't is completely cultural. It says something about values but little about competence.
Replies from: jaime2000↑ comment by jaime2000 · 2014-12-12T17:12:32.666Z · LW(p) · GW(p)
One important difference is that video games are optimized to be fun while musical instruments aren't. Therefore, playing an instrument can signal discipline in a way that playing a game can't.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-12-12T18:43:44.372Z · LW(p) · GW(p)
I'm not sure that's true. There's selection pressure on musical instruments to make them fun to use. Most of the corresponding training also mostly isn't optimised for learning but for fun.
Replies from: alienist↑ comment by alienist · 2014-12-13T04:54:22.331Z · LW(p) · GW(p)
There's also selection pressure on instruments to make them pleasant to listen to. There's no corresponding constraint on video games.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-12-13T13:52:11.374Z · LW(p) · GW(p)
There's no corresponding constraint on video games.
In an age of eSports I'm not sure that's true. Quite a lot of games are not balanced to make them fun for the average player but balanced for high level tournament play.
↑ comment by NancyLebovitz · 2014-12-09T16:04:01.539Z · LW(p) · GW(p)
Having a background belief that you're worse than everyone at everything probably lowered your initiative.
↑ comment by MathiasZaman · 2014-12-09T12:01:28.538Z · LW(p) · GW(p)
I seemed to only study at school and waste all my other time surfing the internet and playing the same video games over and over.
Obvious question: Are you better at those games than other people? (On average, don't compare yourself to the elite.)
How easy did studying come to you?
Replies from: timujin↑ comment by timujin · 2014-12-09T19:10:13.848Z · LW(p) · GW(p)
At THOSE games? Yes. I can complete about half of American McGee's Alice blindfolded. Other games? General gaming? No. Or, okay, I am better than non-gamers, but my kinda-gamer peers are curb-stomping me at multiplayer in every game.
Studying was very easy. Now that I am a university student, it is quite hard.
Replies from: MathiasZaman↑ comment by MathiasZaman · 2014-12-10T13:19:53.172Z · LW(p) · GW(p)
Studying was very easy. Now that I am a university student, it is quite hard.
Seems like you fell prey to the classic scenario of "being intelligent enough to breeze through high school and all I ended up with is a crappy work ethic."
University is as good a place as any to fix this problem. First of all, I encourage you to do all the things people tell you you should do, but most people don't: read up before classes, review after classes, read the extra material, ask your professors questions or for help, schedule periodic review sessions of the stuff you're supposed to know... You'll regret not doing those things when you get your degree but don't feel very competent about your knowledge. Try to make a habit out of this and it'll get easier in other aspects of your life.
And try new things. This is probably a cliché in the LW-sphere by now, but really try a lot of new things.
Replies from: timujin↑ comment by timujin · 2014-12-10T13:55:59.335Z · LW(p) · GW(p)
Thanks. Still, should I take it as "yes, you are less competent than people around you"?
Replies from: polymathwannabe↑ comment by polymathwannabe · 2014-12-10T14:29:53.674Z · LW(p) · GW(p)
Maybe just less disciplined than you need to be. "Less competent" is too confusingly relative to mean anything solid.
Replies from: timujin↑ comment by timujin · 2014-12-10T14:37:49.085Z · LW(p) · GW(p)
Well, here's a confusing part. I didn't tell the whole truth in the parent post; there are actually two areas where I am probably more competent than my peers, in which others openly envy me instead of the other way around. One is the ability to speak English (a foreign language; most of my peers wouldn't be able to ask this question here), and the other is discipline. Everyone actually envies me for almost never procrastinating, never forgetting anything, etc. Are we talking about different disciplines here?
Replies from: polymathwannabe↑ comment by polymathwannabe · 2014-12-10T14:45:58.824Z · LW(p) · GW(p)
If you already have discipline, what exactly is the difficulty you're finding to study now as compared to previous years?
Replies from: timujin↑ comment by timujin · 2014-12-10T14:57:40.416Z · LW(p) · GW(p)
Sometimes, I just have trouble understanding the subject areas. I am going to take MathiasZaman's advice: I always used my discipline to complete, on time and with quality, what needed to be completed, but never put it into anything extra. Mostly, though, it is (social) anxiety - I can't approach a professor with anything unless I have a pack of companions backing me up, and can't start a project unless a friend confirms that I correctly understand what it is that has to be done. And my companions have awful discipline, the worst of anyone I have ever worked with (which is not many). So I end up, for example, preparing all assignments on time, but handing them in only long after they are due, when a friend has prepared them. I am working on that problem, and it becomes less severe as time goes on.
Replies from: polymathwannabe, Viliam_Bur, ChristianKl↑ comment by polymathwannabe · 2014-12-10T15:04:21.203Z · LW(p) · GW(p)
I agree; group assignments are the worst. Is there any way you can get the university to let you take unique tests for the themes you already master?
Replies from: timujin↑ comment by timujin · 2014-12-10T15:10:48.810Z · LW(p) · GW(p)
First of all: I don't agree that group assignments are bad. Those problems are my problems, and most complex tasks in real life really benefit from, or require, collaboration. I think that universities should have more group assignments and projects, even if it would mean I'll drop out.
Second, I wasn't talking about group assignments in my post. I was talking about being too anxious to work on your own personal assignment, unless a friend has already done it and can provide confirmation.
↑ comment by Viliam_Bur · 2014-12-11T11:12:47.450Z · LW(p) · GW(p)
So it seems like you can solve the problems... but then you are somehow frozen by fear that maybe your solution is not correct. Until someone else confirms that it is correct, and then you are able to continue. Solving the problem is not a problem; giving it to the teacher is.
On the intellectual level, you should update the prior probability that your solutions are correct.
On the emotional level... what exactly is the horrible outcome your imagination shows you if you were to give the professor a wrong solution?
It is probably something that feels stupid if you try to explain it. (Maybe you imagine the professor screaming at you loudly, and the whole university laughing at you. It's not realistic, but it may feel so.) But that's exactly the point. On some level, something stupid happens in your mind, because otherwise you wouldn't have this irrational problem. It doesn't make sense, but it's there in your head, influencing your emotions and actions. So the proper way is to describe your silent horrible vision explicitly, as specifically as you can (bring it from the darkness to light), until your own mind finally notices that it really was stupid.
Replies from: timujin↑ comment by timujin · 2014-12-12T09:13:27.012Z · LW(p) · GW(p)
I have no trouble imagining all the horrible outcomes, because I did get into trouble several times in similar scenarios, where getting confirmation from a friend would have saved me. For example, a couple of hours after giving my work to a teacher, I remembered that my friend hadn't been there, even though he was ready. I asked him about it, and it turned out that I had given my work to the wrong teacher, and getting all my hand-crafted drawings back ended up being a very time- and effort-consuming task.
↑ comment by ChristianKl · 2014-12-11T00:41:01.566Z · LW(p) · GW(p)
Reading that, it sounds like your core issue is low self-confidence.
Taking an IQ test might help to dispel the idea that you are below average. You might be under the LW average IQ of 140, but you are probably well above 100, which is the average in society.
Replies from: timujin↑ comment by timujin · 2014-12-11T09:42:29.916Z · LW(p) · GW(p)
I can guess that my IQ has three digits. It's just that it doesn't enable me to do things better than others. Except solving iq tests, I guess.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-12-11T14:19:33.821Z · LW(p) · GW(p)
It seems that you have a decent IQ. Additionally, you seem to be conscientious and can avoid procrastination, which is a very, very valuable characteristic.
On the other hand you have issues with self esteem. As far as I understand IQ testing gets used by real psychologists in cases like this.
Taking David Burns CBT book, "The Feeling Good Handbook" and doing the exercises every day for 15 minutes would likely do a lot for you, especially if you can get yourself to do the exercises regularly.
I also support Nancy's suggestion of Feldenkrais.
Replies from: timujin↑ comment by timujin · 2014-12-12T09:17:25.529Z · LW(p) · GW(p)
Another stupid question to boot: will all this make me more content with my current situation? While not being a pleasant feeling, my discontent with my competence does serve as a motivator to actually study. I wouldn't have asked this question here and wouldn't receive all the advice if I were less competent than everyone else and okay with it.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-12-12T22:15:04.344Z · LW(p) · GW(p)
That's a really interesting question, and I don't have an answer to it. Do you have any ideas about how your life might be different in positive ways if you didn't think you were less competent than everyone about everything? Is there anything you'd like to do just because it's important to you?
Replies from: timujin↑ comment by timujin · 2014-12-13T16:31:48.293Z · LW(p) · GW(p)
Do you have any ideas about how your life might be different in positive ways if you didn't think you were less competent than everyone about everything?
Not anything specific.
Is there anything you'd like to do just because it's important to you?
I have goals and values beyond being content or happy, but they are more than a couple of inferential steps away from my day-to-day routine, and I don't have that inner fire thingy that would bridge the gap. So, more often than not, they are not the main component of my actual motivation. Also, I am afraid of the possibility of having my values changed.
↑ comment by NancyLebovitz · 2014-12-09T11:45:17.486Z · LW(p) · GW(p)
I don't think I know you, but I'm not that great at remembering people. I made the claim about your writing because I've spent a lot of time online.
I'm sure you're being selective about the people you're comparing yourself to.
↑ comment by IlyaShpitser · 2014-12-09T13:47:07.726Z · LW(p) · GW(p)
There are two separate issues: morale management and being calibrated about your own abilities.
I think the best way to be well-calibrated is to approximate PageRank -- to get a sense of your competence, don't ask yourself; instead, average the extracted opinions of others who are themselves considered competent and have no incentive to mislead you (this last bit is tricky, and the extraction process may have to be slightly indirect).
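A toy sketch of that PageRank-style aggregation (an editorial illustration, not IlyaShpitser's code; the matrix values and the power-iteration details are assumptions):

```python
import numpy as np

# Toy illustration of the "approximate PageRank" idea above: weight each
# person's opinion of you by how competent the rest of the group judges
# that person to be.  All numbers here are invented.

# endorse[i, j] = how competent rater j thinks person i is (0..1)
endorse = np.array([
    [0.0, 0.8, 0.7, 0.6],   # person 0 as rated by 0..3
    [0.9, 0.0, 0.8, 0.7],   # person 1
    [0.5, 0.6, 0.0, 0.4],   # person 2
    [0.6, 0.7, 0.5, 0.0],   # person 3
])

# Power iteration: a person's weight is the endorsement they receive,
# weighted by their endorsers' own weights (PageRank without damping).
weights = np.ones(4) / 4
for _ in range(100):
    weights = endorse @ weights
    weights /= weights.sum()

# Your calibrated self-estimate is then the weight-weighted average of
# what the others say about you, not a raw average of their opinions.
you = 0
others = [j for j in range(4) if j != you]
estimate = sum(endorse[you, j] * weights[j] for j in others) / sum(weights[j] for j in others)
print(estimate)
```

The only design choice is that a rating counts for more when it comes from someone the rest of the graph rates highly; a raw average is the special case where every rater gets equal weight.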
Morale is hard, and person specific. My experience is that in long term projects/goals, morale becomes a serious problem long before the situation actually becomes bad. I think having "wolverine morale" ("You know what Mr. Grizzly? You look like a wuss, I can totally take you!") is a huge chunk of success, bigger than raw ability.
Replies from: Lumifer↑ comment by elharo · 2014-12-09T12:26:23.331Z · LW(p) · GW(p)
Possible, but unlikely. We're all just winging it and as others have pointed out, impostor syndrome is a thing.
↑ comment by EphemeralNight · 2014-12-12T03:26:22.354Z · LW(p) · GW(p)
I sometimes have a similar experience, and when I do, it is almost always simply an effect of my own standards of competence being higher than those around me.
Imagine some sort of problem arises in the presence of a small group. The members of that group look at each other, and whoever signals the most confidence gets first crack at the problem. But this more-confident person then does not reveal any knowledge or skill that the others do not possess, because said confidence was entirely due to a higher willingness to potentially make the problem worse through trial and error.
So, in this scenario, feeling less competent does not mean you are less competent; it means you are more risk-averse. Do you have a generalized paralyzing fear of making the problem worse? If so, welcome to the club. If not, never mind.
↑ comment by mwengler · 2014-12-11T15:12:12.709Z · LW(p) · GW(p)
I personally am a fan of talking therapy. If you are thinking something is worth asking a therapist about, it is worth asking a therapist about. And beyond the generalities, thinking you are not good enough is squarely the kind of thing it can be helpful to discuss with a therapist.
Consider the propositions: 1) everyone is more competent than you at everything, and 2) you can carry on a coherent conversation on LessWrong. I am pretty sure that these are mutually exclusive propositions. I'm pretty sure just from reading some of your comments that you are more competent than plenty of other people at a reasonable range of intellectual pursuits.
Anything you can talk to a therapist about you can talk to your friends about. Do they think you are less competent than everybody else? They might point out to you in a discussion some fairly obvious evidence for or against this proposition that you are overlooking.
Replies from: timujin↑ comment by timujin · 2014-12-12T14:38:30.222Z · LW(p) · GW(p)
I asked around among my friends. Most were unable to point out a single thing I am good at, except speaking English very well for a foreign language, and having good willpower. One said "hmmm, maybe math?" (as it turned out, he was fast-talked by the math babble that was auraing around me for some time after I had read Godel, Escher, Bach), and several pointed out that I am handsome (while a nice perk, I don't want that to be my defining proficiency).
Replies from: mwengler↑ comment by mwengler · 2014-12-12T14:52:28.010Z · LW(p) · GW(p)
Originally you expressed concern that all other people were better than you at all the things you might do.
But here you find out from your friends that for each thing you do there are other people around you who do it better.
In a world with 6 billion people, essentially every one of us can find people who are better at what we are good at than we are. So join the club. What works is to take some pleasure in doing things.
Only you can improve your understanding of the world, for instance. No one in the world is better at increasing your understanding of the world than you are. I read comments here and post "answers" here to increase my understanding of the world. It doesn't matter that other people here are better at answering these questions, or that other people here have a better understanding of the world than I do. I want to increase my understanding of the world and I am the only person in the world who can do that.
I also wish to understand taking pleasure and joy from the world, and I work to increase my pleasure and joy in the world. No one can do that for me better than I can. You might take more joy than me in kissing that girl over there. Still, I will kiss her if I can, because having you kiss her gives me much less joy and pleasure than kissing her myself, even if I am getting less joy from kissing her than you would get for yourself if you kissed her.
The concern you express about only participating in things where you are better than everybody else is just a result of your evolution as a human being. The genes that make you want to be better than others around you have, in the past, helped your ancestors find effective and capable mates, able to keep their children alive and able to produce children who would find effective and capable mates. But your genes are just your genes; they are not the "truth of the world." You can make the choice to do things because you want the experience of doing them, and you will find you are better than anybody else in the world, by far, at giving yourself experiences.
↑ comment by LizzardWizzard · 2014-12-09T10:30:45.357Z · LW(p) · GW(p)
I suppose that the problem emerged only because you communicate only with people of your own sort and level of awareness; try going on a trip to some rural village, or start conversations with taxi drivers, dishwashers, janitors, cooks, security guards, etc.
↑ comment by Lumifer · 2014-12-09T18:53:17.163Z · LW(p) · GW(p)
Is that basically a self-confidence problem?
Replies from: timujin↑ comment by timujin · 2014-12-09T19:01:59.148Z · LW(p) · GW(p)
Is it? I don't know.
Replies from: Lumifer↑ comment by Lumifer · 2014-12-09T19:34:26.425Z · LW(p) · GW(p)
Well, does it impact what you are willing to do or try? Or it's just an abstract "I wish I were as cool" feeling?
If you imagine yourself lacking that perception (e.g. imagine everyone's IQ -- except yours -- dropping by 20 points), would the things you do in life change?
Replies from: timujin↑ comment by timujin · 2014-12-09T20:37:55.389Z · LW(p) · GW(p)
Guesses here. I would be taking up more risks in areas where success depends on competition. I would become less conforming, more arrogant and cynical. I would care less about producing good art, and good things in general. I would try less to improve my social skills, empathy and networking, and focus more on self-sufficiency. I wouldn't have asked this question here, on LW.
↑ comment by MathiasZaman · 2014-12-09T12:00:18.868Z · LW(p) · GW(p)
I frequently feel similar and I haven't found a good way to deal with those feelings, but it's implausible that everyone around you is more competent at everything. Some things to take into account:
- Who are you comparing yourself to? Peers? Everyone you meet? Successful people?
- What traits are you comparing? It's unlikely that someone who is, for example, better at math than you are is also superior in every other area.
- Maybe you haven't found your advantage or a way to exploit this.
- Maybe you haven't spent enough time on one thing to get really good at it.
Long shot: Do you think you might have ADHD?.pdf) (pdf warning) Alternatively, go over the diagnostic criteria
Replies from: gjm, timujin↑ comment by gjm · 2014-12-09T12:14:55.533Z · LW(p) · GW(p)
Your link is broken because it has parentheses in the URL. Escape them with backslashes to unbreak it.
Replies from: MathiasZaman↑ comment by MathiasZaman · 2014-12-09T12:18:55.003Z · LW(p) · GW(p)
Thank you very much.
Replies from: gjm↑ comment by timujin · 2014-12-12T09:05:16.043Z · LW(p) · GW(p)
Who are you comparing yourself to? Peers? Everyone you meet? Successful people?
Peers.
What traits are you comparing? It's unlikely that someone who is, for example, better at math than you are is also superior in every other area.
It being unlikely and still seeming to happen is the reason I asked this question.
Maybe you haven't found your advantage or a way to exploit this.
Maybe you haven't spent enough time on one thing to get really good at it.
Maybe. And everyone else did, thus denying me a competitive advantage?
comment by Gram_Stone · 2014-12-30T05:27:09.082Z · LW(p) · GW(p)
Is it a LessWrongian faux pas to comment only to agree with someone? Here's the context:
That's the kind of person that goes on to join LW and tell you. There are also people who read a sequence post or two because they followed a link from somewhere, weren't shocked at all, maybe learned something, and left. In fact I'd expect they're the vast majority.
I was going to say that I agree and that I had not considered my observation as an effect of survivorship bias.
I guess I thought it might be useful to explicitly relate what he said to a bias. Maybe that's just stating the obvious here? Maybe I should do it anyway because it might help someone?
I'd also like to know about this in less specific contexts.
comment by Gram_Stone · 2014-12-30T05:21:20.480Z · LW(p) · GW(p)
What prerequisite knowledge is necessary to read and understand Nick Bostrom's Superintelligence?
comment by Gondolinian · 2014-12-17T15:42:57.153Z · LW(p) · GW(p)
Mostly just out of curiosity:
What happens karma-wise when you submit a post to Discussion, it gets some up/downvotes, you resubmit it to Main, and it gets up/downvotes there? Does the post's score transfer, or does it start from 0?
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2014-12-19T20:59:34.079Z · LW(p) · GW(p)
The post's score transfers, but I think that the votes that were applied when it was in Discussion don't get the x10 karma multiplier that posts in Main otherwise do.
Replies from: Gondolinian↑ comment by Gondolinian · 2014-12-19T21:05:28.736Z · LW(p) · GW(p)
Thanks!
comment by ilzolende · 2014-12-14T20:35:09.099Z · LW(p) · GW(p)
How do I improve my ability to simulate/guess other people's internal states and future behaviors? I can, just barely, read emotions, but I make the average human look like a telepath.
Replies from: hyporational↑ comment by hyporational · 2014-12-15T01:51:04.159Z · LW(p) · GW(p)
It's trial and error mostly, paying attention to other people doing well or making mistakes, getting honest feedback from a skilled and trusted friend. Learning social skills is like learning to ride a bike, reading about it doesn't give you much of an advantage.
The younger you are the less it costs to make mistakes. I think a social job is a good way to learn because customers are way less forgiving than other people you randomly meet. You could volunteer for some social tasks too.
If your native hardware is somehow socially limited then you might benefit from reading a little bit more and you might have to develop workarounds to use what you've got to read people. It's difficult to learn from mistakes if you don't know you're making them.
One thing I've learned about the average human looking like a telepath is that most people are way too certain about their particular assumption when there are actually multiple possible ways to understand a situation. People generally aren't as great at reading each other as they think they are.
Replies from: ilzolende↑ comment by ilzolende · 2014-12-21T05:52:14.107Z · LW(p) · GW(p)
My native hardware is definitely limited - I'm autistic.
The standard quick-and-dirty method of predicting others seems to be "model them as slightly modified versions of you", but when other people's minds are more similar to each other than they are to you, the method works far better for them than it does for you.
My realtime modeling isn't that much worse than other people's, but other people can do a lot more with a couple of minutes and no distractions than I can.
Thanks a bunch for the suggestions!
Replies from: hyporational↑ comment by hyporational · 2014-12-23T17:08:27.817Z · LW(p) · GW(p)
The standard quick-and-dirty method of predicting others seems to be "model them as slightly modified versions of you"
It certainly doesn't feel that way to me, but I might have inherited some autistic characteristics since there are a couple of autistic people in my extended family. Now that I've worked with people more, it's more like I have several basic models of people like "rational", "emotional", "aggressive", "submissive", "assertive", "polite", "stupid", "smart", and then modify those first impressions according to additional information.
I definitely try not to model other people based on my own preferences since they're pretty unusual, and I hate it when other people try to model me based on their own preferences especially if they're emotional and extroverted. I find that kind of empathy very limited, and these days I think I can model a wider variety of people than many natural extroverts can, in the limited types of situations where I need to.
Replies from: ilzolende↑ comment by ilzolende · 2014-12-25T07:40:32.115Z · LW(p) · GW(p)
Thanks! Your personality archetypes/stereotypes sound like a quick-and-dirty modeling system that I can actually use, but one that I shouldn't explain to the people who know me by my true name.
That probably explains why I hadn't heard about it already: if it were less offensive-sounding, then someone would have told me about it. Instead, we get the really-nice-sounding but not very practical suggestions about putting yourself in other peoples' shoes, which is better for basic* morality than it is for prediction.
*By "basic", I mean "stuff all currently used ethical systems would agree on", like 'don't hit someone in order to acquire their toys.'
comment by Capla · 2014-12-12T19:11:21.476Z · LW(p) · GW(p)
Is "how do I get better at sex?" a solved problem?
Is it just a matter of getting a partner who will give you feedback and practicing?
Replies from: Lumifer↑ comment by Lumifer · 2014-12-12T20:12:14.691Z · LW(p) · GW(p)
I think "how do you get better", mostly yes, but "how do you get to be very very good", mostly no.
Replies from: Capla↑ comment by Capla · 2014-12-12T21:11:02.082Z · LW(p) · GW(p)
Ok. Is there a trick to that one or do you just need to have gotten the lucky genes?
Replies from: Lumifer↑ comment by Lumifer · 2014-12-12T21:23:32.932Z · LW(p) · GW(p)
"No", as in "not a solved problem" implies that no one knows :-)
Whether you need lucky genes is hard to tell. Maybe all you need is lack of unlucky ones :-/
Replies from: Capla↑ comment by Capla · 2014-12-12T21:41:37.759Z · LW(p) · GW(p)
Is it a problem that anyone has put significant effort into? What's the state of the evidence?
Now that I think about it, I'm a little surprised there isn't a subculture of people trying to excel at sex, sort of the way pickup artists try to excel at getting sex.
Is this because there is no technique for doing sex well? Because most people think there's no technique for doing sex well? Because sex is good enough already? Because sex is actually more about status than pleasure? Because such a subculture exists and I'm ignorant of it?
Replies from: ChristianKl, Lumifer↑ comment by ChristianKl · 2014-12-13T18:49:07.659Z · LW(p) · GW(p)
Because sex is good enough already?
Data suggest that a fair number of women don't get orgasms during sex, but the literature suggests that they could, given the proper environment. Squirting in women seems to happen seldom enough that the UK bans it in its porn for being abnormal. But of course sex is about more than just orgasm length and intensity ;)
Because such a subculture exists and I'm ignorant of it?
Yes. In general, one of the things that distinguishes the pickup artist community is that it's full of people who would rather sit in front of their computers talking about techniques than interact face to face. That means you find a lot of information about it on the internet. Many of the people who are very kinesthetic don't spend much time on the net.
But that doesn't mean there's no information available on the internet.
Getting ideas about how sex is supposed to work from porn is very bad. Porn is created to please the viewer, not the actors. Porn producers have to worry about issues like camera angles. Sensual touch can create feelings without looking good on the camera. Porn often ignores the state of mind of the actors.
Books, on the other hand, do provide some knowledge, even if they alone aren't enough. Tim Ferriss has two chapters about the subject in his book "The 4-Hour Body", including a basic anatomy lesson on how the g-spot works. Apart from that I'm not familiar with English literature on the subject, but Tim Ferriss suggests, among others, http://www.tinynibbles.com/ for further reading.
The community in which I would expect the most knowledge is polyamorous people, who speak very openly with each other.
Using our cherished rationality skills we can start to break the skill down into subareas:
1) Everybody is different: Don't assume that every man or woman wants the same thing.
2) Consent: Don't do something that your partner doesn't want you to do to them. When in doubt, ask.
3) Mindset: Lack of confidence and feeling pressure to perform can get in the way of being present. Various forms of "sex is bad" beliefs can reduce enjoyment.
Authentic expression, and doing in every moment what feels right, is a good frame. If you need something to occupy your mind, think in terms of investigation. Be curious about the effects of your own actions. What happens in your own body? What happens in the body of your partner? How does it feel? Always be open to the present.
If you want to learn to be in that frame, classes in "Movement Science" (in dance studios) or contact improvisation can teach you to access that state of mind. In Berlin where I live that community also overlaps with the poly crowd.
4) Dominance: Higher testosterone, and the behavior it produces, mean better sex.
5) Open communication: Creating a space where desires can be expressed without any fear of judgement is a skill that most people don't have.
6) Fine control over your own body. There are many ways to train those skills.
7) Perceptions of the partner.
↑ comment by Lumifer · 2014-12-12T23:51:56.406Z · LW(p) · GW(p)
I'm a little surprised there isn't a subculture of people trying to excel at sex.
I'm sure there is, but I don't think it would want to be very... public about it. For one thing, I wouldn't be surprised if competent professionals were very good (and very expensive).
Given Christianity's prudishness (thank you, St. Augustine), you may also want to search outside of the Western world -- Asia, including India, sounds promising.
But as usual, one of the first questions is what do you want to optimize for. And don't forget that men and women start from quite different positions.
Replies from: Capla
comment by hamnox · 2014-12-18T00:10:03.635Z · LW(p) · GW(p)
Here I be, looking at a decade-old Kurzweil book, and I want to know whether the trends he's graphing hold up in later years. I have no inkling of where on earth one GETs these kinds of factoids, except by some mystical voodoo powers of Research bestowed by Higher Education. It's not just guesstimation... probably.
Bits per Second per Dollar for wireless devices? Smallest DRAM Half Pitches? Rates of adoption for pre-industrial inventions? From whence do all these numbers come and how does one get more recent collections of numbers?
Replies from: knb
comment by Ebthgidr · 2014-12-10T03:04:02.434Z · LW(p) · GW(p)
A question about Löb's theorem: assume not provable(X). Then, by the rules of if-then statements, "if provable(X) then X" is provable. But then, by Löb's theorem, provable(X), which is a contradiction. What am I missing here?
Replies from: DanielFilan↑ comment by DanielFilan · 2014-12-10T03:35:53.916Z · LW(p) · GW(p)
I'm not sure how you're getting from not provable(X) to provable(provable(X) -> X), and I think you might be mixing meta levels. If you could prove not provable(X), then I think you could prove (provable(X) -> X), which then gives you provable(X). Perhaps the solution is that you can never prove not provable(X)? I'm not sure about this though.
Replies from: Ebthgidr, Kindly↑ comment by Ebthgidr · 2014-12-10T10:37:36.043Z · LW(p) · GW(p)
I forget the formal name for the theorem, but isn't (if X then Y) iff (not-X or Y) provable in PA? Because I was pretty sure that's a fundamental theorem in first-order logic. Your solution is the one that looked best, but it still feels wrong. Here's why: Say P is provable. Then not-P is provably false. Then not(provable(not-P)) is provable. Not being able to prove not(provable(X)) means nothing is provable.
Replies from: DanielFilan↑ comment by DanielFilan · 2014-12-10T11:09:14.681Z · LW(p) · GW(p)
You're right that (if X then Y) is just fancy notation for (not(X) or Y). However, I think you're mixing up levels of where things are being proved. For the purposes of the rest of this comment, I'll use provable(X) to mean that PA or whatever proves X, and not that we can prove X. Now, suppose provable(P). Then provable(not(not(P))) is derivable in PA. You then claim that not(provable(not(P))) follows in PA, that is to say, that provable(not(Q)) -> not(provable(Q)). However, this is precisely the statement that PA is consistent, which is not provable in PA. Therefore, even though we can go on to prove not(provable(not(P))), PA can't, so that last step doesn't work.
Replies from: Ebthgidr, Ebthgidr↑ comment by Ebthgidr · 2014-12-16T12:29:47.457Z · LW(p) · GW(p)
Wait. Not(provable(consistency)) is provable in PA? Then run that through the above.
Replies from: DanielFilan↑ comment by DanielFilan · 2014-12-16T23:52:21.461Z · LW(p) · GW(p)
Not(provable(consistency)) is provable in PA?
I'm not sure that this is true. I can't find anything that says either way, but there's a section on Gödel's second incompleteness theorem in the book "Set theory and the continuum hypothesis" by Paul Cohen that implies that the theorem is not provable in the theory that it applies to.
Replies from: Ebthgidr↑ comment by Ebthgidr · 2014-12-17T03:15:09.399Z · LW(p) · GW(p)
I'll rephrase it this way:
For all C:
Either provable(C) or not(provable(C))
If provable(C), then provable(C)
If not provable(C), then use the above logic to prove provable(C).
Therefore all C are provable.
↑ comment by DanielFilan · 2014-12-17T07:05:02.405Z · LW(p) · GW(p)
Which "above logic" are you referring to? If you mean your OP, I don't think that the logic holds, for reasons that I've explained in my replies.
Replies from: Ebthgidr↑ comment by Ebthgidr · 2014-12-17T17:44:27.926Z · LW(p) · GW(p)
Your reasons were that not(provable(C)) isn't provable in PA, right? If so, then I will rebut thusly: the setup in my comment immediately above (i.e. either provable(C) or not provable(C)) gets rid of that.
Replies from: DanielFilan↑ comment by DanielFilan · 2014-12-18T00:47:12.758Z · LW(p) · GW(p)
I'm not claiming that there is no proposition C such that not(provable(C)), I'm saying that there is no proposition C such that provable(not(provable(C))) (again, where all of these 'provable's are with respect to PA, not our whole ability to prove things). I'm not seeing how you're getting from not(provable(not(provable(C)))) to provable(C), unless you're commuting 'not's and 'provable's, which I don't think you can do for reasons that I've stated in an ancestor to this comment.
Replies from: Ebthgidr↑ comment by Ebthgidr · 2014-12-18T03:30:07.905Z · LW(p) · GW(p)
Well, there is, unless I misunderstand what meta level provable(not(provable(consistency))) is on.
Replies from: DanielFilan↑ comment by DanielFilan · 2014-12-18T10:54:58.172Z · LW(p) · GW(p)
I think you do misunderstand that, and that the proof of not(provable(consistency(PA))) is not in fact in PA (remember that the "provable()" function refers to provability in PA). Furthermore, regarding your comment before the one that I am responding to now, just because not(provable(C)) isn't provable in PA, doesn't mean that provable(C) is provable in PA: there are lots of statements P such that neither provable(P) nor provable(not(P)), since PA is incomplete (because it's consistent).
Replies from: Ebthgidr↑ comment by Ebthgidr · 2014-12-18T19:59:31.032Z · LW(p) · GW(p)
That doesn't actually answer my original question--I'll try writing out the full proof.
Premises:
P or not-P is true in PA
Also, because of that, if p -> q and not(p) -> q, then q -- use the rules of distribution over and/or
So:
1: provable(P) or not(provable(P)): by premise 1
2: If provable(P), then provable(P): switch "if p then p" to "not p or p"; premise 1
3: If not(provable(P)), then provable(if provable(P) then P): since "if p then q" = "not p or q" and not(not(p)) = p
4: Therefore, if not(provable(P)) then provable(P): 3 and Löb's theorem
5: Therefore provable(P): by premise 2, line 2, and line 4.
Where's the flaw? Is it between lines 3 and 4?
Replies from: DanielFilan↑ comment by DanielFilan · 2014-12-18T21:17:27.305Z · LW(p) · GW(p)
I think step 3 is wrong. Expanding out your logic, you are saying that if not(provable(P)), then (if provable(P) then P), then provable(if provable(P) then P). The second step in this chain is wrong, because there are true facts about PA that we can prove, that PA cannot prove.
Replies from: Ebthgidr↑ comment by Ebthgidr · 2014-12-19T00:40:39.496Z · LW(p) · GW(p)
So the statement (if not(p) then (if p then q)) is not provable in PA? Doesn't it follow immediately from the definition of if-then in PA?
Replies from: DanielFilan↑ comment by DanielFilan · 2014-12-19T07:41:26.356Z · LW(p) · GW(p)
(if not(p) then (if p then q)) is provable. What I'm claiming isn't necessarily provable is (if not(p) then provable(if provable(p) then q)), which is a different statement.
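To put the distinction in symbols (a small sketch, with □ standing for "provable in PA"; this notation is an added illustration, not something from the exchange itself):

```latex
\[
\underbrace{\mathrm{PA} \vdash \neg A \to (A \to B)}_{\text{a propositional tautology, provable for any } A,\, B}
\qquad\text{vs.}\qquad
\underbrace{\mathrm{PA} \nvdash P \;\Longrightarrow\; \mathrm{PA} \vdash (\Box P \to P)}_{\text{a meta-level inference, not licensed}}
\]
```

The left-hand fact lives inside PA; the right-hand move goes from a fact about PA to a claim that PA itself proves something, which would require PA to prove ¬□P, and a consistent PA never does that. Löb's theorem then turns any PA-proof of (□P → P) into a PA-proof of P.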
Replies from: Ebthgidr↑ comment by Ebthgidr · 2014-12-19T18:12:12.338Z · LW(p) · GW(p)
Oh, that's what I've been failing to get across.
I'm not saying if not(p) then (if provable(p) then q). I'm saying if not provable(p) then (if provable(p) then q)
Replies from: DanielFilan↑ comment by DanielFilan · 2014-12-20T06:10:52.655Z · LW(p) · GW(p)
I'm saying if not provable(p) then (if provable(p) then q)
You aren't saying that though. In the post where you numbered your arguments, you said (bolding mine)
if not(provable(P)) then provable(if provable(P) then P)
which is different, because it has an extra 'provable'.
Replies from: Ebthgidr↑ comment by Ebthgidr · 2014-12-20T23:28:03.216Z · LW(p) · GW(p)
So then here's a smaller lemma: for all x and all q:
If(not(x))
Then provable(if x then q): by definition of if-then
So replace x by Provable(P) and q by p.
Where's the flaw?
Replies from: DanielFilan↑ comment by DanielFilan · 2014-12-21T23:44:23.813Z · LW(p) · GW(p)
The flaw is that you are correctly noticing that provable(if not(x) then (if x then q)), and incorrectly concluding that if not(x), then provable(if x then q). It is true that if not(x) then (if x then q), but not(x) is not necessarily provable, so (if x then q) is also not necessarily provable.
Replies from: Ebthgidr↑ comment by Ebthgidr · 2014-12-22T01:22:28.669Z · LW(p) · GW(p)
Is x or not x provable? Then use my proof structure again.
Replies from: DanielFilan↑ comment by DanielFilan · 2014-12-22T06:27:11.498Z · LW(p) · GW(p)
The whole point of this discussion is that I don't think that your proof structure is valid. To be honest, I'm not sure where your confusion lies here. Do you think that all statements that are true in PA are provable in PA? If not, how are you deriving provable(if x then q) from (if x then q)?
In regards to your above comment, just because you have provable(x or not(x)) doesn't mean you have provable(not(x)), which is what you need to deduce provable(if x then q).
Replies from: Ebthgidr↑ comment by Ebthgidr · 2014-12-22T17:54:45.391Z · LW(p) · GW(p)
To answer the below: I'm not saying that provable(X or not X) implies provable(not X). I'm saying... I'll just put it in lemma form (P(x) means provable(x)):
If P(if X then Q) AND P(if not X then Q)
Then P(not X or Q) and P(X or Q): by the rules of if-then
Then P((X and not X) or Q): by the rules of distribution
Then P(Q): by the rules of or-statements
So my proof structure is as follows: prove that both provable(P) and not provable(P) imply provable(P). Then, by the above lemma, provable(P). I don't need to prove provable(not(provable(P))); that's not required by the lemma. All I need to prove is that the logical operations that lead from not(provable(P)) to provable(P) are truth- and provability-preserving.
Replies from: DanielFilan, DanielFilan↑ comment by DanielFilan · 2014-12-23T08:25:44.847Z · LW(p) · GW(p)
Breaking my no-comment commitment because I think I might know what you were thinking that I didn't realise that you were thinking (won't comment after this though): if you start with (provable(provable(P)) or provable(not(provable(P)))), then you can get your desired result, and indeed, provable(provable(P) or not(provable(P))). However, provable(Q or not(Q)) does not imply provable(Q) or provable(not(Q)), since there are undecidable questions in PA.
Replies from: Ebthgidr↑ comment by Ebthgidr · 2014-12-23T10:36:40.799Z · LW(p) · GW(p)
Ohhh, thanks. That explains it. I feel like there should exist things for which provable(not(p)), but I can't think of any offhand, so that'll do for now.
Replies from: DanielFilan↑ comment by DanielFilan · 2014-12-23T22:54:31.206Z · LW(p) · GW(p)
↑ comment by DanielFilan · 2014-12-23T02:22:31.040Z · LW(p) · GW(p)
I agree that if you could prove that (if not(provable(P)) then provable(P)), then you could prove provable(P). That being said, I don't think that you can actually prove (if not(provable(P)) then provable(P)). A few times in this thread, I've shown what I think the problem is with your attempted proof - the second half of step 3 does not follow from the first half. You are assuming X, proving Y, then concluding provable(Y), which is false, because X itself might not have been provable. I am really tired of this thread, and will no longer comment.
↑ comment by Kindly · 2014-12-10T05:28:15.761Z · LW(p) · GW(p)
As far as I know, that is actually the solution. If you could prove "not provable(X)", then in particular you have proven that the proof system you're working in is consistent (an inconsistent system proves everything by explosion). But Gödel.
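A compact way to see why (a sketch in standard provability-logic notation, with □ for "provable in PA"; the derivation is an added illustration, assuming PA is consistent, not something stated in the thread):

```latex
\[
\begin{aligned}
&\text{Assume } \mathrm{PA} \vdash \neg\Box X. \\
&\mathrm{PA} \vdash \neg\Box X \to (\Box X \to X)  && \text{(propositional tautology)} \\
&\mathrm{PA} \vdash \Box X \to X                   && \text{(modus ponens)} \\
&\mathrm{PA} \vdash X                              && \text{(L\"ob's theorem)} \\
&\mathrm{PA} \vdash \Box X                         && \text{(first derivability condition)} \\
&\mathrm{PA} \vdash \Box X \wedge \neg\Box X       && \text{(so PA would be inconsistent)}
\end{aligned}
\]
```

So a consistent PA never proves not(provable(X)) for any X; taking X to be "0=1" gives Gödel's second incompleteness theorem.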
comment by JQuinton · 2014-12-08T20:11:57.558Z · LW(p) · GW(p)
Looking for some people to refute this harebrained idea I recently came up with.
The time period from the advent of the industrial revolution to the so-called digital revolution was about 150-200 years. Even though computers were being used around WWII, widespread computer use didn't start to shake things up until 1990 or so. I would imagine that AI would constitute a similar fundamental shift in how we live our lives. So would it be a reasonable extrapolation to think that widespread AI would be about 150-200 years after the beginning of the information age?
Replies from: sixes_and_sevens, shminux, NobodyToday↑ comment by sixes_and_sevens · 2014-12-08T20:41:17.367Z · LW(p) · GW(p)
By what principle would such an extrapolation be reasonable?
↑ comment by Shmi (shminux) · 2014-12-08T21:15:28.526Z · LW(p) · GW(p)
If you are doing reference class forecasting, you need at least a few members in your reference class and a few outside of it, together with the reasons why some are in and others out. If you are generalizing from one example, then, well...
↑ comment by NobodyToday · 2014-12-08T21:40:32.305Z · LW(p) · GW(p)
I'm a first-year AI student, and we are currently in the middle of exploring AI 'history'. Of course I don't know a lot about AI yet, but the interesting part about learning the history of AI is that in some sense the climax of AI research is already behind us. People got very interested in AI after the Dartmouth conference ( http://en.wikipedia.org/wiki/Dartmouth_Conferences ) and were so optimistic that they thought they could make an artificially intelligent system in 20 years. And here we are, still struggling with the seemingly simplest things, such as computer vision.
The problem is that they came across some hard problems which they can't really ignore. One of them is the frame problem ( http://www-formal.stanford.edu/leora/fp.pdf ); another is the common sense problem.
Solutions to many of them (I believe) are either 1) huge brute-force power or 2) machine learning. And machine learning is a thing we can't seem to get very far with. Programming a computer to program itself: I can understand why that must be quite difficult to accomplish. So since the '80s, AI researchers have mainly focused on building expert systems: systems which can do a certain task much better than humans. But they lack many things that are very easy for humans (which is apparently called Moravec's paradox).
Anyway, the point I'm trying to get across, and I'm interested in hearing whether you agree or not, is that AI was/is very overrated. I doubt we can ever make a real artificially intelligent agent unless we can solve the machine learning problem for real. And I doubt whether that is ever truly possible.
Replies from: Daniel_Burfoot, DanielLC↑ comment by Daniel_Burfoot · 2014-12-08T23:02:55.953Z · LW(p) · GW(p)
And machine learning is a thing which we can't seem to get very far with.
Standard vanilla supervised machine learning (e.g. backprop neural networks and SVMs) is not going anywhere fast, but deep learning is really a new thing under the sun.
Replies from: Punoxysm↑ comment by Punoxysm · 2014-12-10T05:17:31.596Z · LW(p) · GW(p)
but deep learning is really a new thing under the sun.
On the contrary, the idea of making deeper nets is nearly as old as ordinary 2-layer neural nets; successful implementations date back to the late '90s in the form of convolutional neural nets, and they had another burst of popularity in 2006.
Advances in hardware, data availability, heuristics about architecture and training, and large-scale corporate attention have allowed the current burst of rapid progress.
This is both heartening, because the foundations of its success are deep, and tempering, because the limitations that have held it back before could resurface to some degree.
↑ comment by DanielLC · 2014-12-09T18:18:25.893Z · LW(p) · GW(p)
And I doubt whether that is ever truly possible.
It's possible. We're an example of that. The question is whether it's humanly possible.
There's a common idea of an AI being able to make another twice as smart as itself, which could make another twice as smart as itself, etc., causing an exponential increase in intelligence. But it seems just as likely that an AI could only make one half as smart as itself, in which case we'll never even be able to get the first human-level AI.
Replies from: ctintera↑ comment by ctintera · 2014-12-10T11:40:00.537Z · LW(p) · GW(p)
The example you give to prove plausibility is also a counterexample to the argument you make immediately afterwards. We know that less-intelligent or even non-intelligent things can produce greater intelligence because humans evolved, and evolution is not intelligent.
It's more a matter of whether we have enough time to dredge something reasonable out of the problem space. If we were smarter we could search it faster.
Replies from: DanielLC↑ comment by DanielLC · 2014-12-10T19:06:52.003Z · LW(p) · GW(p)
Evolution is an optimization process. It might not be "intelligent" depending on your definition, but it's good enough for this. Of course, that just means that a rather powerful optimization process occurred just by chance. The real problem is, as you said, it's extremely slow. We could probably search it faster, but that doesn't mean that we can search it fast.
comment by Kaura · 2014-12-10T14:54:19.086Z · LW(p) · GW(p)
Assuming for a moment that Everett's interpretation is correct and there will eventually be a way to very confidently deduce this (and that time, identity and consciousness work pretty much as described by Drescher, IIRC - there is no continuation of consciousness, just memories, and nothing meaningful separates your identity from your copies):
Should beings/societies/systems clever enough to figure this out (and with something like preferences or values) just seek to self-destruct if they find themselves in a sufficiently suboptimal branch, suffering or otherwise worse off than they plausibly could be? Committing to give up in case things go awry would lessen the impact of setbacks and increase the proportion of branches where everything is stellar, just due to good luck. Keep the best worlds, discard the rest, avoid a lot of hassle.
This is obviously not applicable to e.g. humanity as it is, where self-destruction on any level is inconvenient, if at all possible, and generally not a nice thing to do. But would it theoretically make sense for intelligences like this to develop, and maybe even have an overwhelming tendency to develop in the long term? What if this is one of the vast number of branches where everyone in the observable universe pretty much failed to have a good enough time and a bright enough future and just offed themselves before interstellar travel etc., because a sufficiently advanced civilization sees it's just not a big deal in an Everett multiverse?
(There's probably a lot that I've missed here as I have no deep knowledge regarding the MWI, and my reading history so far only touches on this kind of stuff in general, but yay stupid questions thread.)
Replies from: DanielFilan↑ comment by DanielFilan · 2014-12-11T00:01:00.949Z · LW(p) · GW(p)
Should beings/societies/systems clever enough to figure this out (and with something like preferences or values) just seek to self-destruct if they find themselves in a sufficiently suboptimal branch, suffering or otherwise worse off than they plausibly could be?
Not really. If you're in a suboptimal branch, but still doing better than if you didn't exist at all, then you aren't making the world better off by self-destructing regardless of whether other branches exist.
Committing to give up in case things go awry would lessen the impact of setbacks and increase the proportion of branches where everything is stellar, just due to good luck. Keep the best worlds, discard the rest, avoid a lot of hassle.
It would not increase the proportion (technically, you want to be talking about measure here, but the distinction isn't important for this particular discussion) of branches where everything is stellar - just the proportion of branches where everything is stellar out of the total proportion of branches where you are alive, which isn't so important. To see this, imagine you have two branches, one where things are going poorly and one where things are going great. The proportion of branches where things are going stellar is 1/2. Now suppose that the being/society/system that is going poorly self-destructs. The proportion of branches where things are going stellar is still 1/2, but now you have a branch where instead of having a being/society/system that is going poorly, you have no being/society/system at all.
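In symbols (a small sketch of the same two-branch example, writing μ for measure; the numbers are just the ones from the example above):

```latex
\[
\mu(\text{stellar}) = \tfrac{1}{2} \;\text{ both before and after the self-destruction}, \qquad
\mu(\text{stellar} \mid \text{alive}) \;\text{ goes from } \tfrac{1}{2} \text{ to } 1 .
\]
```

Self-destructing only changes the conditional quantity, not the total measure of stellar branches.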
Replies from: Kaura↑ comment by Kaura · 2014-12-11T16:53:18.869Z · LW(p) · GW(p)
Thanks! Ah, I'm probably just typical-minding like there's no tomorrow, but I find it inconceivable to place much value on the number of branches you exist in. The perceived continuation of your consciousness will still go on as long as there are beings with your memories in some branch: in general, it seems to me that if you say you "want to keep living", you mean you want there to be copies of you in some of the possible futures, waking up the next morning doing stuff present-you would have done, recalling what present-you thought yesterday, and so on (in addition you will probably want a low probability for this future to include significant suffering). Likewise, if you say you "want to see humanity flourish indefinitely", you want a future that includes your biological or cultural peers and offspring colonizing space and all that, remembering and cherishing many of the values you once had (sans significant suffering). To me it seems impossible to assign value to the number of MWI-copies of you, not least because there is no way you could even conceive of their number, or usually make meaningful ethical decisions where you weigh their amounts.* Instead, what matters overwhelmingly more is the probability of any given copy living a high-quality life.
just the proportion of branches where everything is stellar out of the total proportion of branches where you are alive
Yes, this is obvious of course. What I meant was exactly this, because from the point of view of a set of observers, eliminating the set of observers from a branch <=> rendering the branch irrelevant, pretty much.
which isn't so important.
To me it did feel like this is obviously what's important, and the branches where you don't exist simply don't matter - there's no one there to observe anything after all, or judge the lack of you to be a loss or morally bad (again, not applicable to individual humans).
If I learned today that I have a 1% chance to develop a maybe-terminal, certainly suffering-causing cancer tomorrow, and I could press a button to just eliminate the branches where that happens, I would not think I was committing a moral atrocity. I would not feel like I am killing myself just because part of my future copies never get to exist, nor would I feel bad for the copies of the rest of all people - no one would ever notice anything, vast amounts of future copies of current people would wake up just like they thought they would the next morning, and carry on with their lives and aspirations. But this is certainly something I should learn to understand better before anyone gives me a world-destroying cancer cure button.
*Which is one main difference when comparing this to regular old population ethics, I suppose.
Replies from: DanielFilan↑ comment by DanielFilan · 2014-12-12T01:07:35.715Z · LW(p) · GW(p)
To me it seems impossible to assign value to the number of MWI-copies of you, not least because there is no way you could even conceive of their number, or usually make meaningful ethical decisions where you weigh their amounts.
As it happens, you totally can (it's called the Born measure, and it's the same number that people used to think of as the probability of different branches occurring), and agents that satisfy sane decision-theoretic criteria weight branches by their Born measure - see this paper for the details.
I would not feel like I am killing myself just because part of my future copies never get to exist, nor would I feel bad for the copies of the rest of all people - no one would ever notice anything, vast amounts of future copies of current people would wake up just like they thought they would the next morning, and carry on with their lives and aspirations.
This is a good place to strengthen intuition, since if you replace "killing myself" with "torturing myself", it's still true that none of your future selves who remain alive/untortured "would ever notice anything, vast amounts of future copies of [yourself] would wake up just like they thought they would the next morning, and carry on with their lives and aspirations". If you arrange for yourself to be tortured in some branches and not others, you wake up just as normal and live an ordinary, fulfilling life - but you also wake up and get tortured. Similarly, if you arrange for yourself to be killed in some branches and not others, you wake up just as normal and live an ordinary, fulfilling life - but you also get killed (which is presumably a bad thing even or especially if everybody else also dies).
One way to intuitively see that this way of thinking is going to get you in trouble is to note that your preferences, as stated, aren't continuous as a function of reality. You're saying that universes where (1-x) proportion of branches feature you being dead and x proportion of branches feature you being alive are all equally fine for all x > 0, but that a universe where you are dead with proportion 1 and alive with proportion 0 would be awful (well, you didn't actually say that, but otherwise you would be fine with killing some of your possible future selves in a classical universe). However, there is basically no difference between a universe where (1-epsilon) proportion of branches feature you being dead and epsilon proportion of branches feature you being alive, and a universe where 1 proportion of branches feature you being dead and 0 proportion of branches feature you being alive (since don't forget, MWI looks like a superposition of waves, not a collection of separate universes). This is the sort of thing which is liable to lead to crazy behaviour.
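A tiny numerical sketch of that discontinuity (my own illustration with made-up utility numbers, not anything from the thread or the linked paper):

```python
# Two ways of valuing a universe that is a superposition of an "alive" branch
# with measure x and a "dead" branch with measure 1 - x.

U_ALIVE = 100.0  # utility of the branch where you live a fulfilling life (made-up number)
U_DEAD = 0.0     # utility of the branch where you are dead (made-up number)

def born_weighted_value(x: float) -> float:
    """Expected utility weighted by the Born measure: continuous in x."""
    return x * U_ALIVE + (1 - x) * U_DEAD

def survival_conditional_value(x: float) -> float:
    """'Only branches where I exist count': constant for x > 0, then jumps at x = 0."""
    return U_ALIVE if x > 0 else U_DEAD

for x in [0.5, 1e-3, 1e-9, 0.0]:
    print(x, born_weighted_value(x), survival_conditional_value(x))
# The Born-weighted value shrinks smoothly toward 0 as x -> 0, while the
# survival-conditional value stays at 100 for every x > 0 and drops to 0 only
# exactly at x = 0, which is the discontinuity described above.
```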
Replies from: ike↑ comment by ike · 2014-12-16T22:11:16.926Z · LW(p) · GW(p)
I'm sorry, but "sort of thing which is liable to lead to crazy behaviour" won't cut it. Could you give an example of crazy behaviour with this preference ordering? I still think this approach (not counting measure as long as some of me exists) feels right and is what I want. I'm not too worried about discontinuity at only x=0 (and if you look at larger multiverses, x probably never equals 0.)
To argue over a specific example: if I set up something that chooses a number randomly with quantum noise, then buys a lottery ticket, then kills me (in my sleep) only if the ticket doesn't win, then I assign positive utility to turning the machine on. (Assuming I don't give a damn about the rest of the world who will have to manage without me.) Can you turn this into either an incoherent preference, or an obviously wrong preference?
(Personally, I've thought about the TDT argument for not doing that; because you don't want everyone else to do it and create worlds in which only 1 person who would do it is left in each, but I'm not convinced that there are a significant number of people who would follow my decision on this. If I ever meet someone like that, I might team up with them to ensure we'd both end up in the same world. I haven't seen any analysis of TDT/anthropics applied to this problem, perhaps because other people care more about the world?)
Replies from: DanielFilan, DanielFilan↑ comment by DanielFilan · 2014-12-17T00:36:51.275Z · LW(p) · GW(p)
Another way to look at it is this: imagine you wake up after the bet, and don't yet know whether you are going to quickly be killed or whether you are about to recieve a large cash prize. It turns out that your subjective credence for which branch you are in is given by the Born measure. Therefore, (assuming that not taking the bet maximises expected utility in the single-world case), you're going to wish that you hadn't taken the bet immediately after taking it, without learning anything new or changing your mind about anything. Thus, your preferences as stated either involve weird time inconsistencies, or care about whether there's a tiny sliver of time between the worlds branching off and being killed. At any rate, in any practical situation, that tiny sliver of time is going to exist, so if you don't want to immediately regret your decision, you should maximise expected utility with respect to the Born measure, and not discount worlds where you die.
↑ comment by DanielFilan · 2014-12-16T23:39:34.413Z · LW(p) · GW(p)
Your preference already feels "obviously wrong" to me, and I'll try to explain why. If we imagine that only one world exists, but we don't know how it will evolve, I wouldn't take the analogue of your lottery ticket example, and I suspect that you wouldn't either. The reason that I wouldn't do this is because I care about the possible future worlds where I would die, despite the fact that I wouldn't exist there (after very long). I'm not sure what other reason there would be to reject this bet in the single-world case. However, you are saying that you don't care about the actual future worlds where you die in the many-worlds case, which seems bizarre and inconsistent with what I imagine your preferences would be in the single-world case. It's possible that I'm wrong about what your preferences would be in the single-world case, but then you're acting according to the Born rule anyway, and whether the MWI is true doesn't enter into it.
(EDIT: that last sentence is wrong, you aren't acting according to the Born rule anyway.)
Regarding my point about discontinuity, it's worth noting that to know whether x = 0 or x > 0, you need infinitely precise knowledge of the wave function. It strikes me as unreasonable and off-putting that no finite amount of information about the state of the universe can discern between one universe which you think is totally fantastic and another universe which you think is terrible and awful. That being said, I can imagine someone being unpersuaded by this argument. If you are willing to accept discontinuity, then you get a theory where you are still maximising expected utility with respect to the Born rule, but your utilities can be infinite or infinitesimal.
On a slightly different note, I would highly recommend reading the paper which I linked (most of which I think is comprehensible without a huge amount of technical background), which motivates the axioms you need for the Born rule to work, and dismotivates other decision rules.
EDIT: Also, I'm sorry about the "sort of thing which is liable to lead to crazy behaviour" thing, it was a long comment and my computer had already crashed once in the middle of composing it, so I really didn't want to write more.
Replies from: ike↑ comment by ike · 2014-12-17T05:10:06.249Z · LW(p) · GW(p)
I downloaded the paper you linked to and will read it shortly. I'm totally sympathetic to the "didn't want to make a long comment longer" excuse, having felt that way many times myself.
I agree that in the single-world case I wouldn't want to do it. That's not because I care about the single world without me per se (as in caring for the people in the world), but because I care about myself, who would not exist with ~1 probability. In a multiverse, I still exist with ~1 probability. You can argue that I can't know for sure that I live in a multiverse, which is one of the reasons I'm still alive in your world (the main reason being it's not practical for me right now, and I'm not really confident enough to bother researching and setting something like that up). However, you also don't know that anything you do is safe, by which I mean things like driving, walking outside, etc. (I'd say those things are far more rational in a multiverse, anyway, but even people who believe in a single world still do these things.)
Another reason I don't have a problem with discontinuity is that the whole problem seems only to arise when you have an infinite number of worlds, and I just don't feel like that argument is convincing.
I don't think you need infinite knowledge to know whether x=0 or x>0, especially if you give some probability to higher level multiverses. You don't need to know for sure that x>0 (as you can't know anyway), but you can have 99.9% confidence that x>0 rather easily, conditional on MWI being true. As I explained, that is enough to take risks.
If I wake up after, in my case that I laid out, that would mean that I won, as I specified I would be killed while asleep. I could even specify that the entire lotto picking, noise generation, and checking is done while I sleep, so I don't have to worry about it. That said, I don't think the question of my subjective expectation of no longer existing is well-defined, because I don't have a subjective experience if I no longer exist. If I am cloned, then told one of me is going to be vaporized without any further notice, and it happens fast enough not to have them feel anything, then my subjective expectation is 100% to survive. That's different from the torture case you mentioned above, where I expect to survive, and have subjective experiences. I think we do have some more fundamental disagreement about anthropics, which I don't want to argue over until I hash out my viewpoint more. (Incidentally, it seemed to me that Eliezer agrees with me at least partly, from what he writes in http://lesswrong.com/lw/14h/the_hero_with_a_thousand_chances/:
"What would happen if the Dust won?" asked the hero. "Would the whole world be destroyed in a single breath?"
Aerhien's brow quirked ever so slightly. "No," she said serenely. Then, because the question was strange enough to demand a longer answer: "The Dust expands slowly, using territory before destroying it; it enslaves people to its service, before slaying them. The Dust is patient in its will to destruction."
The hero flinched, then bowed his head. "I suppose that was too much to hope for; there wasn't really any reason to hope, except hope... it's not required by the logic of the situation, alas..."
I interpreted that as saying that you can only rely on the anthropic principle (and super quantum psychic powers) if you die without pain.)
I'm actually planning to write a post about Big Worlds, anthropics, and some other topics, but I've got other things and am continuously putting it off. Eventually. I'd ideally like to finish some anthropics books and papers, including Bostrom's, first.
Replies from: DanielFilan, DanielFilan↑ comment by DanielFilan · 2014-12-17T07:52:55.444Z · LW(p) · GW(p)
Another, more concise way of putting my troubles with discontinuity: I think that your utility function over universes should be a computable function, and the computable functions are continuous.
Also - what, you have better things to do with your time than read long academic papers about philosophy of physics right now because an internet stranger told you to?!
↑ comment by DanielFilan · 2014-12-17T07:43:04.369Z · LW(p) · GW(p)
In the single-world case, I wouldn't want to do it. That's not because I care about the single world without me per se (as in caring for the people in the world), but because I care about myself who would not exist with ~1 probability.
Here's the thing: you obviously think that you dying is a bad thing. You apparently like living. Even if the probability were 20-80 of you dying, I imagine you still wouldn't take the bet (in the single-world case) if the reward were only a few dollars, even though you would likely survive. This indicates that you care about possible futures where you don't exist - not in the sense that you care about people in those futures, but that you count those futures in your decision algorithm, and weigh them negatively. By analogy, I think you should care about branches where you die - not in the sense that you care about the welfare of the people in them, but that you should take those branches into account in your decision algorithm, and weigh them negatively.
Another reason I don't have a problem with discontinuity is that the whole problem seems only to arise when you have an infinite number of worlds, and I just don't feel like that argument is convincing.
I'm not sure what you can mean by this comment, especially "the whole problem". My arguments against discontinuity still apply even if you only have a superposition of two worlds, one with amplitude sqrt(x) and another with amplitude sqrt(1-x).
I don't think you need infinite knowledge to know whether x=0 or x>0, especially if you give some probability to higher level multiverses.
... I promise that you aren't going to be able to perform a test on a qubit that you can expect to tell you with 100% certainty that x > 0, even if you have multiple identical qubits.
You don't need to know for sure that x>0 (as you can't know anyway), but you can have 99.9% confidence that x>0 rather easily, conditional on MWI being true. As I explained, that is enough to take risks.
This wasn't my point. My point was that your preferences make huge value distinctions between universes that are almost identical (and in fact arbitrarily close to identical). Even though your value function is technically a function of the physical state of the universe, it's like it may as well not be, because arbitrary amounts of knowledge about the physical state of the universe still can't distinguish between types of universes which you value very different amounts. This intuitively seems irrational and crazy to me in and of itself, but YMMV.
If I wake up after, in my case that I laid out, that would mean that I won, as I specified I would be killed while asleep. I could even specify that the entire lotto picking, noise generation, and checking is done while I sleep, so I don't have to worry about it.
I find it highly implausible that this should make a difference for your decision algorithm. Imagine that you could extend your life in all branches by a few seconds in which you are totally blissful. I imagine that this would be a pleasant change, and therefore preferable. You can then contemplate what will happen next in your pleasant state, and if my arguments go through, this would mean that your original decision was bad. So, we have a situation where you used to prefer taking the bet to not taking the bet, but when we made the bet sweeter, you now prefer not taking the bet. This seems irrational.
That said, I don't think the question of my subjective expectation of no longer existing is well-defined, because I don't have a subjective experience if I no longer exist.
I think it is actually well-defined? Right now, even if I were told that no multiverse exists, I would be pretty sure that I would continue living, even though I wouldn't be having experiences if I were dead. I think the problem here is that you are confusing my invocation of subjective probabilities (while you're pondering what will happen next in your branch) of what will objectively happen next with a statement about subjective experiences later.
I think we do have some more fundamental disagreement about anthropics, which I don't want to argue over until I hash out my viewpoint more.
I would be interested in reading your viewpoints about anthropics, should you publish them. That being said, given that you don't take the suicide bet in the single-world case, I think that we probably don't.
comment by Interpolate · 2014-12-23T03:07:40.850Z · LW(p) · GW(p)
These aren't so much "stupid" questions as ones which have no clear answer, and I'm curious what people here have to say about this.
-Why should (or shouldn't) one aspire to be "good" in the sense of prosocial, altruistic etc.?
-Why should (or shouldn't) one attempt to be as honest as possible in their day to day lives?
I have strong altruistic inclinations because that's how I'm predisposed to be, and often because it coincides with my values; other people's suffering upsets me and I would prefer to live in a world in which people are kind and supportive of each other. I want to be nice, but I don't want to want to be nice; I can't find strong rational reasons to be altruistic.
I'm honest with people I voluntarily interact with, but ambivalent about lying in general. For example, I'm currently on a sort of intermittent fasting regimen, and if someone I'm not particularly familiar with offers food, I tend to say "I've already eaten" rather than giving my real reason for abstaining. I've seen it argued that lying to others will make you more likely to lie to yourself, but I'm unconvinced this is the case.
comment by knb · 2014-12-18T05:37:59.264Z · LW(p) · GW(p)
I have a vague notion from reading science fiction stories that black holes may be extremely useful for highly advanced (as in, post-singularity/space-faring) civilizations. For example, IIRC, in John C. Wright's Golden Age series, a colony formed near a black hole became fantastically wealthy.
I did some googling, but all I found was that they would be great at cooling computer systems in space. That seems useful, but I was expecting something more dramatic. Am I missing something?
Replies from: alienist, Lumifer↑ comment by alienist · 2014-12-19T05:50:41.321Z · LW(p) · GW(p)
I did some googling, but all I found was that they would be great at cooling computer systems in space.
When you're sufficiently advanced, cooling your systems, technically disposing of entropy, is one of the main limiting constraints on your system. Also, if you throw matter into a black hole just right, you can get its equivalent (or half its equivalent, I forget which) out in energy.
Edit: thinking about it, it is half the mass.
Replies from: orthonormal↑ comment by orthonormal · 2014-12-26T22:26:50.310Z · LW(p) · GW(p)
Also if you throw matter into a black hole just right you can get its equivalent (or half its equivalent I forgot which) out in energy.
Not in useful energy, if you're thinking of using Hawking radiation; it comes out in very high-entropy form. I was so sad when I realized that the "Hawking reactor" I'd invented in fifth grade would violate the Second Law of Thermodynamics.
Replies from: alienist, JoshuaZ↑ comment by alienist · 2014-12-27T01:49:34.784Z · LW(p) · GW(p)
I wasn't talking about Hawking radiation. If I throw matter into a black hole just right, I can get half the mass to come out in low-entropy photons. That's why the brightest objects in the universe are black holes that are currently eating something.
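For a sense of scale, the standard textbook figures for the fraction of rest-mass energy radiated away by matter spiralling down to the innermost stable circular orbit (these numbers are added context, not from the thread) are roughly:

```latex
\[
\eta_{\text{Schwarzschild}} = 1 - \sqrt{8/9} \approx 5.7\%, \qquad
\eta_{\text{extremal Kerr}} = 1 - 1/\sqrt{3} \approx 42\%,
\]
```

so the exact fraction released per unit of accreted rest mass depends on the hole's spin.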
Replies from: orthonormal↑ comment by orthonormal · 2014-12-27T02:36:39.423Z · LW(p) · GW(p)
Ah, cool! Forgot about how quasars are hypothesized to work.
comment by Evan_Gaensbauer · 2014-12-16T05:18:37.385Z · LW(p) · GW(p)
[Meta]
In the last 'stupid questions' thread, I posed the suggestion that I write a post called "Non-Snappy Answers to Stupid Questions", which would be a summary post with a list of the most popular stupid questions asked, or stupid questions with popular answers. That is, I'm taking how many upvotes each pair of questions and answers got as an indicator of how many people care about them, or how many people at least thought the answer to a question was a good one. I'm doing this so there will be a single spot where interesting answers can be found, rather than members of LessWrong having to dig through hundreds of comments on multiple threads to discover useful answers to simple questions.
I'll publish this post at the end of December, or beginning of January, when this thread is complete. It could be updated in the future, but, by that point, it will include questions asked from ten separate threads over the course of more than a year, which is a lot. It will include this thread, which will be the most recent.
My question is: how should I organize it? Should I sort questions by topic? By how popular the question was? By how popular the answer was? By some other means? Leave your feedback below.
comment by [deleted] · 2014-12-15T15:56:52.918Z · LW(p) · GW(p)
Back in 2010, Will Newsome posted this as a joke:
Sure, everything you [said] made sense within your frame of reference, but there are no privileged frames of reference. Indeed, proving that there are privileged frames of reference requires a privileged frame of reference and is thus an impossible philosophical act. I can't prove anything I just said, which proves my point, depending on whether you think it did or not.
But isn't it actually true?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2014-12-15T16:55:42.513Z · LW(p) · GW(p)
What would I do differently if I believed it was true, or wasn't?
What expectations about future events would I have in one case, that I wouldn't have in the other?
What beliefs about past events would I have in one case, that I wouldn't have in the other?
↑ comment by [deleted] · 2014-12-15T18:54:19.301Z · LW(p) · GW(p)
I understand that this has no decision-making value. I'm only interested in the philosophical meaning of this point.
Replies from: TheOtherDave, IlyaShpitser↑ comment by TheOtherDave · 2014-12-16T01:09:45.315Z · LW(p) · GW(p)
Hm.
Can you say more about what you're trying to convey by "philosophical meaning"?
For example, what is the philosophical meaning of your question?
Replies from: None↑ comment by [deleted] · 2014-12-16T15:52:47.504Z · LW(p) · GW(p)
That if we are to be completely intellectually honest and rigorous, we must accept complete skepticism.
Replies from: TheOtherDave, Viliam_Bur↑ comment by TheOtherDave · 2014-12-16T21:36:12.867Z · LW(p) · GW(p)
Hm.
OK. Thanks for replying, tapping out here.
↑ comment by Viliam_Bur · 2014-12-16T20:35:21.718Z · LW(p) · GW(p)
Maybe we could honestly accept that impossible demands of rigor are indeed impossible. And focus on what is possible.
You can't convince a rock to agree with you on something. There is still some chance with humans.
Replies from: None↑ comment by [deleted] · 2014-12-28T18:16:19.850Z · LW(p) · GW(p)
The Tortoise's mind needs the dynamic of adding Y to the belief pool when X and (X→Y) are previously in the belief pool. If this dynamic is not present—a rock, for example, lacks it—then you can go on adding in X and (X→Y) and (X⋀(X→Y))→Y until the end of eternity, without ever getting to Y.
This appears to be a circular argument.
Maybe we could honestly accept that impossible demands of rigor are indeed impossible. And focus on what is possible.
This is why I wrote this:
I understand that this has no decision-making value.
↑ comment by IlyaShpitser · 2014-12-16T00:09:10.122Z · LW(p) · GW(p)
It means you should learn to like learning other languages/ways of thinking.
comment by Punoxysm · 2014-12-10T01:58:09.239Z · LW(p) · GW(p)
If the Bay Area has such a high concentration of rationalists, shouldn't it have more-rational-than-average housing, transportation and legislation?
Sadly, I know the stupid answers to this stupid question. I just want to vent a bit.
Replies from: NancyLebovitz, fubarobfusco, Lumifer, IlyaShpitser↑ comment by NancyLebovitz · 2014-12-10T04:19:06.033Z · LW(p) · GW(p)
The Bay Area has a high concentration of rationalists compared to most places, but I don't think it's very high compared to the local population. How many rationalists are we talking about?
↑ comment by fubarobfusco · 2014-12-10T02:13:35.059Z · LW(p) · GW(p)
Are rationalists more or less likely than non-rationalists to participate in local government?
↑ comment by IlyaShpitser · 2014-12-12T19:21:34.304Z · LW(p) · GW(p)
Should start with toothpaste first.
comment by advancedatheist · 2014-12-08T19:44:20.270Z · LW(p) · GW(p)
Did organized Objectivist activism, at least in some of its nuttier phases, offer to turn its adherents who get it right into a kind of superhuman entity? I guess you could call such enhanced people "Operating Objectivists," analogous to the enhanced state promised by another cult.
Interestingly enough Rand seems to make a disclaimer about that in her novel Atlas Shrugged. The philosophy professor character Hugh Akston says of his star students, Ragnar Danneskjold, John Galt and Francisco d'Anconia:
"Don't be astonished, Miss Taggart," said Dr. Akston, smiling, "and don't make the mistake of thinking that these three pupils of mine are some sort of superhuman creatures. They're something much greater and more astounding than that: they're normal men—a thing the world has never seen—and their feat is that they managed to survive as such. It does take an exceptional mind and a still more exceptional integrity to remain untouched by the brain-destroying influences of the world's doctrines, the accumulated evil of centuries—to remain human, since the human is the rational."
But then look at what Rand shows these allegedly "normal men" can do as Operating Objectivists:
Hank Rearden, a kind of self-trained Operating Objectivist who never studied under Akston, can design a new kind of railroad bridge in his mind which exploits the characteristics of his new alloy, even though he has never built a bridge before.
Francisco d'Anconia can deceive the whole world as he depletes his inherited fortune while making everyone believe that he spends his days as a playboy pickup artist, when in fact he has lived without sex since his youthful sexual relationship with Dagny.
John Galt can build a motor which violates the conservation of energy and the laws of thermodynamics. Oh, and he can also confidently master Dagny's unexpected intrusion into Galt's Gulch despite his secret crush on her, his implied adult virginity and his lack of an adult man's skill set for handling women. (You need life experience for that, not education in philosophy.) On top of that, he can survive torture without suffering from post-traumatic stress symptoms.
So despite Rand's disclaimer, if you view Atlas Shrugged as "advertising" for the abilities Rand's philosophy promises as it unlocks your potentials as a "normal man," then the Objectivist organizations which work with this idea implicitly do seem to offer to turn you into a "superhuman creature."
Replies from: Viliam_Bur, fubarobfusco, alienist, NancyLebovitz, NancyLebovitz, gattsuru, mgin, buybuydandavis↑ comment by Viliam_Bur · 2014-12-08T23:10:49.158Z · LW(p) · GW(p)
Seems to me that Rand's model is similar to LessWrong's "rationality as non-self-destruction".
Objectivism in the novels doesn't give the heroes any positive powers. It merely helps them avoid some harmful beliefs and behaviors, which are extremely common. Not burdened by these negative beliefs and behaviors, these "normal men" can fully focus on what they are good at, and if they have high intelligence and make the right choices, they can achieve impressive results.
(The harmful beliefs and behaviors include: feeling guilty for being good at something, focusing on exploiting other people instead of developing one's own skills.)
Hank Rearden's design of a new railroad bridge was completely unrelated to his political beliefs. It was a consequence of his natural talent and hard work, perhaps some luck. The political beliefs only influenced his decision of what to do with the invented technology. I don't remember what exactly were his options, but I think one of them was "archive the technology, to prevent changes in the industry, to preserve existing social order", and as a consequence of his beliefs he refused to consider this option. And even this was before he became a full Objectivist. (The only perfect Objectivist in the novel is Galt; and perhaps the people who later accept Galt's views.)
Francisco d'Anconia's fortune, as you wrote, was inherited. That's a random factor, unrelated to Objectivism.
John Galt's "magical" motor was also a result of his natural talent and hard work, plus some luck. The political beliefs only influenced his decision to hide the motor from public, using a private investor and a secret place.
Violating the law of thermodynamics, and surviving the torture without damage... that's fairy-tale stuff. But I think none of them is an in-universe consequence of Objectivism.
So, what exactly does Objectivism (or Hank Rearden's beliefs, which are partial Objectivism plus some compartmentalization) cause, in-universe?
It makes the heroes focus on their technical skills, and the more enlightened heroes on keeping their technical inventions for themselves. As opposed to attempting a political career or serving the existing political powers. Instead of networking, Rearden focuses on studying metal. Instead of donating the magical machine to the government, Galt keeps it secret. Instead of having his fortune taken by the government, d'Anconia destroys it... probably because of a lack of a smarter alternative (or maybe he somehow secretly preserves a part of his fortune, and ostentatiously destroys the rest to draw away attention; I don't remember the details here).
Without Objectivism, the heroes would most likely become clueless nerds serving the elite, because they couldn't win the political fight (it requires a completely different set of skills, ones that people like Mouch are experts in), but they also wouldn't understand that the system is intentionally designed against them, so they would spend their energy in a futile fight, winning a few battles but losing the war.
Understanding the system allows one to focus on finding an "out of the box" solution. John Galt's victory is his ability to use his natural talent and work to devise a solution where he can live without political masters. He is economically independent, thanks to his magical motor, but also mentally independent. (If we removed the magic, his victory would be understanding the system, and the ability to resist its emotional blackmail and optimize for himself.)
The lack of this understanding made Rearden vulnerable to blackmail from his wife, and in a way cost Eddie Willers his life. (And James Taggart his sanity, if I remember correctly.)
tl;dr: (According to Rand) Objectivism makes you able to understand how the system works, so you can more realistically optimize for your values. Objectivism doesn't give you talent, skills, or luck; but it gives you a chance to use them more efficiently, instead of wasting them in a fight you cannot win.
EDIT: In real life, I expect that Objectivist training could make people more aware of their goals and make them negotiate harder. Maybe improve their work ethic.
↑ comment by fubarobfusco · 2014-12-08T21:09:28.556Z · LW(p) · GW(p)
Did organized Objectivist activism, at least in some of its nuttier phases, offer to turn its adherents who get it right into a kind of superhuman entity? I guess you could call such enhanced people "Operating Objectivists," analogous to the enhanced state promised by another cult.
Not that I'm aware of, but you might also be interested in A. E. Van Vogt's "Null-A" novels, which attempted to do this for a fictionalized version of Korzybski's General Semantics.
(Van Vogt later did become involved in Scientology, as did his (and Hubbard's) editor John W. Campbell.)
↑ comment by alienist · 2014-12-11T04:56:30.038Z · LW(p) · GW(p)
On top of that, he can survive torture without suffering from post-traumatic stress symptoms.
PTSS almost seems like a culture-bound syndrome of the modern West. In particular, there don't seem to be any references to it before WWI, and even there (and in subsequent wars) all the references seem to be from the Western allies. Furthermore, the reaction to "shell shock", as it was then called, during WWI suggests that this was something new that the established structures didn't know how to deal with.
Replies from: NancyLebovitz, bogus↑ comment by NancyLebovitz · 2014-12-11T17:54:15.689Z · LW(p) · GW(p)
Not everyone who's had traumatic experiences has PTSD.
The scientists have a theory, and it has to do with the root causes of PTSD, previously undocumented. As compared with the resilient Danish soldiers, all those who developed PTSD were much more likely to have suffered emotional problems and traumatic events prior to deployment. In fact, the onset of PTSD was not predicted by traumatic war experiences but rather by childhood experiences of violence, especially punishment severe enough to cause bruises, cuts, burns and broken bones. PTSD sufferers were also more likely to have witnessed family violence and to have experienced physical attacks, stalking or death threats by a spouse. They also more often had past experiences that they could not, or would not, talk about.
↑ comment by bogus · 2014-12-11T09:32:20.232Z · LW(p) · GW(p)
PTSS almost seems like a culture-bound syndrome of the modern West.
There are significant confounders here, as modern science-based psychology got started around the same time - and WWI really was very different from earlier conflicts, not least in its sheer scale. But the idea is nonetheless intriguing; the West really is quite different from traditional societies, along lines that could plausibly make folks more vulnerable to traumatic shock.
↑ comment by NancyLebovitz · 2014-12-08T21:22:33.085Z · LW(p) · GW(p)
For what it's worth, Rand was an unusually capable person in her specialty (she wrote two popular and somewhat politically influential novels in her second language), but still not in the same class as her heroes.
I'm not sure you've got the bit about Rearden right. I don't think there's any evidence that he came up with the final design for the bridge. There's a mention that he worked with a team to discover Rearden metal, and presumably he also had an engineering team. The point was that he (presumably) knew enough engineering to come up with something plausible, and that he was fascinated enough by producing great things to be distracted from something major going wrong (I don't remember what).
I have no idea whether Rand knew Galt's engine was physically impossible, though I think she should have, considering that other parts of the book were well-researched. Dagny's situation at Taggart Transcontinental was probably typical for an Operations vice-president in a family-owned business. The description of her doing cementless masonry matched a book on the subject. Atlas Shrugged was the only place I saw the possibility of shale oil mentioned until, decades later, it turned out to be a viable technology.
Replies from: CBHacking
↑ comment by CBHacking · 2014-12-08T22:27:43.567Z · LW(p) · GW(p)
The research fail that jumped out at me hardest in Atlas Shrugged was the idea that so many people would consider a metal both stronger and lighter than steel physically impossible. By the time the book was published, not only was titanium fairly well understood, it was also being widely used for military and (some; whatever could be spared from Cold War efforts) commercial purposes. Its properties don't exactly match Rearden Metal's (even ignoring the color and other mostly-unimportant characteristics), but they're close enough that it should have been obvious that such materials are completely possible. Of course, that part of the book also talks about making steel rails last longer by making them denser, which seems completely bizarre to me; there are ways to increase the hardness of steel, but they involve things like heat-treating it.
TL;DR: I'm not sure I'd call the book "well-researched" as a whole, though some parts may well have been.
Replies from: Alsadius
↑ comment by NancyLebovitz · 2014-12-12T22:10:27.124Z · LW(p) · GW(p)
The three people Akston was talking about didn't include Rearden. They were D'Anconia, Galt, and Danneskjold (the mostly off-stage pirate). I feel as though I've lost not just geek points but Objectivist points, both for forgetting something from the book and for going along with everyone else who got it wrong.
The remarkable thing about Galt and torture isn't that he didn't get PTSD, it's that he completely kept his head, and over-awed his torturers. He broke James Taggart's mind, not that Taggart's mind was in such great shape to begin with.
↑ comment by gattsuru · 2014-12-08T21:27:18.622Z · LW(p) · GW(p)
A number of these matters seem more like narrative or genre conveniences: Francisco acts as a playboy in the same way Bruce Wayne does, and Rearden's bridge development passes a lot of the work to his specialist engineers (much as Rearden metal had a team of scientists skeptically helping him) while pretending the man is still a one-man designer (among other handwaves). At the same time, Batman is not described as a superhuman engineer or playboy, nor would he act as those types of heroes do. I'm also not sure we can know what long-term negative repercussions John Galt experiences, given the length of the book; not all people who experience torture display clinically relevant post-traumatic stress symptoms, and many who do show them only sporadically. His engine is based on now-debunked theories of physics that weren't so obviously thermodynamics-violating at the time, similarly to Project Xylophone.
These men are intended to represent top-of-field capability from the perspective of a post-Soviet writer who knew little about their fields and could easily research less. Many of the people who show up under Galt's tutelage are similarly exceptionally skilled, but even more are not so hugely capable.
On the other hand, the ability of her protagonists to persuade others and evaluate the risk of getting shot starts at superhuman and quickly becomes ridiculous.
On the gripping hand, I'm a little cautious about emphasizing fictional characters and acknowledgedly Heroic abilities as evidence, especially when the author wrote a number of non-fiction philosophy texts related to this topic.
↑ comment by buybuydandavis · 2014-12-08T20:36:29.990Z · LW(p) · GW(p)
Not quite in the spirit of admitting ignorance, but since it's in this thread, I'll answer it.
Did organized Objectivist activism, at least in some of its nuttier phases, offer to turn its adherents who get it right into a kind of superhuman entity? ...
another cult
No.
So despite Rand's disclaimer, if you view....
So despite what Rand or any Objectivist ever said or did, if you choose to view Objectivism as a nutty cult, you can.
If you were actually interested in why Rand's characters are the way they are, you could read her book on art, "The Romantic Manifesto". A quick Google search on the book would probably give you your answer.