I don't understand that screenshot at all (maybe the resolution is too low?), but from your description it sounds in a similar vein to Zendo and Eleusis and Penultima, which you could get ideas from. Yours seems different though, and I'd be curious to know more details. I tried implementing some single-player variants of Zendo five years ago, though they're pretty terrible (boring, no graphics, probably not useful for training rationality).
I do think there's some potential for rationality improvements from games, though insofar as they're optimized for training rationality, they won't be as fun as games optimized purely for being fun. I also think it'll be very difficult to achieve transfer to life-in-general, for the same reason that learning to ride a bike doesn't train you to move your feet in circles every time you sit in a chair. ("I pedal when I'm on a bike, to move forward; why would I pedal when I'm not on a bike, and my goal isn't to move forward? I reason this way when I'm playing this game, to get the right answer; why would I reason this way when I'm not playing the game, and my goal is to seem reasonable or to impress people or to justify what I've already decided?")
Don't let them tell us stories. Don't let them say of the man sentenced to death "He is going to pay his debt to society," but: "They are going to cut off his head." It looks like nothing. But it does make a little difference.
-- Camus
"From the fireside house, President Reagan suddenly said to me, 'What would you do if the United States were suddenly attacked by someone from outer space? Would you help us?'
"I said, 'No doubt about it.'"
"He said, 'We too.'"
- Mikhail Gorbachev, from an interview
You assume the utility of getting neither is 0 both before and after the transformation. You need to transform that utility too, e.g. from 0 to 4.
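To illustrate with made-up numbers (I don't have the parent's exact ones): if you apply a positive affine transform, say u' = u/2 + 4, to some outcomes but leave "getting neither" at 0, lottery rankings can flip.

u = {"A": 10, "B": 6, "neither": 0}   # made-up utilities
t = lambda x: x/2 + 4                 # a positive affine transform; maps 0 to 4
# Lottery 1: 70% A, 30% neither.  Lottery 2: B for sure.
eu1, eu2 = 0.7*u["A"] + 0.3*u["neither"], u["B"]               # 7.0 > 6.0
eu1_t, eu2_t = 0.7*t(u["A"]) + 0.3*t(u["neither"]), t(u["B"])  # 7.5 > 7.0: ranking preserved
eu1_bad = 0.7*t(u["A"]) + 0.3*u["neither"]                     # 6.3 < 7.0: ranking flips if "neither" stays at 0
print(eu1 > eu2, eu1_t > eu2_t, eu1_bad > eu2_t)               # True True False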
Lost 35lbs in the past 4 months, currently at 11.3% body fat, almost at my goal of ~9% body fat, at which point I'll start bulking again. Average body fat is ~26% for men in my age group. My FFMI (= non-fat mass / height^2) is still somewhere above 95th percentile for men in my age group.
Got an indoor treadmill, and have since walked 1100 km in the past 2 months: 18 km/day, 4.5 hours/day on average. Would definitely recommend this.
Scored 2 points short of perfect on the GRE. Got a 3.8 average for college courses over the past year.
You usually avoid unlimited liability by placing a stop order to cover your position as soon as the price goes sufficiently high. Or for instance you can bound your losses by including a term in the contract which says that instead of giving back the stock you borrowed and sold, you can pay a certain price.
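Rough arithmetic for the stop-order case (illustrative prices; in practice gaps and slippage mean the stop can fill worse than its trigger, so the bound is only approximate):

entry = 100.0   # price at which you shorted the stock
stop = 120.0    # stop order triggers a buy-to-cover around this price
final = 500.0   # where the price eventually goes
loss_without_stop = final - entry          # 400 per share, unbounded in principle
loss_with_stop = min(final, stop) - entry  # ~20 per share, roughly capped at stop - entry
print(loss_without_stop, loss_with_stop)   # 400.0 20.0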
Introspecting, the way I remember this is that 1 is a simple number, and type 1 errors are errors that you make by being stupid in a simple way, namely by being gullible. 2 is a more sophisticated number, and type 2 errors are ones you make by being too skeptical, which is a more sophisticated type of stupidity. I do most simple memorization (e.g. memorizing differentiation rules) with this strategy of "rationalizing why the answer makes sense". I think your method is probably better for most people, though.
Whether they believe your confidence vs whether they believe their own evidence about your value. If a person is confident, either he's low-value and lying about it, or he's high-value and honest. The modus ponens/tollens description is unclear, I think I only used it because it's a LW shibboleth. (Come to think of it, "shibboleth" is another LW shibboleth.)
Sunlight increases risk of melanoma but decreases risk of other, more deadly cancers. If you're going to get, say, 3 times your usual daily sunlight exposure, then sunscreen is probably a good idea, but otherwise it's healthier to go without. I'd guess a good heuristic is to get as much sunlight as your ancestors from 1000 years ago would have gotten.
You don't need to reconstruct all the neurons and synapses, though. If something behaves almost exactly as I would behave, I'd say that thing is me. 20 years of screenshots 8 hours a day is around 14% of a waking lifetime, which seems like enough to pick out from mindspace a mind that behaves very similarly to mine.
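The arithmetic, assuming roughly 16 waking hours a day over a 70-year waking lifetime:

recorded = 20 * 365 * 8   # hours of screenshots: 20 years at 8 hours/day
waking = 70 * 365 * 16    # waking hours in a 70-year life
print(recorded / waking)  # ~0.14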
Confidence is the alief that you have high value, and it induces confidence-signalling behaviors. People judge your value partly by actually looking at your value, but they also take the shortcut of just directly looking at whether you display those signals. So you can artificially inflate your status by having incorrect confidence, i.e. alieving that you're more valuable than you really are. This is called hubris, and when people realize you're doing it they reduce their valuation of you to compensate. (Or sometimes they flip that modus tollens into a modus ponens, and you become a cult leader. So it's polarizing.)
But there's a prosocial lie saying you should have incorrect underconfidence, i.e. that you should alieve that you have lower value than you actually do. This is called humility, and it's prosocial because it allows the rest of society to exploit you, taking more value from you than they're actually paying for. Since it's prosocial, society paints humility as a virtue, and practically all media, religious doctrine, fiction, etc. repeatedly insists that humility (and good morals in general) will somehow cause you to win in the end. You have to really search to find media where the evil, confident guys actually crush the good guys. So if this stuff has successfully indoctrinated you (and if you're a nerd, it probably has), then you should adjust your confidence upwards to compensate, and this will feel like hubris relative to what society encourages.
Also, high confidence has lower drawbacks nowadays than what our hindbrains were built to expect. People share less background, so it's pretty easy to reinvent yourself. You don't know people for as long, so you have less time for people to get tired of your overconfidence. You're less likely to get literally killed for challenging the wrong people. So a level of confidence which is optimal in the modern world will feel excessive to your hindbrain.
Allow the AI to reconstruct your mind and memories more accurately and with less computational cost, hopefully; the brain scan and DNA alone probably won't give much fidelity. They're also fun from a self-tracking data analysis perspective, and they let you remember your past better.
- Getting an air filter can gain you ~0.6 years of lifespan, plus some healthspan. Here's /u/Louie's post where I saw this.
- Lose weight. Try Shangri-La, and if that doesn't work consider the EC stack or a ketogenic diet.
- Seconding James_Miller's recommendation of vegetables, especially cruciferous vegetables (broccoli, bok choy, cauliflower, collard greens, arugula...) Just eat entire plates of the stuff often.
- Write a script that takes a screenshot and a webcam picture every 30 seconds (a rough sketch is below). Save the files to an external hard drive. After a few decades, bury the drive, along with some of your DNA and possibly brain scans, somewhere it'll stay safe for a couple hundred years or longer. This is a pretty long shot, but there's a chance that a future FAI will find your horcrux and use it to resurrect you. I think this is a better deal than cryonics since it costs so much less.
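Here's a minimal sketch of such a script in Python, assuming the mss and opencv-python packages and an external drive mounted at /mnt/backup (all paths and intervals are placeholders to adjust):

import time
from datetime import datetime
import cv2    # opencv-python, for the webcam
import mss    # mss, for screenshots

OUT = "/mnt/backup/horcrux"   # hypothetical mount point of the external drive
camera = cv2.VideoCapture(0)  # default webcam

with mss.mss() as sct:
    while True:
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        sct.shot(output=f"{OUT}/screen-{stamp}.png")  # screenshot of the primary monitor
        ok, frame = camera.read()                     # grab one webcam frame
        if ok:
            cv2.imwrite(f"{OUT}/webcam-{stamp}.jpg", frame)
        time.sleep(30)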
Which country should software engineers emigrate to?
I'm going to research everything, build a big spreadsheet, weight the various factors, etc. over the next while, so any advice that saves me time or improves the accuracy of my analysis is much appreciated. Are there any non-obvious considerations here?
There are some lists of best countries for software developers, and for expats in general. These consider things like software dev pay, cost of living, taxes, crime, happiness index, etc. Those generally recommend Western Europe, the US, Canada, Israel, Australia, New Zealand, Singapore, Hong Kong, Mexico, India. Other factors I'll have to consider are emigration difficulty and language barriers.
The easiest way to emigrate is to marry a local. Otherwise, emigrating to the US requires either paying $50k USD, or working in the US for several years (under a salary reduction and risk that are about as bad as paying $50k), and other countries are roughly as difficult. I'll have to research this separately for each country.
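For the spreadsheet step, here's a minimal pandas sketch of the weighting; the countries, factor scores, and weights below are placeholders rather than my actual estimates:

import pandas as pd

scores = pd.DataFrame({   # 0-10 scores per factor (placeholder numbers)
    "pay": [9, 7, 6], "cost_of_living": [4, 6, 7], "taxes": [6, 5, 7],
    "immigration_ease": [3, 6, 5], "language": [10, 9, 6],
}, index=["US", "Canada", "Germany"])
weights = pd.Series({"pay": 0.35, "cost_of_living": 0.2, "taxes": 0.15,
                     "immigration_ease": 0.2, "language": 0.1})
print((scores * weights).sum(axis=1).sort_values(ascending=False))  # weighted score per country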
Yes, the effect of diets on weight-loss is roughly mediated by their effect on caloric intake and expenditure. But this does not mean that "eat fewer calories and expend more" is good advice. If you doubt this, note that the effect of diets on weight-loss is also mediated by their effects on mass, but naively basing our advice on conservation of mass causes us to generate terrible advice like "pee a lot, don't drink any water, and stay away from heavy food like vegetables".
The causal graph to think about is "advice → behavior → caloric balance → long-term weight loss", where only the advice node is modifiable when we're deciding what advice to give. Behavior is a function of advice, not a modifiable variable. Empirically, the advice "eat fewer calories" doesn't do a good job of making people eat fewer calories. Empirically, advice like "eat more protein and vegetables" or "drink olive oil between meals" does do a good job of making people eat fewer calories. The fact that low-carb diets "only" work by reducing caloric intake does not mean that low-carb diets aren't valuable.
Here's an SSC post and ~700 comments on cultural evolution: http://slatestarcodex.com/2015/07/07/the-argument-from-cultural-evolution/
Replace "if you don't know" with "if you aren't told". If you believe 80% of them are easy, then you're perfectly calibrated as to whether or not a question is easy, and the apparent under/overconfidence remains.
About that survey... Suppose I ask you to guess the result of a biased coin which comes up heads 80% of the time. I ask you to guess 100 times, of which ~80 times the right answer is "heads" (these are the "easy" or "obvious" questions) and ~20 times the right answer is "tails" (these are the "hard" or "surprising" questions). Then the correct guess, if you aren't told whether a given question is "easy" or "hard", is to guess heads with 80% confidence, for every question. Then you're underconfident on the "easy" questions, because you guessed heads with 80% confidence but heads came up 100% of the time. And you're overconfident on the "hard" questions, because you guessed heads with 80% confidence but got heads 0% of the time.
So you can get apparent under/overconfidence on easy/hard questions respectively, even if you're perfectly calibrated, if you aren't told in advance whether a question is easy or hard. Maybe the effect Yvain is describing does exist, but his post does not demonstrate it.
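A quick simulation of this, with a guesser who always says heads at 80% confidence:

import numpy as np

rng = np.random.default_rng(0)
heads = rng.random(100_000) < 0.8      # "heads" is the right answer 80% of the time
accuracy_easy = heads[heads].mean()    # "easy" questions (heads was right): 100% accurate, stated 80%
accuracy_hard = heads[~heads].mean()   # "hard" questions (tails was right): 0% accurate, stated 80%
overall = heads.mean()                 # overall: ~80% accurate, i.e. perfectly calibrated
print(accuracy_easy, accuracy_hard, overall)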
Every child has both a mother and a father, and there are about as many men as women, so the mean number of children is about the same for males as for females. But there are more childless men than childless women, because polygyny is more common than polyandry, ultimately because of Bateman's principle.
Depends on your feature extractor. If you have a feature that measures similarity to previously-seen films, then yes. Otherwise, no. If you only have features measuring what each film's about, and people like novel films, then you'll get conservative predictions, but that's not really the same as learning that novelty is good.
Good point. I may be thinking about this wrong, but I think Deutsch self-consistent time travel would still vastly concentrate measure in universes where time travel isn't invented, because unless the measures are exactly correct, the universe is inconsistent. Whereas Novikov self-consistent time travel makes all universes with paradoxes inconsistent, Deutsch self-consistent time travel merely makes the vast majority of them inconsistent. It's a bit like quantum suicide: creating temporal paradoxes seems to work because it concentrates your measure in universes where it does work, but it also vastly reduces your total measure.
Playing devil's advocate: Archaic spelling rules allow you to quickly gauge other people's intelligence, which is useful. It causes society to respect stupid people less, by providing objective evidence of their stupidity.
But I don't actually think the benefits outweigh the costs there, and the signal is confounded by things like being a native English-speaker.
If humanity did this, at least some of us would still want to spread out in the real universe, for instance to help other civilizations. (Yes, the world inside the computer is infinitely more important than real civilizations, but I don't think that matters.)
Also, if these super-Turing machines are possible, and the real universe is finite, then we are living in a simulation with probability 1, because you could use them to simulate infinitely many observer-seconds.
Suppose backward time travel is possible. If so, it's probably of the variety where you can't change the past (i.e. Novikov self-consistent), because that's mathematically simpler than time travel which can modify the past. In almost all universes where people develop time travel, they'll counterfactualize themselves by deliberately or accidentally altering the past, i.e. they'll "cause" their universe-instance to not exist in the first place, because that universe would be inconsistent if it existed. Therefore in most universes that allow time travel and actually exist, almost all civilizations will fail to develop time travel, which might happen because those civilizations die out before they become sufficiently technologically advanced.
Perhaps this is the Great Filter. It would look like the Great Filter is nuclear war or disease or whatever, but actually time-consistency anthropics are "acausing" those things.
This assumes that either most civilizations would discover time travel before strong AI (in the absence of anthropic effects), or strong AI does not rapidly lead to a singleton. Otherwise, the resulting singleton would probably recognize that trying to modify the past is acausally risky, so the civilization would expand across space without counterfactualizing itself, so time-consistency couldn't be the Great Filter. They would probably also seek to colonize as much of the universe as they could, to prevent less cautious civilizations from trying time-travel and causing their entire universe to evaporate in a puff of inconsistency.
This also assumes that a large fraction of universes allow time travel. Otherwise, most life would just end up concentrated in those universes that don't allow time travel.
Interesting. Very small concentrations of the chemical would have to sterilize practically everyone they contacted - else it would just cause humanity to very rapidly evolve resistance, or maybe kill off the developed world.
Reminds me of the decline in testosterone levels over the past couple decades, which might be due to endocrine-disrupting compounds in the water supply and in plastics and food, but which hasn't been enough to sterilize much of the population.
I think two-boxing in your modified Newcomb is the correct answer. In the smoking lesion, smoking is correct, so there's no contradiction.
One-boxing is correct in the classic Newcomb because your decision can "logically influence" the fact of "this person one-boxes". But your decision in the modified Newcomb can't logically influence the fact of "this person has the two-boxing gene".
Random thing that I can't recall seeing on LW: Suppose A is evidence for B, i.e. P(B|A) > P(B). Then by Bayes, P(A|B) = P(A)P(B|A)/P(B) > P(A)P(B)/P(B) = P(A), i.e. B is evidence for A. In other words, the is-evidence-for relation is symmetric.
For instance, this means that the logical fallacy of affirming the consequent (A implies B, and B is true, therefore A) is actually probabilistically valid. "If Socrates is a man then he'll probably die; Socrates died, therefore it's more likely he's a man."
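A numeric check, with made-up numbers for the Socrates example:

p_man = 0.5             # prior P(Socrates is a man)  (made-up numbers)
p_die_given_man = 0.99  # P(dies | man)
p_die_given_not = 0.10  # P(dies | not a man)
p_die = p_man*p_die_given_man + (1 - p_man)*p_die_given_not
p_man_given_die = p_man*p_die_given_man / p_die  # Bayes
print(p_die_given_man > p_die)   # True: "man" is evidence for "dies"
print(p_man_given_die > p_man)   # True: so "dies" is evidence for "man"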
Maybe the differentiable physics we observe is just an approximation of a lower-level non-differentiable physics, the same way Newtonian mechanics is an approximation of relativity.
If physics is differentiable, that's definitely evidence, by symmetry of is-evidence-for. But I have no idea how strong this evidence is because I don't know the distribution of the physical laws of base-level universes (which is a very confusing issue). Do "most" base-level universes have differentiable physics? We know that even continuous functions "usually" aren't differentiable, but I'm not sure whether that even matters, because I have no idea how it's "decided" which universes exist.
Also, maybe intelligence is less likely to arise in non-differentiable universes. But if so, it's probably just a difference of degree of probability, which would be negligible next to the other issues, which seem like they'd drive the probability to almost exactly 0 or 1.
Perhaps they create lots of children, let most of them die shortly after being born (perhaps by fighting each other), and then invest heavily in the handful that remain. Once food becomes abundant, some parents elect not to let most of their children die, leading to a population boom.
In fact, if you squint a little, humans already demonstrate this: men produce large numbers of sperm, which compete to reach the egg first. Perhaps that would have led to exactly this Malthusian disaster, if it weren't for the fact that women only have a single egg to be fertilized, and sperm can't grow to adulthood on their own.
Agreed. But the Great Filter could consist of multiple Moderately Great Filters, of which the Malthusian trap could be one. Or perhaps there could be, say, only n Quite Porous Filters which each eliminate only 1/n of civilizations, but that happen to be MECE (mutually exclusive and collectively exhaustive), so that together they eliminate all civilizations.
I think you may have oversimplified bio-engineering to suggest it could arise in such a way before advanced technology.
I think it could be accomplished with quite primitive technology, especially if the alien biology is robust, and if you just use natural brains rather than trying to strip them down to minimize food costs (which would also make them more worthy of moral consideration). Current human technology is clearly sufficient: humans have already kept isolated brains alive, and used primitive biological brains to control robots. If you connect new actuators or sensors to a mammalian brain, it uses them just fine after a short adaptation period, and it seems likely alien brains would work the same.
I'd agree that the brains of very primitive animals, or brains that have been heavily stripped down specifically to, say, operate a traffic light, aren't really worthy of moral consideration. But you'd probably need more intelligent brains for complex tasks like building cars or flying planes, and those probably are worthy of moral consideration - stripping out sapience while leaving sufficient intelligence might be impossible or expensive.
Could Malthusian tragedy be the Great Filter? Meaning, maybe most civilizations, before they develop AGI or space colonization, breed so much that everyone is too busy trying to survive and reproduce to work on AGI or spaceflight, until a supernova or meteor or plague kills them off.
Since humans don't seem to be headed into this trap, alien species who do fall into this trap would have to differ from humans. Some ways this might happen:
- They're r-selected like insects, i.e. their natural reproduction process involves creating lots of children and then allowing most to die. Once technology makes resources abundant, most of the children survive, leading to an extreme population boom. This seems unlikely, since intelligence is more valuable to species that have few children and invest lots of resources in each child.
- Their reproduction mechanism does not require a 9-month lead time like humans' does; maybe they take only one day to produce a small egg, which then grows externally to the body. This would mean one wealthy alien that wants a lot of children could very quickly create very many children, rapidly causing the population's mean desire-for-children to skyrocket (since that desire is heritable).
- Their lifespans are shorter, so evolution more quickly "realizes" that there's an abundance of resources, and thus the aliens evolve to reproduce a lot. The shorter lifespan would also produce a low ceiling on technological progress, since children would have to be brought up to speed on current science before they can discover new science. This seems unlikely because intelligence benefits from long lifespans.
- Evolution programs them to desperately want to maximize the number of fit children they have, even before they develop civilization. Evolution didn't do this to humans - why not?
Human technological progress doesn't seem to be as fast as it can be, though, which suggests that there's a lot of "slack" time in which civilizations can develop technologically before evolving to be more Malthusian.
Some of the disgust definitely derives from the imagery, but I think much of it is valid too. Imagine the subjective experience of the car-builder brain. It spends 30 years building cars. It has no idea what cars do. It has never had a conversation or a friend or a name. It has never heard a sound or seen itself. When it makes a mistake it is made to feel pain so excruciating it would kill itself if it could, but it can't because its actuators' range of motion is too limited. This seems far worse than the lives of humans in our world.
By "would these civilizations develop biological superintelligent AGI" I meant more along the lines of whether such a civilization would be able to develop a single mind with general superintelligence, not a "higher-order organism" like science. Though I think that depends on too many details of the hypothetical world to usefully answer.
I'm surprised that nobody's pointed out the dual phenomenon of "yay fields", whereby a pleasurable stimulus's affect is transferred to its antecedents.
The field of behavior modification calls this "conditioning", and "higher-order conditioning" if the chain has more than two stimuli.
First, I'd predict that much of the observed correlation between technical proficiency and wealth is just because both of them require some innate smarts. In general, I'm suspicious of claims that some field develops "transferable reasoning abilities", partly because people keep using that to rationalize their fiction-reading or game-playing or useless college degrees. I'm worried that math and physics and theoretical CS are just nerd-snipery / intellectual porn, and we're trying to justify spending time on them by pretending they're in line with our "higher" values (like improving the world), not only with our "lower" values (like intellectual enjoyment).
Second, if technical proficiency does build transferable reasoning ability, I'd expect the overall benefit to be small, much smaller than from, say, spending that time working on whatever contributes most to your goals (which will usually not be building technical proficiency, because the space of all actions is big). You should always be trying to take the optimal action, not a random "beneficial" action, or else you'll spend your time mowing lawns for $10/hour.
Edit: I think this comment is too hostile. Sorry. I do agree that learning technical skills is often worthwhile.
Turing machines are a big deal because when you change the definition of a Turing machine (by letting it use arbitrarily many symbols, or giving it multiple tapes or a multi-dimensional tape, or letting it operate nondeterministically, or making its tape finite in one direction...) it usually can still solve exactly the same set of problems, which strongly suggests that Turing completeness is a "natural concept". A lot of computational systems are Turing-complete, and all of the discrete computational systems we've built are no more powerful than Turing machines. Physical reality might also be Turing-computable (Church-Turing-Deutsch principle), though we don't know that for sure (for instance, physics might involve infinite-precision calculations on non-computable numbers).
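For concreteness, here's a minimal single-tape Turing machine simulator (the particular machine, which increments a binary number, is just an example I picked):

def run_tm(tape, transitions, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]  # (state, symbol) -> (state', write, L/R)
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape.get(i, blank) for i in range(min(tape), max(tape) + 1)).strip(blank)

increment = {  # move right to the end of the number, then carry leftward
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt", "1", "L"),
    ("carry", "_"): ("halt", "1", "L"),
}
print(run_tm("1011", increment))  # -> 1100

Equivalent machines can be written under all the variant definitions above, which is the point: the variations don't change what's computable.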
Upgraded reflective senses would be really cool. For instance:
- Levels of various interesting hormones like cortisol, epinephrine, testosterone, etc. For instance, cortisol levels are higher in the morning than in the evening, but this is not obvious. (Or am I lying to prevent hindsight bias?)
- Various things measured by an implanted EEG. For instance, it would be cool to intuitively know the difference between beta and gamma waves.
- Metabolism-related things like blood insulin, glucose, ketones.
- Galvanic skin response. Heart rate variability.
We already have weak senses for most of these, but they're not always salient. Having a constant sense of them would allow you to do biofeedback-like training all the time.
Clicking on the tag "open thread" on this post only shows open threads from 2011 and earlier, at "http://lesswrong.com/tag/open_thread/". If I manually enter "http://lesswrong.com/r/discussion/tag/open_thread/", then I get the missing open threads. The problem appears to be that "http://lesswrong.com/tag/whatever/" only shows things posted to Main. "http://lesswrong.com/r/all/tag/open_thread/" seems to behave the same as "http://lesswrong.com/tag/open_thread/", i.e. it only shows things posted to Main, despite the "/r/all". Going to "article navigation → by tag" also goes to an open thread from 2011, so it seems to also ignore things posted to Discussion.
I don't think a shutdown is even remotely likely. LW is still the Schelling point for rationalist discussion; Roko-gate will follow us regardless; SSC/Gwern.net are personal blogs with discussion sections that are respectively unusable and nonexistent. CFAR is still an IRL thing, and almost all of MIRI/CFAR's fans have come from the internet.
Agreed that LW is slowly losing steam, though. Not sure what should be done about it.
The null hypothesis is always false, and effect sizes are never zero. When he says it's zero you should probably interpret zero as "too small to care about" or "much smaller than most people think". I'll bet the studies didn't say the effect was literally zero, they just said that the effect isn't statistically significant, which is really just saying the effect and the sample size were too small to pass their threshold.
People say a lot of things that aren't literally true, because adding qualifiers everywhere gets annoying. Of course if he doesn't realize that there are implicit qualifiers, then he's mistaken.
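To illustrate the "too small to pass their threshold" point with a hypothetical simulation: a small but real effect with modest samples usually fails to reach significance.

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
pvals = [ttest_ind(rng.normal(0.1, 1, 30), rng.normal(0.0, 1, 30)).pvalue
         for _ in range(1000)]          # 1000 studies of a true 0.1-SD effect, n=30 per group
print(np.mean(np.array(pvals) < 0.05))  # only a small minority come out "significant"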
That study is observational, not experimental. Maybe genes for disagreeableness make parents abuse their children, and they pass those genes on to their offspring. Probably both nature and nurture contribute.
Probably gotten most of the responses it was going to get, so here's a scatter plot:
People seem to think it's worse the more they know about it (except those who know nothing seem slightly more pessimistic than those who know only a little).
Made by running this in IPython (after "import pandas as pd", "from numpy.random import randn", and "import matplotlib.pyplot as plt" in .pythonstartup):
!sed "/^#/d" poll.csv >poll-clean.csv
df = pd.read_csv("poll-clean.csv", names=["user", "pollid", "response", "date"])
responses = df.pivot_table(values="response", index="user", columns="pollid")  # one row per user, one column per poll
jittered = responses + 0.1*randn(*responses.shape)  # jitter so overlapping answers are visible
jittered.plot(kind="scatter", x=906, y=907)
plt.xlabel("Net loss.....Net benefit")
plt.ylabel("Nothing.....Expert")
Agreed, considering "EEA" to mean the African savannah. So for instance if your ancestry is European and you're currently living in California you don't need to spend very much time outside, and if you're dark-skinned and living at a high latitude you should try to get lots of sunlight.
In the Sleeping Beauty problem, SIA and SSA disagree on the probability that it's Monday or Tuesday. But if we have to bet, then the optimal bet depends on what Ms Beauty is maximizing - the number of bet-instances that are correct, or whether the bet is correct, counting the two bets on different days as the same bet. Once the betting rules are clarified, there's always only one optimal way to bet, regardless of whether you believe SIA or SSA.
Moreover, one of those bet scenarios leads to bets that give "implied beliefs" that follow SIA, and the other gives "implied beliefs" that follow SSA. This suggests that we should taboo the notion of "beliefs", and instead talk only about optimal behavior. This is the "phenomenalist position" on Sleeping Beauty, if I understand correctly.
Question 1: Is this correct? Is this roughly the conclusion all those LW discussions a couple years ago came to?
Question 2: Does this completely resolve the issue, or must we still decide between SIA and SSA? Are there scenarios where optimal behavior depends on whether we believe SIA or SSA even after the exact betting rules have been specified?
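Here's a minimal simulation of the two scoring rules I have in mind, using the standard bet on the coin rather than on the day, for simplicity:

import numpy as np

rng = np.random.default_rng(0)
heads = rng.random(100_000) < 0.5   # one fair coin flip per experiment
awakenings = np.where(heads, 1, 2)  # heads: woken once; tails: woken twice
per_awakening = (heads * awakenings).sum() / awakenings.sum()  # fraction of awakenings with heads: ~1/3
per_experiment = heads.mean()                                  # fraction of experiments with heads: ~1/2
print(per_awakening, per_experiment)

If every awakening's bet is scored, the break-even credence in heads is ~1/3 (the SIA answer); if each experiment is scored once, it's ~1/2 (the SSA answer).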
Is transcranial direct current stimulation technology yet at the point where someone who starts using it has higher expected gains than costs? I.e., should more LWers be using it? You can comment and/or answer this poll:
Do you think the average LWer would get a net benefit from using tDCS, taking into account the benefits, costs of equipment, risks, etc.? [pollid:906] How much do you know about this topic? [pollid:907]
I've been doing the same thing for ~40 minutes of daily peak sunlight, because of heuristics ("make your environment more like the EEA") and because there's evidence it improves mood and cognitive functioning (e.g.). The effect isn't large enough to be noticeable. Sunlight increases risk of skin cancer, but decreases risks of other, less-survivable cancers more; I'm not sure how much of the cancer reduction you could get from taking D3 and not getting sunlight. I guess none of that actually answers your question.
Raymond Smullyan calls these sorts of puzzles (where characters' ability to solve the puzzle is used by the reader to solve the puzzle) "metapuzzles". There are some more examples in his books.
- Go to his first article, then in the "Article Navigation" menu use the "by author" arrows.
- Go to lesswrong.com/user/Yvain, go to the last page (by clicking "next" or by changing the URL in some way), then go back one page at a time.
Haven't tested either of those, but they should work.
Thanks for the feedback! I'm intending to go into industry, not academia, but this is still helpful.